# Overview
Ensemble combined with LDA is effective at predicting age from gene expression data. However, this method is prone to the batch problem, which may be caused by differences in cell-culture technique that shift the mean gene expression level of cells between batches.
kTSP is a potential solution to the batch problem because it uses relative gene expression levels rather than the raw quantitative data. It also involves significantly fewer features, improving both the speed of training and prediction and the interpretability of the features.
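For intuition, here is a minimal sketch of the top-scoring-pair idea (the gene names and expression values are made up): the decision rule depends only on which gene of a pair is expressed higher, so an additive batch shift leaves the vote unchanged.

```python
def tsp_vote(sample, gene_a, gene_b):
    """Vote 1 if gene_a is expressed above gene_b in this sample, else 0."""
    return 1 if sample[gene_a] > sample[gene_b] else 0

batch1 = {'GENE_A': 5.0, 'GENE_B': 3.0}
# the same cell profiled in another batch with a +2.0 additive shift on every gene
batch2 = {g: v + 2.0 for g, v in batch1.items()}

print(tsp_vote(batch1, 'GENE_A', 'GENE_B'))  # 1
print(tsp_vote(batch2, 'GENE_A', 'GENE_B'))  # 1 (the relative order is unchanged)
```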
All training data in this notebook come from `gene_labelled_data.csv`.
# KTSP classifier and feature selection
This notebook completes three tasks: 10-fold CV repeated 3 times for predicting age with a kTSP classifier on the entire dataset; feature selection of gene pairs by the kTSP algorithm for use in the ensemble; and prediction of age, using a kTSP classifier, on the data reserved from feature selection.
# Set-up
rpy2 version: 3.4.5
\
R version: 4.1.0
\
multiClassPairs version: 0.4.3
\
switchBox version: 1.28.0
\
python version: 3.8.8
\
scikit-learn version: 0.24.1
\
seaborn version: 0.11.1
```
%run age_predictors.py
#Importing interfaces and packages
#Activate pandas2ri, numpy2ri for automatic transformation of python data structure to R
import pandas as pd
import rpy2
import rpy2.situation
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
import rpy2.robjects.packages as rpackages
from rpy2.robjects.vectors import StrVector
from sklearn.model_selection import train_test_split
pandas2ri.activate()
from rpy2.robjects import r
from sklearn.utils import shuffle
import rpy2.robjects.numpy2ri
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold
rpy2.robjects.numpy2ri.activate()
print(rpy2.situation.get_r_home())
print(rpy2.__version__)
#Import R biocManager packages
base = importr('base')
switchBox = importr('switchBox')
multiclass = importr('multiclassPairs')
#This method loads the data
#Set uid, age, meta as indices
def load_data(filename, transpose=False):
ending = filename.split('.')[-1]
if ending == 'csv':
data = pd.read_csv(filename,header=None,index_col=None)
elif ending == 'xlsx':
data = pd.read_excel(filename,header=None,index_col=None)
else:
raise TypeError('dont know what file type this is')
if transpose:
data = data.T
# make sure the index columns are named correctly,
# otherwise use whatever the first row (header) contains for gene/transcript names
cols = data.iloc[0,:]
cols[0] = 'uid'
cols[1] = 'age'
cols[2] = 'meta'
data.columns = cols
# get the data, not the header now that we formed it
data = data.iloc[1:,:]
# make sure the age comes in as integer years... if you need to do floating point change this
data.iloc[:,1] = data.iloc[:,1].astype(int)
data = data.set_index(['uid','age','meta']).astype(float)
return data
data = load_data('gene_labelled_data.csv')
crossval = RepeatedKFold(n_repeats=3, n_splits=10)
#r_train_data = robjects.conversion.py2rpy(train_data.transpose())
#TODO: Modify to get labels later
def get_labels(data_for_labels):
labels_age = data_for_labels.index.get_level_values('age').values
#print(len(data_for_labels))
#display(train_labels_age)
labels = []
#ages = labels_age[train]
for age in labels_age:
if age>=1 and age<20:
labels.append('age_1_20')
elif age>=20 and age<40:
labels.append('age_20_40')
elif age>=40 and age<60:
labels.append('age_40_60')
elif age>=60 and age<80:
labels.append('age_60_80')
elif age>=80 and age<100:
labels.append('age_80_100')
#display(train_labels)
labels = np.array(labels)
print(len(labels))
return labels
#r_train_labels = robjects.conversion.py2rpy(train_labels)
def get_age(results):
pred = []
for result in results:
pred.append(result[5])
return pred
#From stackoverflow
def plot_confusion_matrix(true, pred):
#Convert to a method
cm = confusion_matrix(true, pred)
labels = ['age_1_20', 'age_20_40', 'age_40_60', 'age_60_80', 'age_80_100']
fig = plt.figure(figsize=(8, 7))
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, fmt = 'g');
# labels, title and ticks
ax.set_xlabel('Predicted', fontsize=20)
ax.xaxis.set_label_position('bottom')
plt.xticks(rotation=90)
ax.xaxis.set_ticklabels(labels, fontsize = 10)
ax.xaxis.tick_bottom()
ax.set_ylabel('True', fontsize=20)
ax.yaxis.set_ticklabels(labels, fontsize = 10)
plt.yticks(rotation=0)
plt.title('Refined Confusion Matrix', fontsize=20)
plt.savefig('ConMat24.png')
plt.show()
```
# Part I: kTSP classifier on entire data
This segment of code implements a full kTSP classifier. It trains and predicts on the entire dataset using 10-fold CV repeated 3 times, and generates a confusion matrix of the predicted versus true classes.
Notice that kTSP is a classification algorithm, so quantitative predictions of age are not generated. The sample ages are divided into 5 classes: 1-20, 20-40, 40-60, 60-80, and 80-100 years old. Every prediction falls into one of these 5 classes.
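The binning described above can also be expressed compactly with `pandas.cut`; the following is a sketch equivalent to the `get_labels` helper defined earlier (toy ages, not the real data):

```python
import pandas as pd

ages = pd.Series([5, 25, 45, 65, 85])  # toy ages
bins = [1, 20, 40, 60, 80, 100]        # right-open intervals [1,20), [20,40), ...
labels = ['age_1_20', 'age_20_40', 'age_40_60', 'age_60_80', 'age_80_100']
print(pd.cut(ages, bins=bins, labels=labels, right=False).tolist())
# ['age_1_20', 'age_20_40', 'age_40_60', 'age_60_80', 'age_80_100']
```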
```
#Initialize empty data structure to store predicted and true age classes
true_age = []
pred_age = []
#10-fold CV repeated 3 times
for train, test in crossval.split(data):
#Get the labels for training
train_labels = get_labels(data.iloc[train,:])
#Save true age classes
true_age.append(get_labels(data.iloc[test,:]))
#Get test data
test_data = data.iloc[test,:]
#Create object for training kTSP classifier
object = r['ReadData']((data.iloc[train,:]).transpose(), train_labels)
#Filter genes and train kTSP classifier to get representative gene pairs
filtered_genes = r['filter_genes_TSP'](data_object = object, filter = "one_vs_rest",
platform_wise = False, featureNo = 1000, UpDown = True)
classifier = r['train_one_vs_rest_TSP'](data_object = object, filtered_genes = filtered_genes,
include_pivot = False,one_vs_one_scores = False,
platform_wise_scores = False,seed = 1234, verbose = False)
#Use one-vs-rest scheme to do multiclassification using kTSP
raw_results_test = r['predict_one_vs_rest_TSP'](classifier = classifier, Data = test_data.transpose(),
tolerate_missed_genes = False, weighted_votes = True)
#Unravel R data structure to get the prediction of age class
results_test = get_age(raw_results_test)
pred_age.append(results_test)
#Plot the confusion matrix
pred = flatten(pred_age)
true = flatten(true_age)
plot_confusion_matrix(true, pred)
```
# Part II: Feature selection using kTSP algorithm
This segment of code completes feature selection for the ensemble using the filter method. 33 samples are chosen randomly from the full dataset, and the kTSP algorithm is run on these 33 samples to select representative gene expression pairs.
A new csv file is generated from the selected features. Each pair is a column of the csv file named in the form "geneA>geneB". For each person in the reserved 100 samples, if the expression of geneA is greater than that of geneB, 1 is put in the "geneA>geneB" column; otherwise, 0 is put there. Following this rule, a binary expression matrix is generated.
The csv file containing the binary expression matrix will be fed into the ensemble. (However, the binary expression matrix does not work well with the LDA ensemble. One possible reason is that the binary expression matrix does not preserve the Gaussian distribution of the data. Another is that too few samples were used in feature selection, so gene pairs that are not representative of age were chosen.)
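The binarization rule described above can also be sketched in vectorized pandas form (toy gene names and values; the code in this part builds the same kind of matrix with an explicit loop):

```python
import pandas as pd

# toy expression matrix: rows are samples, columns are genes
expr = pd.DataFrame({'GENE_A': [5.0, 1.0], 'GENE_B': [3.0, 4.0]})
pairs = [('GENE_A', 'GENE_B')]

# one 0/1 column per pair, named "geneA>geneB"
binary = pd.DataFrame({
    f'{a}>{b}': (expr[a] > expr[b]).astype(int) for a, b in pairs
})
print(binary['GENE_A>GENE_B'].tolist())  # [1, 0]
```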
```
#Prepare data for filter method
data = shuffle(data)
#Get train data and feature selection set
train = range(33,133)
FS = range(0,33)
FS_labels = get_labels(data.iloc[FS,:])
#FS_age = (get_labels(data.iloc[FS,:]))
FS_data = data.iloc[FS,:]
#Get reserved data
U_data = data.iloc[train, :]
display(FS_data)
np.unique(U_data.dtypes, return_counts=True)
U_data.index.get_level_values('meta')
#Train the kTSP classifier
object = r['ReadData'](FS_data.transpose(), FS_labels)
filtered_genes = r['filter_genes_TSP'](data_object = object, filter = "one_vs_rest", platform_wise = False, featureNo = 1000, UpDown = True)
classifier = r['train_one_vs_rest_TSP'](data_object = object, filtered_genes = filtered_genes,
include_pivot = False,one_vs_one_scores = False,
platform_wise_scores = False,seed = 1234, verbose = False)
#From Stackoverflow:
#This method unravels the kTSP classifier that is stored as convoluted R vector.
def r_list_to_py_dict(r_list):
converted = {}
for name in r_list.names:
val = r_list.rx(name)[0]
if isinstance(val, robjects.vectors.DataFrame):
converted[name] = robjects.conversion.rpy2py(val)  # rpy2 3.x conversion API
elif isinstance(val, robjects.vectors.ListVector):
converted[name] = r_list_to_py_dict(val)
elif isinstance(val, robjects.vectors.FloatVector) or isinstance(val, robjects.vectors.StrVector):
if len(val) == 1:
converted[name] = val[0]
else:
converted[name] = list(val)
else: # single value
converted[name] = val
return converted
#R to python data structure
py_classifier = r_list_to_py_dict(classifier)
display(py_classifier) #TODO: Check on how the genes are paired
#Unravel python data structure to list
list_pairs = list(py_classifier.values())
display(list_pairs)
#Unravel python list
type(list_pairs)
display(list_pairs[0])
#Unravel and cast python list to panda dataFrame
df = pd.DataFrame(list_pairs[0])
display(df.columns)
#Unravel panda dataFrame
display(df['age_20_40'].TSPs)
#This method unravels the kTSP classifier and gets all the representative gene pairs
#Notice that still need to read the source code of switchBox and multiclassPairs to
#ensure that the pairs are parsed in the correct way
#Each gene pair is stored in a tuple
#All the tuples are stored in a list
def get_pairs (classifier):
#Unravel the R classifier into a python data structure
py_classifier = r_list_to_py_dict(classifier)
list_pairs = list(py_classifier.values())
df = pd.DataFrame(list_pairs[0])
pairs = []
for age_class in df.columns:
kTSP = df[age_class].TSPs
i = 0
while (i < len(kTSP)):
pair = (kTSP[i], kTSP[i+1])
pairs.append(pair)
i += 2
#Remove possible duplicates and cast back to list
pairs = list(set(pairs))
return pairs
#Get the gene pairs
pairs = get_pairs(classifier)
display(pairs)
display(len(pairs))
#This method gets the names of the gene pairs
def get_pairs_name(pairs):
names = []
for pair in pairs:
name = ">".join([str(pair[0]), str(pair[1])])
names.append(name)
return names
#Get names of all the pairs
pair_names = get_pairs_name(pairs)
display(pair_names)
#Generate a dummy data frame that contains 0 for all entries
#The columns are the names of each gene pair
dummy_binary = [0] * len(U_data)
df_bi = {}
for pair in pair_names:
df_bi[pair] = dummy_binary
binary_xdata = pd.DataFrame(df_bi)
display(binary_xdata)
#For each person compare the expression level of the genes in each gene pair
#If geneA>geneB of a person as specified in the column, 1 is put into the according entry
#Iterate through each person
for i in range(0, len(U_data)):
#Iterate through each pair for each person
for j in range(0, len(pairs)):
#Get gene1 and gene2
gene1 = pairs[j][0]
gene2 = pairs[j][1]
#Compare expression level of gene1 and gene2
if U_data.iloc[i][gene1] > U_data.iloc[i][gene2]:
binary_xdata.at[i, pair_names[j]] = 1
display(binary_xdata)
#Fiddle with the indices to align with the requirements for csv file in ensemble
cols = list(binary_xdata.columns)
age = U_data.index.get_level_values('age').values
uid = U_data.index.get_level_values('uid').values
meta = U_data.index.get_level_values('meta').values
binary_xdata['age'] = age
binary_xdata['uid'] = uid
binary_xdata['meta'] = meta
binary_xdata = binary_xdata.reindex(columns=(['uid','age','meta'] + cols))
display(binary_xdata)
#Export the csv file containing the binary matrix
binary_xdata.to_csv('binary_gene_labels.csv', index=False)
```
# Part 3: kTSP classifier with filter method feature selection
This segment of code uses the kTSP classifier to predict on the 100 samples reserved in the previous part, to serve as a baseline comparison.
```
display(classifier)
#Try removing indices, which are causing type errors
raw_ktsp_results = r['predict_one_vs_rest_TSP'](classifier = classifier, Data = U_data.transpose(),
tolerate_missed_genes = False, weighted_votes = True)
#Visualize kTSP baseline performance
ktsp_results = get_age(raw_ktsp_results)
true_class = get_labels(U_data)
plot_confusion_matrix(true_class, ktsp_results)
```
# Note:
Parts 2 and 3 are subject to high variability because of the random sampling in the filter method: the features chosen in part 2 are not always reproducible. Part 3 is subject to intermittent runtime errors that do not occur on every run, because the unstable feature selection in part 2 sometimes picks features that trigger a runtime error and sometimes does not.
# Future plan:
To counter the variability in parts 2 and 3, the following plan will be implemented in the future to join the two parts together:
    for i in range(k):
        shuffle data
        train-test split
        calc features on train
        calc performance of kTSP on test set
        3x10 cross validate ensemble classifier on test
    for the future, save everything:
        the feature set, the kTSP performance, the ensemble votes,
        and the ensemble performance (R2, MAE, MED)
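The plan above can be sketched as a loop. The helper functions here are hypothetical stand-ins for the real feature-selection, kTSP, and ensemble steps, so that the control flow is runnable on its own:

```python
import random

# Hypothetical stand-ins (assumptions, not the actual pipeline code):
def select_features(train):            # kTSP filter on the training split
    return sorted(train)[:2]
def eval_ktsp(features, test):         # kTSP performance on the test split
    return len(features) / max(len(test), 1)
def cv_ensemble(features, test):       # 3x10 CV of the ensemble on test
    return {'R2': 0.0, 'MAE': 0.0, 'MED': 0.0}

def run_joint_pipeline(samples, k=5, train_size=33, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(k):
        data = list(samples)
        rng.shuffle(data)                                    # shuffle data
        train, test = data[:train_size], data[train_size:]   # train-test split
        feats = select_features(train)                       # calc features on train
        results.append({                                     # save everything
            'features': feats,
            'ktsp': eval_ktsp(feats, test),
            'ensemble': cv_ensemble(feats, test),
        })
    return results
```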
# `layers.loss`
```
%reload_ext autoreload
%autoreload 2
# %load ../../HPA-competition-solutions/bestfitting/src/layers/loss.py
#default_exp layers.loss
#export
import math
import torch
from torch import nn
import torch.nn.functional as F
from kgl_humanprotein.config.config import *
from kgl_humanprotein.layers.hard_example import *
from kgl_humanprotein.layers.lovasz_losses import *
#export
class FocalLoss(nn.Module):
def __init__(self, gamma=2):
super().__init__()
self.gamma = gamma
def forward(self, logit, target, epoch=0):
target = target.float()
max_val = (-logit).clamp(min=0)
loss = logit - logit * target + max_val + \
((-max_val).exp() + (-logit - max_val).exp()).log()
invprobs = F.logsigmoid(-logit * (target * 2.0 - 1.0))
loss = (invprobs * self.gamma).exp() * loss
if len(loss.size())==2:
loss = loss.sum(dim=1)
return loss.mean()
class HardLogLoss(nn.Module):
def __init__(self):
super(HardLogLoss, self).__init__()
self.bce_loss = nn.BCEWithLogitsLoss()
self.__classes_num = NUM_CLASSES
def forward(self, logits, labels,epoch=0):
labels = labels.float()
loss=0
for i in range(NUM_CLASSES):
logit_ac=logits[:,i]
label_ac=labels[:,i]
logit_ac, label_ac=get_hard_samples(logit_ac,label_ac)
loss+=self.bce_loss(logit_ac,label_ac)
loss = loss/NUM_CLASSES
return loss
# https://github.com/bermanmaxim/LovaszSoftmax/tree/master/pytorch
def lovasz_hinge(logits, labels, ignore=None, per_class=True):
"""
Binary Lovasz hinge loss
logits: [B, C] Variable, logits at each pixel (between -\infty and +\infty)
labels: [B, C] Tensor, binary ground truth masks (0 or 1)
per_class: compute the loss per class instead of over the flattened batch
ignore: void class id
"""
if per_class:
loss = 0
for i in range(NUM_CLASSES):
logit_ac = logits[:, i]
label_ac = labels[:, i]
loss += lovasz_hinge_flat(logit_ac, label_ac)
loss = loss / NUM_CLASSES
else:
logits = logits.view(-1)
labels = labels.view(-1)
loss = lovasz_hinge_flat(logits, labels)
return loss
# https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/69053
class SymmetricLovaszLoss(nn.Module):
def __init__(self):
super(SymmetricLovaszLoss, self).__init__()
self.__classes_num = NUM_CLASSES
def forward(self, logits, labels,epoch=0):
labels = labels.float()
loss=((lovasz_hinge(logits, labels)) + (lovasz_hinge(-logits, 1 - labels))) / 2
return loss
class FocalSymmetricLovaszHardLogLoss(nn.Module):
def __init__(self):
super(FocalSymmetricLovaszHardLogLoss, self).__init__()
self.focal_loss = FocalLoss()
self.slov_loss = SymmetricLovaszLoss()
self.log_loss = HardLogLoss()
def forward(self, logit, labels,epoch=0):
labels = labels.float()
focal_loss = self.focal_loss.forward(logit, labels, epoch)
slov_loss = self.slov_loss.forward(logit, labels, epoch)
log_loss = self.log_loss.forward(logit, labels, epoch)
loss = focal_loss*0.5 + slov_loss*0.5 +log_loss * 0.5
return loss
# https://github.com/ronghuaiyang/arcface-pytorch
class ArcFaceLoss(nn.modules.Module):
def __init__(self,s=30.0,m=0.5):
super(ArcFaceLoss, self).__init__()
self.classify_loss = nn.CrossEntropyLoss()
self.s = s
self.easy_margin = False
self.cos_m = math.cos(m)
self.sin_m = math.sin(m)
self.th = math.cos(math.pi - m)
self.mm = math.sin(math.pi - m) * m
def forward(self, logits, labels, epoch=0):
cosine = logits
sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
phi = cosine * self.cos_m - sine * self.sin_m
if self.easy_margin:
phi = torch.where(cosine > 0, phi, cosine)
else:
phi = torch.where(cosine > self.th, phi, cosine - self.mm)
one_hot = torch.zeros(cosine.size(), device='cuda')
one_hot.scatter_(1, labels.view(-1, 1).long(), 1)
# -------------torch.where(out_i = {x_i if condition_i else y_i) -------------
output = (one_hot * phi) + ((1.0 - one_hot) * cosine)
output *= self.s
loss1 = self.classify_loss(output, labels)
loss2 = self.classify_loss(cosine, labels)
gamma=1
loss=(loss1+gamma*loss2)/(1+gamma)
return loss
```
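As an aside, the `FocalLoss` above relies on the numerically stable form of binary cross-entropy with logits, `max(-x, 0) + x - x*t + log(e^{-max} + e^{-x-max})`, scaled by the modulating factor `(1 - p_t)^gamma`. A pure-Python scalar check of that identity (an illustration only, not part of the training code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def focal_loss_scalar(logit, target, gamma=2.0):
    # stable BCE with logits, same algebra as in FocalLoss.forward
    max_val = max(-logit, 0.0)
    bce = (logit - logit * target + max_val
           + math.log(math.exp(-max_val) + math.exp(-logit - max_val)))
    # log sigmoid(-x * (2t - 1)) = log(1 - p_t), so the factor is (1 - p_t)^gamma
    invprob = -math.log(1.0 + math.exp(logit * (2.0 * target - 1.0)))
    return math.exp(gamma * invprob) * bce

# agrees with the naive (unstable) focal-loss formula on a scalar
p = sigmoid(1.5)
naive = (1 - p) ** 2 * (-math.log(p))  # target = 1
print(abs(focal_loss_scalar(1.5, 1.0) - naive) < 1e-9)  # True
```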
## Load Library And Data
```
# importing the library
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# to know the ecoding type
import chardet
with open('E:\\Recommendation System\\book.csv', 'rb') as rawdata:
result = chardet.detect(rawdata.read(100000))
result
```
- The encoding standard used in the input file is ISO-8859-1
- Hence, to minimize the error while loading the input data, we are passing this encoding standard
```
# load the dataset 1
books_data = pd.read_csv('E:\\1_ExcelR_data\\0_assignmentsData\\10_Recommendation System\\book.csv', encoding='ISO-8859-1')
books_data
```
## Data Cleaning And EDA
```
# drop unnecessary column
books_data.drop(['Unnamed: 0'], axis = 1, inplace=True)
books_data.head()
books_data.sort_values(by=['User.ID'])
# data dimenssion
books_data.shape
# data description
books_data.describe().T
# dataframes types
books_data.dtypes
# informartion of the data
books_data.info()
```
- No null values
- two features are numeric
- one feature is categorical
```
books_data.describe()['Book.Rating']
```
- max rating = 10
- min rating = 1
- average rating = 7.5
```
# find the minimum and maximum ratings
print('Minimum rating is:', (books_data['Book.Rating'].min()))
print('Maximum rating is:', (books_data['Book.Rating'].max()))
```
- Most books receive a rating of 8
- Very few books receive the minimum rating of 1
```
# Unique Users and ratings
print("Total data \n")
print("Total no of ratings :",books_data.shape[0])
print("Total No of Users :", len(np.unique(books_data['User.ID'])))
print("Total No of books :", len(np.unique(books_data['Book.Title'])))
# find out the average rating for each and every books
Average_ratings = pd.DataFrame(books_data.groupby('Book.Title')['Book.Rating'].mean())
Average_ratings.head(3)
```
- Average ratings received per book, for example:
  1) 8.0 - Jason, Madison &
  2) 6.0 - Other Stories;Merril;1985;McClelland &
  3) 4.0 - Repairing PC Drives &
## Visualize The Data
```
# Check the distribution of the rating
plt.figure(figsize=(10, 5))
sns.countplot("Book.Rating", data = books_data)
plt.title('Rating distrubutions', fontsize = 20)
plt.xlabel("Book ratings", fontsize = 15)
plt.ylabel("Total counts", fontsize = 15)
plt.show()
```
## Building The Recommender
```
# make pivot table
book_users = books_data.pivot_table( index='User.ID', columns = books_data['Book.Title'], values='Book.Rating')
book_users
# find correlation between "10 Commandments Of Dating" and other books
book_read = book_users["10 Commandments Of Dating"]
similarity_with_other_books = book_users.corrwith(book_read)
similarity_with_other_books = similarity_with_other_books.sort_values(ascending=False)
similarity_with_other_books.head(10)
# imputer NaN with 0
book_users.fillna(0, inplace = True)
book_users
# collecting unique user id
book_users.index = books_data['User.ID'].unique()
book_users.head()
```
## Computation with Cosine Distance
```
# calculating Cosine Similarities between Users
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine, correlation
# Cosine similarities values (using distance matrics)
user_sim = 1 - pairwise_distances(book_users.values, metric = 'cosine')
user_sim
# store the result (Cosine Similarities values) in a dataframe
user_similarity_df = pd.DataFrame(user_sim)
user_similarity_df
# set the index and columns to userId
user_similarity_df.index = books_data['User.ID'].unique()
user_similarity_df.columns = books_data['User.ID'].unique()
books_data
user_similarity_df.iloc[0:7, 0:7]
np.fill_diagonal(user_sim, 0)
user_similarity_df.iloc[0:7, 0:7]
```
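The `1 - pairwise_distances(..., metric='cosine')` step above is simply cosine similarity between user rating vectors; a pure-Python sketch with made-up rating vectors:

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# hypothetical rating vectors for three users over four books
u1 = [8, 0, 7, 0]
u2 = [8, 0, 7, 0]
u3 = [0, 5, 0, 6]
print(round(cosine_similarity(u1, u2), 6))  # 1.0 (identical taste)
print(round(cosine_similarity(u1, u3), 6))  # 0.0 (no books in common)
```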
## Most Similarity
```
# Most similar readers
user_similarity_df.idxmax(axis = 1)[0:20]
# find out book read by two users 276780 and 276726
books_data[(books_data['User.ID'] == 276780) | (books_data['User.ID'] == 276726)]
# user 276780 books
user_276780 = books_data[books_data['User.ID'] == 276780]
user_276780
# user 276726 books
user_276726 = books_data[books_data['User.ID'] == 276726]
user_276726
```
## Recommendations
```
# merging the two users' book data into a single frame
pd.merge(user_276780, user_276726, on = 'Book.Title', how = 'outer')
```
- User __276780__ read two books, __'Wild Animus'__ and __'Airframe'__, each rated __7.0__
- User __276726__ read only one book, __'Classical Mythology'__, rated __5.0__
- So based on the ratings given by these readers, _the book 'Classical Mythology' is recommended to User 276780, and the books 'Wild Animus' and 'Airframe' are recommended to User 276726_
# Example Feature-Based Cluster Queries
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import h5py
import math
import numpy as np
chrom = 'chr7'
bin_size = 100000
cluster_tsv_file = '../data/hg19/gm12878_triplets_chr7_100kb_pooled.tsv.gz'
cluster_h5_file = '../data/hg19/gm12878_triplets_chr7_100kb_pooled.h5'
chrom_sizes_file = '../data/hg19/hg19.chrom.sizes'
tads_arrowhead_bed_file = '../data/hg19/Rao_RepH_GM12878_Arrowhead.sorted.bed'
tads_arrowhead_sqlite_file = '../data/hg19/Rao_RepH_GM12878_Arrowhead.sorted.sqlite'
chromhmm_bed_file = '../data/hg19/wgEncodeBroadHmmGm12878HMM.bed.gz'
chromhmm_sqlite_file = '../data/hg19/wgEncodeBroadHmmGm12878HMM.sqlite'
subcompartments_bed_file = '../data/hg19/GSE63525_GM12878_subcompartments.bed.gz'
subcompartments_sqlite_file = '../data/hg19/GSE63525_GM12878_subcompartments.sqlite'
loop_extents_bed_file = '../data/hg19/GSE63525_GM12878_replicate_HiCCUPS_loop_extent_list.bed.gz'
loop_extents_sqlite_file = '../data/hg19/GSE63525_GM12878_replicate_HiCCUPS_loop_extent_list.sqlite'
from hgmc.utils import get_chrom_sizes
chrom_size = get_chrom_sizes(chrom_sizes_file).get(chrom)
num_bins = math.ceil(chrom_size / bin_size)
```
# Load Features
```
from hgmc.bed import sql_features
from utils import natural_sort
chromhmm_features = natural_sort(sql_features(chromhmm_sqlite_file))
chromhmm_features
from hgmc.bed import sql_coverage
tad_coverage_100kb = sql_coverage(
tads_arrowhead_sqlite_file,
chrom=chrom,
bin_size=bin_size,
# At least 80% of the TAD needs to be in the bin to count
count_at_feat_cov=0.8,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{tad_coverage_100kb.astype(bool).sum()} bins contain TADs with at most {tad_coverage_100kb.max()} TADs per bin')
active_promoter_coverage_100kb = sql_coverage(
chromhmm_sqlite_file,
chrom=chrom,
bin_size=bin_size,
features='1_Active_Promoter',
# The entire promoter needs to be in the bin to count
count_at_feat_cov=1.0,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{active_promoter_coverage_100kb.astype(bool).sum()} bins contain active promoters with at most {active_promoter_coverage_100kb.max()} active promoters per bin')
strong_enhancer_coverage_100kb = sql_coverage(
chromhmm_sqlite_file,
chrom=chrom,
bin_size=bin_size,
features=['4_Strong_Enhancer', '5_Strong_Enhancer'],
# The entire enhancer needs to be in the bin to count
count_at_feat_cov=1.0,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{strong_enhancer_coverage_100kb.astype(bool).sum()} bins contain enhancers with at most {strong_enhancer_coverage_100kb.max()} enhancers per bin')
a_compartment_coverage_100kb = sql_coverage(
subcompartments_sqlite_file,
chrom=chrom,
bin_size=bin_size,
features=['A1', 'A2'],
# At least 80% of the bin needs to be an A compartment to count
count_at_bin_cov=0.8,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{a_compartment_coverage_100kb.astype(bool).sum()} bins are A compartment')
b_compartment_coverage_100kb = sql_coverage(
subcompartments_sqlite_file,
chrom=chrom,
bin_size=bin_size,
features=['B1', 'B2', 'B3', 'B4'],
# At least 80% of the bin needs to be a B compartment to count
count_at_bin_cov=0.8,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{b_compartment_coverage_100kb.astype(bool).sum()} bins are B compartment')
loop_extent_coverage_100kb = sql_coverage(
loop_extents_sqlite_file,
chrom=chrom,
bin_size=bin_size,
# Only count if the entire loop extent is in the bin
count_at_feat_cov=1.0,
rel_count_at_bin_cov=True,
timeit=True
)
print(f'{loop_extent_coverage_100kb.astype(bool).sum()} bins contain loops with at most {loop_extent_coverage_100kb.max()} loops per bin')
all_features = [
('TADs', tad_coverage_100kb),
('Active Promoters', active_promoter_coverage_100kb),
('Strong Enhancers', strong_enhancer_coverage_100kb),
('A Compartments', a_compartment_coverage_100kb),
('B Compartments', b_compartment_coverage_100kb),
('Loops', loop_extent_coverage_100kb),
]
```
# Queries
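Before the concrete queries, here is a minimal sketch of the query semantics, assuming each cluster is a tuple of bin indices and a query is a list of `(boolean_mask, min_count)` pairs (the actual `query_by_features` implementation in `hgmc` may differ):

```python
def query_by_features_sketch(clusters, query):
    """Return ids of clusters where, for every (mask, n) pair in the query,
    at least n member bins fall inside the mask. Simplified stand-in,
    not the hgmc API."""
    hits = []
    for cid, bins in enumerate(clusters):
        if all(sum(mask[b] for b in bins) >= n for mask, n in query):
            hits.append(cid)
    return hits

mask_a = [True, True, False, True]   # bins flagged as A compartment
clusters = [(0, 1, 3), (0, 2, 3)]    # two triplets of bin indices
print(query_by_features_sketch(clusters, [(mask_a, 3)]))  # [0]
```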
## Find all triplets that span A-only compartments
```
from hgmc.clusters import clusters_to_bins, query_by_features, verify_queried_clusters
from hgmc.plots import plot_cluster_feature_distribution
with h5py.File(cluster_h5_file, 'r') as h5:
#####
query = [(a_compartment_coverage_100kb.astype(bool), 3)]
#####
a_cluster_ids = query_by_features(h5, query, verbose=True, verify=True, timeit=True)
print(f'Found {a_cluster_ids.size} clusters')
with h5py.File(cluster_h5_file, 'r') as h5:
a_cluster_bins = clusters_to_bins(h5, a_cluster_ids)
unique_a_cluster_bins = np.unique(a_cluster_bins[:, 1])
print(f'Found {a_cluster_bins.shape[0]} bins ({unique_a_cluster_bins.size} bins)')
with h5py.File(cluster_h5_file, 'r') as h5:
plot_cluster_feature_distribution(h5, a_cluster_bins, all_features, figsize=(12, 6))
```
## Find all triplets that span A&B compartments
```
with h5py.File(cluster_h5_file, 'r') as h5:
#####
query = [(a_compartment_coverage_100kb.astype(bool), 1), (b_compartment_coverage_100kb.astype(bool), 1)]
#####
ab_cluster_ids = query_by_features(h5, query, verbose=True, verify=True)
print(f'Found {ab_cluster_ids.size} clusters')
with h5py.File(cluster_h5_file, 'r') as h5:
ab_cluster_bins = clusters_to_bins(h5, ab_cluster_ids)
unique_ab_cluster_bins = np.unique(ab_cluster_bins[:, 1])
print(f'Found {ab_cluster_bins.shape[0]} bins ({unique_ab_cluster_bins.size} bins)')
with h5py.File(cluster_h5_file, 'r') as h5:
plot_cluster_feature_distribution(h5, ab_cluster_bins, all_features, figsize=(12, 6))
```
## Find all triplets that anchor in 3 peaks/loops
```
print(f'There are {loop_extent_coverage_100kb.astype(bool).sum()} bins with peaks')
with h5py.File(cluster_h5_file, 'r') as h5:
#####
query = [(loop_extent_coverage_100kb.astype(bool), 3)]
#####
loop_cluster_ids = query_by_features(h5, query, verbose=True, verify=True)
print(f'Found {loop_cluster_ids.size} clusters')
with h5py.File(cluster_h5_file, 'r') as h5:
loop_cluster_bins = clusters_to_bins(h5, loop_cluster_ids)
unique_loop_cluster_bins = np.unique(loop_cluster_bins[:, 1])
print(f'Found {loop_cluster_bins.shape[0]} bins ({unique_loop_cluster_bins.size} bins)')
with h5py.File(cluster_h5_file, 'r') as h5:
plot_cluster_feature_distribution(h5, loop_cluster_bins, all_features, figsize=(12,6))
```
## Find all triplets that anchor in 3 TADs
```
print(f'There are {tad_coverage_100kb.astype(bool).sum()} bins with TADs')
with h5py.File(cluster_h5_file, 'r') as h5:
#####
query = [(tad_coverage_100kb.astype(bool), 3)]
#####
tad_cluster_ids = query_by_features(h5, query, verbose=True, verify=True)
print(f'Found {tad_cluster_ids.size} clusters')
with h5py.File(cluster_h5_file, 'r') as h5:
tad_cluster_bins = clusters_to_bins(h5, tad_cluster_ids)
unique_tad_cluster_bins = np.unique(tad_cluster_bins[:, 1])
print(f'Found {tad_cluster_bins.shape[0]} bins ({unique_tad_cluster_bins.size} bins)')
with h5py.File(cluster_h5_file, 'r') as h5:
plot_cluster_feature_distribution(h5, tad_cluster_bins, all_features, figsize=(12,6))
```
## Find all triplets that anchor in at least 1 promoter and at least 2 active enhancers
```
with h5py.File(cluster_h5_file, 'r') as h5:
query = [
(active_promoter_coverage_100kb.astype(bool), 1),
(strong_enhancer_coverage_100kb.astype(bool), 2)
]
promoter_enhancer_cluster_ids = query_by_features(h5, query, verbose=True, verify=True)
print(f'Found {promoter_enhancer_cluster_ids.size} clusters')
with h5py.File(cluster_h5_file, 'r') as h5:
promoter_enhancer_cluster_bins = clusters_to_bins(h5, promoter_enhancer_cluster_ids)
unique_promoter_enhancer_cluster_bins = np.unique(promoter_enhancer_cluster_bins[:, 1])
print(f'Found {promoter_enhancer_cluster_bins.shape[0]} bins ({unique_promoter_enhancer_cluster_bins.size} bins)')
with h5py.File(cluster_h5_file, 'r') as h5:
plot_cluster_feature_distribution(h5, promoter_enhancer_cluster_bins, all_features, figsize=(12,6))
```
## Eng+Wales well-mixed example model
This is the inference notebook with an increased inference window. There are various model variants, encoded by `expt_params_local` and `model_local`, which are shared by the notebooks in a given directory.
Outputs of this notebook:
(same as `inf` notebook with added `tWin` label in filename)
NOTE carefully: the `Im` compartment holds cumulative deaths; this quantity is called `D` elsewhere.
### Start notebook
(the following line is for efficient parallel processing)
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
import pandas as pd
import matplotlib.image as mpimg
import pickle
import os
import pprint
import scipy.stats
# comment these before commit
#print(pyross.__file__)
#print(os.getcwd())
from ew_fns import *
import expt_params_local
import model_local
```
### switches etc
```
verboseMod=False ## print ancillary info about the model? (would usually be False, for brevity)
## Calculate things, or load from files ?
doInf = False ## do inference, or load it ?
doHes = True ## compute the Hessian? (may take a few minutes)
## time unit is one week
daysPerWeek = 7.0
## these are params that might be varied in different expts
exptParams = expt_params_local.getLocalParams()
## over-ride params for inference window
exptParams['timeLast'] = 11
exptParams['forecastTime'] = 11-exptParams['timeLast']
exptParams['pikFileRoot'] += '-tWin11'
pprint.pprint(exptParams)
## this is used for filename handling throughout
pikFileRoot = exptParams['pikFileRoot']
```
### convenient settings
```
np.set_printoptions(precision=3)
pltAuto = True
plt.rcParams.update({'figure.autolayout': pltAuto})
plt.rcParams.update({'font.size': 14})
```
## LOAD MODEL
```
loadModel = model_local.loadModel(exptParams,daysPerWeek,verboseMod)
## should use a dictionary but...
[ numCohorts, fi, N, Ni, model_spec, estimator, contactBasis, interventionFn,
modParams, priorsAll, initPriorsLinMode, obsDeath, fltrDeath,
simTime, deathCumulativeDat ] = loadModel
```
### Inspect most likely trajectory for model with prior mean params
```
x0_lin = estimator.get_mean_inits(initPriorsLinMode, obsDeath[0], fltrDeath)
guessTraj = estimator.integrate( x0_lin, exptParams['timeZero'], simTime, simTime+1)
## plots
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
    indClass = model_spec['classes'].index(lab)
    totClass = np.sum(guessTraj[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
    plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.show() ; plt.close()
indClass = model_spec['classes'].index('Im')
plt.yscale('log')
for coh in range(numCohorts):
    plt.plot( N*guessTraj[:,coh+indClass*numCohorts],label='m{c:d}'.format(c=coh) )
plt.xlabel('time in weeks')
plt.ylabel('cumul deaths by age cohort')
plt.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
```
## INFERENCE
parameter count
* 32 for age-dependent Ai and Af (or beta and Af)
* 2 (step-like) or 3 (NPI-with-easing) for lockdown time and width (+easing param)
* 1 for projection of initial condition along mode
* 5 for initial condition in oldest cohort
* 5 for the gammas
* 1 for beta in late stage
total: 46 (step-like) or 47 (with-easing)
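As a quick arithmetic check of the totals above (the dictionary keys below are mnemonic labels only, not names used by the code):

```python
# Parameter counts listed above; "lockdown_step" grows to 3 with easing.
counts = {"age_dependent_Ai_Af": 32, "lockdown_step": 2, "init_mode": 1,
          "init_oldest_cohort": 5, "gammas": 5, "beta_late": 1}
step_like = sum(counts.values())     # step-like NPI
with_easing = step_like + 1          # easing adds one lockdown parameter
print(step_like, with_easing)  # 46 47
```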
The following computation with CMA-ES takes some minutes, depending on compute power; it should use multiple CPUs efficiently, if available. The result will vary (slightly) according to the random seed, which can be controlled by passing `cma_random_seed` to `latent_infer`.
```
def runInf() :
    infResult = estimator.latent_infer(obsDeath, fltrDeath, simTime,
                                       priorsAll,
                                       initPriorsLinMode,
                                       generator=contactBasis,
                                       intervention_fun=interventionFn,
                                       tangent=False,
                                       verbose=True,
                                       enable_global=True,
                                       enable_local =True,
                                       **exptParams['infOptions'],
                                      )
    return infResult
if doInf:
    ## do the computation
    elapsedInf = time.time()
    infResult = runInf()
    elapsedInf = time.time() - elapsedInf
    print('** elapsed time',elapsedInf/60.0,'mins')
    # save the answer
    opFile = pikFileRoot + "-inf.pik"
    print('opf',opFile)
    with open(opFile, 'wb') as f:
        pickle.dump([infResult,elapsedInf],f)
else:
    ## load a saved computation
    print(' Load data')
# here we load the data
# (this may be the file that we just saved, it is deliberately outside the if: else:)
ipFile = pikFileRoot + "-inf.pik"
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
    [infResult,elapsedInf] = pickle.load(f)
```
#### unpack results
```
epiParamsMAP = infResult['params_dict']
conParamsMAP = infResult['control_params_dict']
x0_MAP = infResult['x0']
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
logPinf = -estimator.minus_logp_red(epiParamsMAP, x0_MAP, obsDeath, fltrDeath, simTime,
CM_MAP, tangent=False)
print('** measuredLikelihood',logPinf)
print('** logPosterior ',infResult['log_posterior'])
print('** logLikelihood',infResult['log_likelihood'])
```
#### MAP dominant trajectory
```
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
trajMAP = estimator.integrate( x0_MAP, exptParams['timeZero'], simTime, simTime+1)
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
    indClass = model_spec['classes'].index(lab)
    totClass = np.sum(trajMAP[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
    plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
fig,axs = plt.subplots(1,2,figsize=(10,4.5))
cohRanges = [ [x,x+4] for x in range(0,75,5) ]
#print(cohRanges)
cohLabs = ["{l:d}-{u:d}".format(l=low,u=up) for [low,up] in cohRanges ]
cohLabs.append("75+")
ax = axs[0]
ax.set_title('MAP (average dynamics)')
mSize = 3
minY = 0.12
maxY = 1.0
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
ax.set_ylabel('cumulative M (by cohort)')
ax.set_xlabel('time/weeks')
for coh in reversed(list(range(numCohorts))) :
    ax.plot( N*trajMAP[:,coh+indClass*numCohorts],'o-',label=cohLabs[coh],ms=mSize )
    maxY = np.maximum( maxY, np.max(N*trajMAP[:,coh+indClass*numCohorts]))
#ax.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
maxY *= 1.6
ax.set_ylim(bottom=minY,top=maxY)
#plt.show() ; plt.close()
ax = axs[1]
ax.set_title('data')
ax.set_xlabel('time/weeks')
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
for coh in reversed(list(range(numCohorts))) :
    ax.plot( N*obsDeath[:,coh],'o-',label=cohLabs[coh],ms=mSize )
## keep the same as other panel
ax.set_ylim(bottom=minY,top=maxY)
ax.legend(fontsize=10,bbox_to_anchor=(1, 1.0))
#plt.show() ; plt.close()
#plt.savefig('ageMAPandData.png')
plt.show(fig)
```
#### sanity check : plot the prior and inf value for one or two params
```
(likFun,priFun,dim) = pyross.evidence.latent_get_parameters(estimator,
obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
)
def showInfPrior(xLab) :
    fig = plt.figure(figsize=(4,4))
    dimFlat = np.size(infResult['flat_params'])
    ## magic to work out the index of this param in flat_params
    jj = infResult['param_keys'].index(xLab)
    xInd = infResult['param_guess_range'][jj]
    ## get the range
    xVals = np.linspace( *priorsAll[xLab]['bounds'], 100 )
    #print(infResult['flat_params'][xInd])
    pVals = []
    checkVals = []
    for xx in xVals :
        flatP = np.zeros( dimFlat )
        flatP[xInd] = xx
        pdfAll = np.exp( priFun.logpdf(flatP) )
        pVals.append( pdfAll[xInd] )
        #checkVals.append( scipy.stats.norm.pdf(xx,loc=0.2,scale=0.1) )
    plt.plot(xVals,pVals,'-',label='prior')
    infVal = infResult['flat_params'][xInd]
    infPdf = np.exp( priFun.logpdf(infResult['flat_params']) )[xInd]
    plt.plot([infVal],[infPdf],'ro',label='inf')
    plt.xlabel(xLab)
    upperLim = 1.05*np.max(pVals)
    plt.ylim(0,upperLim)
    #plt.plot(xVals,checkVals)
    plt.legend()
    plt.show(fig) ; plt.close()
#print('**params\n',infResult['flat_params'])
#print('**logPrior\n',priFun.logpdf(infResult['flat_params']))
showInfPrior('gammaE')
```
## Hessian matrix of log-posterior
(this can take a few minutes, it does not make use of multiple cores)
```
if doHes:
    ## this eps amounts to a perturbation of approx 1% on each param
    ## (1/4) power of machine epsilon is standard for second deriv
    xx = infResult['flat_params']
    eps = 100 * xx*( np.spacing(xx)/xx )**(0.25)
    #print('**params\n',infResult['flat_params'])
    #print('** rel eps\n',eps/infResult['flat_params'])
    CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
                                                        **conParamsMAP)
    estimator.set_params(epiParamsMAP)
    estimator.set_contact_matrix(CM_MAP)
    start = time.time()
    hessian = estimator.latent_hessian(obs=obsDeath, fltr=fltrDeath,
                                       Tf=simTime, generator=contactBasis,
                                       infer_result=infResult,
                                       intervention_fun=interventionFn,
                                       eps=eps, tangent=False, fd_method="central",
                                       inter_steps=0)
    end = time.time()
    print('time',(end-start)/60,'mins')
    opFile = pikFileRoot + "-hess.npy"
    print('opf',opFile)
    with open(opFile, 'wb') as f:
        np.save(f,hessian)
else :
    print('Load hessian')
# reload in all cases (even if we just saved it)
ipFile = pikFileRoot + "-hess.npy"
try:
    print('ipf',ipFile)
    with open(ipFile, 'rb') as f:
        hessian = np.load(f)
except (OSError, IOError) :
    print('... error loading hessian')
    hessian = None
#print(hessian)
print("** param vals")
print(infResult['flat_params'],'\n')
if hessian is not None :
    print("** naive uncertainty v1 : reciprocal sqrt diagonal elements (x2)")
    print( 2/np.sqrt(np.diagonal(hessian)) ,'\n')
    print("** naive uncertainty v2 : sqrt diagonal elements of inverse (x2)")
    print( 2*np.sqrt(np.diagonal(np.linalg.inv(hessian))) ,'\n')
```
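The difference between the two naive uncertainty estimates can be seen on a toy 2×2 Hessian (illustrative numbers only, not taken from the inference above): they coincide for a diagonal Hessian, but once the parameters are coupled the marginal uncertainty (v2) exceeds the diagonal-only estimate (v1).

```python
import numpy as np

# Toy 2x2 Hessian with off-diagonal coupling between the two parameters.
H = np.array([[4.0, 1.5],
              [1.5, 1.0]])
v1 = 2 / np.sqrt(np.diagonal(H))                 # ignores parameter correlations
v2 = 2 * np.sqrt(np.diagonal(np.linalg.inv(H)))  # marginal uncertainty
print(v1)  # [1. 2.]
print(v2)  # element-wise larger than v1, because of the coupling
```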
```
#Use this command to run it on floydhub: floyd run --gpu --env tensorflow-1.4 --data emilwallner/datasets/imagetocode/2:data --data emilwallner/datasets/html_models/1:weights --mode jupyter
from os import listdir
from numpy import array
from keras.preprocessing.text import Tokenizer, one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.utils import to_categorical
from keras.layers import Embedding, TimeDistributed, RepeatVector, LSTM, concatenate , Input, Reshape, Dense, Flatten
from keras.preprocessing.image import array_to_img, img_to_array, load_img
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
import numpy as np
# Load the images and preprocess them for inception-resnet
images = []
all_filenames = listdir('resources/images/')
all_filenames.sort()
for filename in all_filenames:
    images.append(img_to_array(load_img('resources/images/'+filename, target_size=(299, 299))))
images = np.array(images, dtype=float)
images = preprocess_input(images)
# Run the images through inception-resnet and extract the features without the classification layer
IR2 = InceptionResNetV2(weights=None, include_top=False, pooling='avg')
IR2.load_weights('/data/models/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5')
features = IR2.predict(images)
# We will cap each input sequence to 100 tokens
max_caption_len = 100
# Initialize the function that will create our vocabulary
tokenizer = Tokenizer(filters='', split=" ", lower=False)
# Read a document and return a string
def load_doc(filename):
    file = open(filename, 'r')
    text = file.read()
    file.close()
    return text
# Load all the HTML files
X = []
all_filenames = listdir('resources/html/')
all_filenames.sort()
for filename in all_filenames:
    X.append(load_doc('resources/html/'+filename))
# Create the vocabulary from the html files
tokenizer.fit_on_texts(X)
# Add +1 to leave space for empty words
vocab_size = len(tokenizer.word_index) + 1
# Translate each word in text file to the matching vocabulary index
sequences = tokenizer.texts_to_sequences(X)
# The longest HTML file
max_length = max(len(s) for s in sequences)
# Initialize our final input to the model
X, y, image_data = list(), list(), list()
for img_no, seq in enumerate(sequences):
    for i in range(1, len(seq)):
        # Add the entire sequence to the input and only keep the next word for the output
        in_seq, out_seq = seq[:i], seq[i]
        # If the sentence is shorter than max_length, fill it up with empty words
        in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
        # Map the output to one-hot encoding
        out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
        # Add the image features corresponding to the HTML file
        image_data.append(features[img_no])
        # Cut the input sentence to 100 tokens, and add it to the input data
        X.append(in_seq[-100:])
        y.append(out_seq)
X, y, image_data = np.array(X), np.array(y), np.array(image_data)
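# Hedged mini-example (not part of the original pipeline): the expansion above
# turns a token sequence such as [4, 7, 9] into the training pairs
# ([4] -> 7) and ([4, 7] -> 9), one pair per next-word prediction.
_demo_seq = [4, 7, 9]
_demo_pairs = [(_demo_seq[:i], _demo_seq[i]) for i in range(1, len(_demo_seq))]
assert _demo_pairs == [([4], 7), ([4, 7], 9)]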
# Create the encoder
image_features = Input(shape=(1536,))
image_flat = Dense(128, activation='relu')(image_features)
ir2_out = RepeatVector(max_caption_len)(image_flat)
# Create the decoder
language_input = Input(shape=(max_caption_len,))
language_model = Embedding(vocab_size, 200, input_length=max_caption_len)(language_input)
language_model = LSTM(256, return_sequences=True)(language_model)
language_model = LSTM(256, return_sequences=True)(language_model)
language_model = TimeDistributed(Dense(128, activation='relu'))(language_model)
# Create the decoder
decoder = concatenate([ir2_out, language_model])
decoder = LSTM(512, return_sequences=True)(decoder)
decoder = LSTM(512, return_sequences=False)(decoder)
decoder_output = Dense(vocab_size, activation='softmax')(decoder)
# Compile the model
model = Model(inputs=[image_features, language_input], outputs=decoder_output)
#model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.load_weights("/weights/org-weights-epoch-0900---loss-0.0000.hdf5")
# Train the neural network
#model.fit([image_data, X], y, batch_size=64, shuffle=False, epochs=2)
# map an integer to a word
def word_for_id(integer, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None
# generate a description for an image
def generate_desc(model, tokenizer, photo, max_length):
    # seed the generation process
    in_text = 'START'
    # iterate over the whole length of the sequence
    for i in range(900):
        # integer encode input sequence
        sequence = tokenizer.texts_to_sequences([in_text])[0][-100:]
        # pad input
        sequence = pad_sequences([sequence], maxlen=max_length)
        # predict next word
        yhat = model.predict([photo,sequence], verbose=0)
        # convert probability to integer
        yhat = np.argmax(yhat)
        # map integer to word
        word = word_for_id(yhat, tokenizer)
        # stop if we cannot map the word
        if word is None:
            break
        # append as input for generating the next word
        in_text += ' ' + word
        # Print the prediction
        print(' ' + word, end='')
        # stop if we predict the end of the sequence
        if word == 'END':
            break
    return
# Load an image, preprocess it for IR2, extract features and generate the HTML
test_image = img_to_array(load_img('resources/images/86.jpg', target_size=(299, 299)))
test_image = np.array(test_image, dtype=float)
test_image = preprocess_input(test_image)
test_features = IR2.predict(np.array([test_image]))
generate_desc(model, tokenizer, np.array(test_features), 100)
```
# Import required libraries
```
# Import numpy, pandas for data manipulation
import numpy as np
import pandas as pd
# Import matplotlib, seaborn for visualization
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# Import the data
weather_data = pd.read_csv('weather.csv')
weather_data.head()
rain_df = weather_data[['Date','Rainfall']]
rain_df.head()
rain_df.shape
rain_df.info()
```
**Using 50 values**
```
rain_df = rain_df.loc[:49]
rain_df.head()
rain_df.shape
# Convert the time column into datetime
rain_df['Date'] = pd.to_datetime(rain_df['Date'])
rain_df['Date'].head()
rain_df.info()
# fill missing values with the mean rainfall
rain_df = rain_df.fillna(rain_df['Rainfall'].mean())
rain_df.head()
```
### Dataset Explanation
```
rain_df.describe()
# Output the maximum and minimum rain date
print(rain_df.loc[rain_df["Rainfall"] == rain_df["Rainfall"].max()])
print(rain_df.loc[rain_df["Rainfall"] == rain_df["Rainfall"].min()])
# Set the Date column as the index
rain_df.set_index("Date", inplace=True)
```
### Data Visualization
```
# Plot the daily rainfall change
plt.figure(figsize=(16,10), dpi=100)
plt.plot(rain_df.index, rain_df.Rainfall, color='tab:red')
plt.gca().set(title="Daily Rain", xlabel='Date', ylabel="rain value")
plt.show()
# Apply the Moving Average function by a subset of size 10 days.
rain_df_mean = rain_df.Rainfall.rolling(window=10).mean()
rain_df_mean.plot(figsize=(16,10))
plt.show()
from statsmodels.tsa.seasonal import seasonal_decompose
# Additive Decomposition
result_add = seasonal_decompose(rain_df.Rainfall, model='additive', extrapolate_trend=0)
# Plot
plt.rcParams.update({'figure.figsize': (10,10)})
result_add.plot().suptitle('Additive Decomposition', fontsize=22)
plt.show()
```
### Baseline Model
```
# Shift the current rain to the next day.
predicted_df = rain_df["Rainfall"].to_frame().shift(1).rename(columns = {"Rainfall": "rain_pred" })
actual_df = rain_df["Rainfall"].to_frame().rename(columns = {"Rainfall": "rain_actual" })
# Concatenate the actual and predicted rain
one_step_df = pd.concat([actual_df,predicted_df],axis=1)
# Select from the second row, because there is no prediction for today due to shifting.
one_step_df = one_step_df[1:]
one_step_df.head(10)
```
> Here we have two columns: the **actual rain** column and the **predicted rain** column, which we will use in the next model.
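The shift step used to build the baseline can be sketched on a toy series (made-up values):

```python
import pandas as pd

toy = pd.Series([3.0, 1.0, 4.0, 1.0], name="Rainfall")
baseline = toy.shift(1)   # today's "prediction" is simply yesterday's value
print(baseline.tolist())  # [nan, 3.0, 1.0, 4.0]
```

The first entry is NaN because there is no "yesterday" for the first day, which is why the notebook drops the first row after shifting.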
We can validate how well our model performs by looking at the Root Mean Squared Error (RMSE) between the predicted and actual rain.
```
from sklearn.metrics import mean_squared_error as MSE
from math import sqrt
# Calculate the RMSE
rain_pred_err = MSE(one_step_df.rain_actual, one_step_df.rain_pred, squared=False)
print("The RMSE is",rain_pred_err)
```
> Our RMSE value of about 4.002 is reasonably good for a baseline model.
## Using SARIMA model
### Parameter Selection
#### Grid Search
We are going to apply one of the most commonly used methods for time-series forecasting, known as SARIMA, which stands for Seasonal Autoregressive Integrated Moving Average. SARIMA models are denoted SARIMA(p,d,q)(P,D,Q,s): the (p,d,q) terms capture the trend and noise in the data, while (P,D,Q,s) accounts for seasonality.
We will use a “grid search” to iteratively explore different combinations of parameters. For each combination of parameters, we fit a new seasonal SARIMA model with the SARIMAX() function from the statsmodels module and assess its overall quality.
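With p, d and q each ranging over {0, 1}, the grid is small enough to enumerate exhaustively: 8 non-seasonal triplets times 8 seasonal ones, so 64 candidate models are fitted below.

```python
import itertools

p = d = q = range(0, 2)
pdq = list(itertools.product(p, d, q))               # 2*2*2 = 8 triplets
seasonal_pdq = [(a, b, c, 12) for (a, b, c) in pdq]  # the same 8, with s = 12
print(len(pdq), len(seasonal_pdq), len(pdq) * len(seasonal_pdq))  # 8 8 64
```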
```
import itertools
import statsmodels.api as sm  # needed for the SARIMAX calls below
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(one_step_df.rain_actual,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)
            results = mod.fit()
            print('SARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
        except:
            continue
```
### Fitting the Model
```
import warnings
warnings.filterwarnings("ignore") # specify to ignore warning messages
# Import the statsmodels library for using SARIMAX model
import statsmodels.api as sm
# Fit the SARIMAX model using optimal parameters
mod = sm.tsa.statespace.SARIMAX(one_step_df.rain_actual,
order=(1,1,1),
seasonal_order=(1,1,1,12),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
results.summary()
```
**Predictions**
```
pred = results.predict(start=0,end=49)[1:]
pred
pred = results.get_prediction(start=0,end = 49, dynamic=False)
pred_ci = pred.conf_int()
pred_ci.head()
print(pred)
ax = one_step_df.rain_actual.plot(label='observed',figsize=(16,10))
ax.set_xlabel('Date')
ax.set_ylabel('value')
plt.ylim([0,2.0])
plt.legend()
plt.show()
```
### Forecast Diagnostic
It is also useful to quantify the accuracy of our forecasts. We will use the MSE (Mean Squared Error), in which for each predicted value, we compute its distance to the true value and square the result
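As a toy numeric example of this definition (made-up values):

```python
import numpy as np

y_true = np.array([2.0, 0.0, 1.0])
y_pred = np.array([1.5, 0.5, 1.0])
mse = np.mean((y_true - y_pred) ** 2)  # ((-0.5)**2 + 0.5**2 + 0**2) / 3
print(mse)  # 0.1666...
```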
```
y_forecasted = pred.predicted_mean[:49]
y_truth = one_step_df.rain_actual
print(y_forecasted.shape)
print(y_truth.shape)
# Compute the mean square error
mse = MSE(y_truth, y_forecasted, squared=True)
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
```
Our forecast model predicts the rain with a mean squared error of only 25.85.
In the weather-forecasting field, a prediction error of this size seems promising and sufficient, as there are many other factors that contribute to changes in rainfall, including but not limited to wind speed, air pressure, etc.
### Validating the Dynamic Forecast
In this case, we only use information from the time series up to a certain point, and after that, forecasts are generated using values from previous forecasted time points.
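The distinction can be sketched with a toy AR(1)-style recursion (hypothetical coefficient 0.5, made-up data): a one-step forecast always multiplies the real previous observation, while a dynamic forecast multiplies its own previous forecast.

```python
phi = 0.5
observed = [4.0, 3.0, 1.0, 2.0]
one_step = [phi * y for y in observed[:-1]]  # uses the true previous value each time
dynamic = [phi * observed[0]]
for _ in range(len(observed) - 2):
    dynamic.append(phi * dynamic[-1])        # feeds on its own previous forecast
print(one_step)  # [2.0, 1.5, 0.5]
print(dynamic)   # [2.0, 1.0, 0.5]
```

The two diverge as soon as a forecast misses, which is why dynamic forecasts usually accumulate more error.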
```
pred_dynamic = results.get_prediction(start=0,end = 49, dynamic=True, full_results=True)
pred_dynamic_ci = pred_dynamic.conf_int()
pred_dynamic_ci.head()
```
Once again, we plot the real and forecasted values of the average daily rain to assess how well we did:
```
ax = one_step_df.rain_actual.plot(label='observed', figsize=(15, 11))
pred_dynamic.predicted_mean.plot(label='Dynamic Forecast', ax=ax)
ax.fill_between(pred_dynamic_ci.index,
pred_dynamic_ci.iloc[:, 0],
pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Rainfall')
plt.ylim([0,2.0])
plt.legend()
plt.show()
```
> In this case, the model seems to predict the rain inaccurately, with major fluctuations between the true value and the predicted value.
### Forecast Diagnostic
```
# Extract the predicted and true values of our time series
y_forecasted = pred_dynamic.predicted_mean[:49]
y_truth = one_step_df.rain_actual
# Compute the root mean squared error
rmse = sqrt(MSE(y_truth, y_forecasted))
print('The Root Mean Squared Error of our forecasts is {}'.format(round(rmse, 2)))
```
The **predicted** values obtained from the dynamic forecasts yield an RMSE of 3.68. This is higher than the one-step-ahead forecast, which is to be expected given that we are relying on less historical data from the time series.
# Conclusion
I described how to implement a seasonal SARIMA model in Python. I made extensive use of the pandas and statsmodels libraries and showed how to run model diagnostics, as well as how to produce forecasts of the Rain.
Recall that in the assumption I made in the section 2.2 Baseline Model, I could even reinforce our assumption and continue our belief that the rainfall today depends on the rainfall yesterday, the rainfall yesterday depends on the day before yesterday, and so on.
It is best to use the history up to the point at which we would like to make **predictions**. This holds especially for weather forecasting, where the rainfall today does not change much from yesterday, and the transition to another season signalled through the rainfall should occur gradually, unless there are disastrous factors such as storms, droughts, etc.
# Seattle Airbnb
My main foci are the listings and calendar datasets, which I use to answer the business questions below.
* Read dataset - read csv files to pandas dataframe.
* Data manipulation - data cleaning and wrangling to prepare quality data for visualization.
* Exploratory data analysis (EDA) - Data visualizations that can answer business questions.
## Section 1 : Business Understanding
Airbnb is a sharing-economy platform where we can find low booking costs. We can analyse the data to answer the following questions:
1. What is the most occupancy by month in 2016?
2. Which month is the most expensive?
3. Which day is the most costly?
4. Do hosts respond to you appropriately?
5. Which features affect the price?
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import calendar
```
## Section 2 : Data Understanding
### Gather data
```
# calendar data
df_calender = pd.read_csv('calendar.csv')
df_calender.head()
# listings data
df_listings = pd.read_csv('listings.csv')
df_listings.head()
# reviews data
df_reviews = pd.read_csv('reviews.csv')
df_reviews.head()
```
#### Calendar data
```
# show number of null values
df_calender.isnull().sum()
# total data in df_calender
len(df_calender)
# check duplicated values in all columns
df_calender[df_calender.duplicated()]
# observ column types in df_calender
df_calender.dtypes
```
## Section 3 : Data Preparation
```
# So we must convert date column type to date
df_calender['date'] = pd.to_datetime(df_calender['date'])
# and convert listing_id type to string
df_calender['listing_id'] = df_calender['listing_id'].astype(str)
d = {'t': 'Available', 'f': 'Not available'}
df_calender["available"] = df_calender["available"].map(d)
# remove the symbols in price
def remove_symbol(price):
    """
    Return price in string format
    without the dollar sign.
    """
    if type(price) is str:
        return price.replace("$", "")
    return price
df_calender['price'] = df_calender['price'].apply(lambda x: remove_symbol(x))
# convert the price type to numeric
df_calender['price'] = pd.to_numeric(df_calender['price'], errors='coerce')
# replace na value with mean price in each listing_id group, which is usually the same value
df_calender['price'] = df_calender['price'].fillna(df_calender.groupby('listing_id')['price'].transform('mean'))
# drop rows with missing price, because the dropped rows do not contain important data in other columns
df_calender = df_calender.dropna(subset=['price'])
# recheck null value in price column
df_calender[df_calender['price'].isnull()]
# preview samples in df_calender
df_calender.sample(5)
# create new column contains year value which extracted from datetime
df_calender['year'] = df_calender.date.dt.year
df_calender['year'].value_counts()
## Get data only in 2016 and drop no price
df_calender = df_calender[df_calender['year'] == 2016]
df_calender = df_calender[df_calender['price'] != 0]
```
#### Reviews data
```
# sneak peek dataframe
df_reviews.head()
df_reviews.dtypes
df_reviews['listing_id'] = df_reviews['listing_id'].astype(str)
df_reviews['id'] = df_reviews['id'].astype(str)
# Check null
df_reviews.isnull().sum()
# drop review_id and reveiew_name
df_reviews = df_reviews.drop(['reviewer_id', 'reviewer_name'], axis=1)
# remove rows with missing comments
df_reviews = df_reviews.dropna(subset=['comments'])
```
#### Listing data
```
# sneak peek dataframe
df_listings.head()
# Select only interest columns
df_listings = df_listings[['id','host_response_time','host_response_rate','accommodates','bathrooms','bedrooms','beds','price','weekly_price','monthly_price'
,'cleaning_fee','extra_people','minimum_nights','review_scores_rating','instant_bookable']]
# convert id type to string
df_listings['id'] = df_listings['id'].astype(str)
# replace missing values with the mode (the most common value); the number of beds must be an integer
df_listings['beds'] = df_listings['beds'].fillna(df_listings['beds'].mode()[0])
# remove symbol in price
df_listings['price'] = df_listings['price'].apply(lambda x: remove_symbol(x))
# convert the price type to numeric
df_listings['price'] = pd.to_numeric(df_listings['price'], errors='coerce')
# check null value in price column
df_listings[df_listings['price'].isnull()]
df_listings = df_listings.dropna(subset=['price'])
# convert percentage strings to numeric in the host_response_rate column
def percent_to_numberic(x):
    """
    Return percent as a floating-point number
    without the percent sign.
    """
    if isinstance(x, str):
        return float(x.strip('%'))/100
    return 1
df_listings['host_response_rate'] = df_listings['host_response_rate'].apply(lambda x: percent_to_numberic(x))
```
## Section 4 : Evaluate the Results
### Question 1 : What is the most occupancy by month in 2016 ?
Visualize to find a pattern of the number of available occupation based on month
```
# Set charts size
sns.set(rc={'figure.figsize':(11.7,8.27)})
month = df_calender.date.dt.strftime('%b')
ax = sns.countplot(data = df_calender, x = month, hue = 'available');
ax.set(xlabel='Month', ylabel='Total rooms')
plt.title('Occupation in 2016');
```
January had the highest number of unavailable rooms, while December had the most rooms available for booking.
#### Analyzing Occupancy Percentage by using the number of available and unavailable.
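The "percentage" computed below is the difference between the available and unavailable counts, taken as a share of the total. A toy check with made-up counts:

```python
available, not_available = 700, 300
total = available + not_available
percent = round((available - not_available) / total * 100, 2)
print(percent)  # 40.0
```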
```
df_group_month = df_calender.groupby(by=[df_calender.date.dt.month, "available"]).agg({'available': 'count'})
df_group_month = df_group_month.rename(columns={'available':'count'})
df_group_month = df_group_month.reset_index()
df_group_month = df_group_month.rename(columns={'date':'month'})
# Create new dataframe
df_available = pd.DataFrame(columns=['month', 'percent'])
for month in df_group_month['month'].unique():
    sum_total = df_group_month.loc[df_group_month['month'] == month, 'count'].sum()
    available_total = df_group_month.loc[(df_group_month['month'] == month) &
                                         (df_group_month['available'] == 'Available'), 'count'].sum()
    not_available_total = df_group_month.loc[(df_group_month['month'] == month) &
                                             (df_group_month['available'] == 'Not available'), 'count'].sum()
    available_percent = round(((available_total-not_available_total)/sum_total)*100, 2)
    df_available = df_available.append({'month': month, 'percent': available_percent}, ignore_index=True)
# convert number of month into month name
df_available['month'] = df_available['month'].apply(lambda x: calendar.month_abbr[int(x)])
```
Visualizing percentages by month from the cell above.
```
ax = sns.barplot(data = df_available, x = 'month', y='percent');
ax.set(xlabel='Month', ylabel='Percentage')
plt.title('Available Percentage by month in 2016');
```
### Question 2 : Which month is the most expensive ?
Analyzing average price by month can show the cost variance.
```
# Grouping month with price mean
month = df_calender.date.dt.month
monthly_avg=df_calender.groupby(month).price.mean()
monthly_avg = monthly_avg.reset_index()
# map number of month with month name
d = {
1: 'Jan',
2: 'Feb',
3: 'Mar',
4: 'Apr',
5: 'May',
6: 'Jun',
7: 'Jul',
8: 'Aug',
9: 'Sep',
10: 'Oct',
11: 'Nov',
12: 'Dec'
}
monthly_avg = monthly_avg.rename(columns={"date": "month"})
monthly_avg["month"] = monthly_avg["month"].map(d)
```
Visualizing trends from summarizing data above.
```
ax = sns.lineplot(x="month", y="price", data=monthly_avg)
ax.set(xlabel='Month', ylabel='Average price in USD')
```
This chart shows that the average price peaks between June and August.
### Question 3 : Which day is the most costly?
Analyze price grouped by day of the week to find trends that answer the question.
```
# Find mean of price in each day
day = df_calender.date.dt.dayofweek
day_avg = df_calender.groupby(day).price.mean()
daily_avg = day_avg.reset_index()
# Mapping number of days to day name
d = {
0: 'Monday',
1: 'Tuesday',
2: 'Wednesday',
3: 'Thursday',
4: 'Friday',
5: 'Saturday',
6: 'Sunday'
}
daily_avg["date"] = daily_avg["date"].map(d)
```
Visualize average price by day in 2016
```
ax = sns.lineplot(x="date", y="price", data=daily_avg)
ax.set(xlabel='Day', ylabel='Average price in USD')
```
The chart shows higher booking costs on Friday and Saturday.
### Question 4 : Do hosts respond to you appropriately?
Visualize the response-time categories
```
cat_order = df_listings['host_response_time'].value_counts().index
sns.countplot(data = df_listings, x = 'host_response_time', order=cat_order)
plt.title('Host response time');
```
This chart shows that most hosts respond within an hour, and renters rarely have to wait more than a day for an answer.
**Visualize host response rate compared with response time**
```
response_rate_percent = df_listings['host_response_rate']*100
ax = sns.boxplot(x="host_response_time", y=response_rate_percent,
data=df_listings)
ax.set(xlabel='Response time', ylabel='Response rate')
```
Mostly, hosts respond to travellers at a high rate, though there is still a small chance of getting no response.
**Visualize response time against price to look for a pattern.**
```
ax = sns.boxplot(x="host_response_time", y="price",
data=df_listings)
ax.set(xlabel='Response time', ylabel='Price')
sns.despine(offset=10, trim=True)
```
There seem to be only small differences in price across the response-time categories, with no significant effect.
#### Question 5 : Which features affect the price?
Visualize the correlation between features
```
corr = df_listings.corr()
kot = corr[corr.apply(lambda x: abs(x)>=0)]
sns.heatmap(kot, annot = True, fmt = '.2f', center = 0, cmap="Blues")
plt.title('Features Correlation');
plt.xticks(rotation = 15);
```
This heatmap shows the relationships between features; the dark blue cells indicate the strongest ties.
```
#Relevant video:
#http://www.youtube.com/watch?v=VIt2z6zJrMs&t=1m52s
#My output from code:
#https://www.youtube.com/watch?v=E_yE2Q0ArpM
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
import scipy.integrate as ing
d2r = np.pi/180. #deg to radian
k2f = 1.68781 #knots to ft per sec
kilo=10.**3
###--Import and basic data manipulation--####
#-Load dataset 1-#
a1 = np.loadtxt('./dumb2.dat') #My data: uses Youtube timestamp
#Source:
#https://drive.google.com/file/d/0ByW1n-WOmDAEeDlPSGc1UHpDcWs/view?usp=sharing
a1=a1.T
#index map:: 0:time[s], 1:aspd[kt], 2:alt[kft], 3:Mach#?, 4:G, 5:Ptich[deg]
a1[2]=a1[2]*kilo #convert from kiloft to ft
#Break velocity into components and convert to f/s
xv1=np.multiply(a1[1],np.cos(d2r*a1[5])*k2f)
yv1=np.multiply(a1[1],np.sin(d2r*a1[5])*k2f)
#-Load dataset 2-#
a2 = np.loadtxt('./dumb5.dat') #from reddit.com/user/what_are_you_saying
#Original Source:
#https://drive.google.com/file/d/0B0DNIvRXrB1jeWdBYkVkZ1ByOXM/view
#reformatted source for correct pitch data:
#https://drive.google.com/file/d/0ByW1n-WOmDAEY2ZOZFZQQXdnZ0k/view?usp=sharing
a2=a2.T
#index map:: 0:time[s], 1:aspd[kt], 2:alt[ft], 3:Pitch[deg]
xv2=np.multiply(a2[1],np.cos(d2r*a2[3])*k2f)
yv2=np.multiply(a2[1],np.sin(d2r*a2[3])*k2f)
###--Numerically integrate to get position, using 2 methods
# Simpson & CumulativeTrapezoid methods, (whoever
# named that function didn't think about phrasing)--###
#dataset 1
n = len(a1[0])
xp11 = np.empty(n)
xp12 = np.empty(n)
yp11 = np.empty(n)
yp12 = np.empty(n)
yp12[0]=yp11[0]=xp11[0]=xp12[0]=0.
for i in range(1,n):
    xp11[i]=ing.simps(xv1[0:i],a1[0,0:i])
    yp11[i]=ing.simps(yv1[0:i],a1[0,0:i])
xp12=ing.cumtrapz(xv1,a1[0],initial=0.)
yp12=ing.cumtrapz(yv1,a1[0],initial=0.)
#dataset 2
n = len(a2[0])
xp21 = np.empty(n)
xp22 = np.empty(n)
yp21 = np.empty(n)
yp22 = np.empty(n)
yp22[0]=yp21[0]=xp21[0]=xp22[0]=0.
for i in range(1,n):
    xp21[i]=ing.simps(xv2[0:i],a2[0,0:i])
    yp21[i]=ing.simps(yv2[0:i],a2[0,0:i])
xp22=ing.cumtrapz(xv2,a2[0],initial=0.)
yp22=ing.cumtrapz(yv2,a2[0],initial=0.)
###--Create idealized circular trajectory--###
#Circle trajectory parameters:
xcirc=3800 #xcenter
ycirc=13550 #ycenter
radc=3000 #radius circle
xstart=0. #start x val
xend=10000. #ending x val
ystart=ycirc - radc
nc=60 #data points in circle, only make multiple of 10!
#get circle points starting at bottom
def circlepts(xc,yc,r,frac):
    yret=r*np.sin((frac-0.25)*2*np.pi)+yc
    xret=r*np.cos((frac-0.25)*2*np.pi)+xc
    return (xret, yret)
xpts = np.empty(nc)
ypts = np.empty(nc)
for i in range(0,nc):
    xpts[i], ypts[i] = circlepts(xcirc,ycirc,radc,float(i)/float(nc))
xlin1= np.empty(nc//10) #integer division so array sizes are ints (Python 3)
ylin1= np.empty(nc//10)
xlin2= np.empty(nc//10)
ylin2= np.empty(nc//10)
delx=float(xcirc-xstart)/float(nc//10)
delx2=float(xend-xcirc)/float(nc//10)
for i in range(0,nc//10):
    xlin1[i]=xstart + i*delx
    ylin1[i]=ystart
    xlin2[i]=xcirc + (i+1)*delx2
    ylin2[i]=ystart
xtraj=np.concatenate((xlin1,xpts,xlin2))
ytraj=np.concatenate((ylin1,ypts,ylin2))
###--Comparison plots of data available and analysis methods--###
plt.plot(xp11,a1[2],label='1) Simps vs Alt')
plt.plot(xp12,a1[2],label='1) CumTrap vs Alt')
plt.plot(xp12,yp12+a1[2,0],label='1) CumTrap vs CumTrap')
plt.plot(xp21,a2[2],label='2) Simps vs Alt')
plt.plot(xp22,a2[2],label='2) CumTrap vs Alt')
plt.plot(xp22,yp22+a2[2,0],label='2) CumTrap vs CumTrap')
plt.plot(xtraj,ytraj,label='Idealized circular Traj')
plt.axis([0,12000,8000,20000])
plt.axis("equal")
plt.legend()
plt.show()
fig, ax = plt.subplots()
plt.axis("equal") #keeps plot on a 1:1 x:y scale
plt.axis([-1000,12000,8000,20000]) #Plot ranges
xjet=xp11
yjet=a1[2]
line, = ax.plot(xjet[0:3],yjet[0:3],label='Actual Trajectory')
line2, = ax.plot(xtraj[0],ytraj[0],label='Circular Trajectory')
plt.legend() #Comment to remove legend
xlen=len(xjet)
clen=len(xtraj)
def animate(i):
    if(i < 2*xlen and (i%2)==0): #Plot the actual trajectory first
        line.set_xdata(xjet[0:i//2]) #Only go every 2 to be slower
        line.set_ydata(yjet[0:i//2])
    elif(i< (2*xlen+clen) and i > 2*xlen): #Plot the circle trajectory second
        line2.set_xdata(xtraj[0:i-2*xlen+1])
        line2.set_ydata(ytraj[0:i-2*xlen+1])
    return (line,line2)
# Init only required for blitting to give a clean slate.
def init():
    line.set_ydata(np.ma.array(yjet, mask=True))
    line.set_xdata(np.ma.array(xjet, mask=True))
    line2.set_ydata(np.ma.array(ytraj, mask=True))
    line2.set_xdata(np.ma.array(xtraj, mask=True))
    return (line, line2)
ani = animation.FuncAnimation(fig, animate, np.arange(4, (2*xlen+clen)), init_func=init,
interval=120, blit=False)
plt.show() #Comment to not show animation
#Save file method
#ani.save(filename='test4.mpeg', writer="ffmpeg", fps=30, dpi=140, codec=None, bitrate=8000, extra_args=None, metadata=None, extra_anim=None, savefig_kwargs=None)
```
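As a sanity check on the integration approach above, the cumulative-trapezoid method can be run on a velocity profile whose position integral is known in closed form (`cumulative_trapezoid` is the current SciPy name for `cumtrapz`):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, simpson

t = np.linspace(0.0, 10.0, 1001)
v = 2.0 * t  # velocity whose exact position integral is x(t) = t**2

# Running position via cumulative trapezoid; exact for a linear integrand
x = cumulative_trapezoid(v, t, initial=0.0)
print(x[-1])  # ~100 (= 10**2)

# Simpson's rule over the whole record recovers the same final position
print(simpson(v, x=t))
```

The same logic applies to the flight data: any disagreement between the Simpson and trapezoid traces there comes from the coarse, noisy sampling, not from the methods themselves.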
# Pandeia for WFIRST Imaging
How to cite this code:
> Klaus M. Pontoppidan; Timothy E. Pickering; Victoria G. Laidler; Karoline Gilbert; Christopher D. Sontag, et al.
"Pandeia: a multi-mission exposure time calculator for JWST and WFIRST", Proc. SPIE 9910, Observatory Operations: Strategies, Processes, and Systems VI, 991016 (July 15, 2016); doi:10.1117/12.2231768; http://dx.doi.org/10.1117/12.2231768
This is an introductory notebook that provides an easy-to-use interface for making Pandeia ETC calculations. This notebook only supports WFIRST imaging and has simplified some configuration options.
Refer to the documentation links provided within the *Help* menu for general information on the Jupyter/IPython notebook interface and useful keyboard short-cuts. The key things you need to know are that you must use ``Shift-Enter`` to execute a cell and that once a cell is executed, all data defined within it becomes available to all other cells. (You can also click the <i class="fa-step-forward fa"></i> icon in the toolbar to run a cell.)
This first cell sets up the imports and configuration that are required:
```
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import sys
import os
import numpy as np
import matplotlib
from matplotlib import style
style.use('ggplot') # see http://matplotlib.org/users/style_sheets.html
# for info on matplotlib styles
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['image.origin'] = 'lower'
import matplotlib.pyplot as plt
# the first pandeia import is required to run the GUI. the others are provided to
# allow manual running of calculations and loading/saving of inputs or results.
from toolbox.etc.gui import PandeiaWFIRSTCalculator
from pandeia.engine.perform_calculation import perform_calculation
from pandeia.engine.io_utils import read_json, write_json
```
The next cell instantiates and runs the GUI. The inputs for ``Source type`` and ``SED type`` will change dynamically depending on which drop-down entry is selected. For simplicity's sake, only a single source at a time is currently supported.
This source can either be a point source or extended. Extended sources require extra configuration:
- Surface brightness profile
  - Gaussian — $I \propto e^{-r^{2}}$
  - Exponential — $I \propto e^{-r}$
  - de Vaucouleurs — $I \propto e^{-r^{1/4}}$
- Major axis scale length of the surface brightness profile (sigma in the case of Gaussian)
- Ellipticity of the source
- Position angle of the major axis (measured CCW from horizontal)
Source flux can currently only be specified in $F_{\nu}$ units such as ``mJy`` or AB magnitudes at a specific wavelength.
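Since flux is specified in $F_{\nu}$ units, converting between ``mJy`` and AB magnitudes via $m_{AB} = -2.5\log_{10}(f_{\nu} / 3631\,\mathrm{Jy})$ is often handy. A small sketch (these helpers are illustrative, not part of the Pandeia API):

```python
import numpy as np

def mjy_to_abmag(f_mjy):
    """AB magnitude from flux density in mJy: m = -2.5 log10(f / 3631 Jy)."""
    return -2.5 * np.log10(f_mjy * 1e-3 / 3631.0)

def abmag_to_mjy(m_ab):
    """Flux density in mJy from AB magnitude (inverse of the above)."""
    return 3631.0e3 * 10.0 ** (-0.4 * m_ab)

print(mjy_to_abmag(3631.0e3))            # 3631 Jy is 0.0 mag by definition
print(abmag_to_mjy(mjy_to_abmag(0.05)))  # round-trips to 0.05 mJy
```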
There are several options for configuring the spectral energy distribution (SED) of the source:
- Power-law — $F \propto \lambda^{index}$
- Blackbody
- Star — As calculated from the Phoenix models. Each entry shows the spectral type, $T_{eff}$, and $log\ g$ used.
- Extragalactic — Compiled from the Brown et al. (2014) catalog of integrated galaxy spectra
In each case, the specified redshift is applied to the SED.
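Applying a redshift to an SED amounts to stretching the wavelength grid by $(1+z)$; a toy helper makes the operation concrete (purely illustrative, not the engine's internal routine):

```python
import numpy as np

def redshift_wavelengths(wave_rest, z):
    """Observed-frame wavelengths: lambda_obs = (1 + z) * lambda_rest.
    Flux renormalization is handled separately by the ETC's normalization step."""
    return (1.0 + z) * np.asarray(wave_rest, dtype=float)

wave_rest = np.array([0.5, 1.0, 2.0])  # microns
print(redshift_wavelengths(wave_rest, z=1.0))  # [1. 2. 4.]
```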
Currently, the WFIRST wide-field imager (WFI) is the only available instrument. Its configuration parameters are:
- Filter
- Readmode — Currently modeled after JWST's NIRCam and specifies how many reads/skips there are per group
- Subarray — Geometry of the region of the detector being read-out
- Groups — Number of groups per integration
- Integrations — Number of integrations to perform
- Exposures — Number of sets of integrations to perform
The extracted flux and signal-to-noise ratio are calculated via aperture photometry. The source region is circular with the configured radius and the background region is annular with the configured inner and outer radii. The GUI automatically checks these radii to prevent overlap. For example, if you increase aperture radius, the annulus radii will automatically adjust accordingly. The display of these radii on the 2D plots can be toggled by clicking *Overlay*.
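The extraction described above (circular source aperture, annular background) can be sketched on a synthetic image. This toy version is not the Pandeia implementation, just the idea:

```python
import numpy as np

def toy_aperture_photometry(img, cx, cy, r_src, r_in, r_out):
    """Sum counts in a circular aperture and subtract the per-pixel median
    of an annular background region, scaled by the aperture area."""
    yy, xx = np.indices(img.shape)
    rr = np.hypot(xx - cx, yy - cy)
    src = rr < r_src                       # circular source region
    ann = (rr >= r_in) & (rr < r_out)      # annular background region
    bkg_per_pix = np.median(img[ann])
    return img[src].sum() - bkg_per_pix * src.sum()

# Flat background of 2 counts/pixel plus a 50-count point source at the center
img = np.full((51, 51), 2.0)
img[25, 25] += 50.0
print(toy_aperture_photometry(img, cx=25, cy=25, r_src=3, r_in=6, r_out=10))  # ~50
```

The non-overlap check the GUI performs corresponds here to requiring `r_src <= r_in`, so no source counts leak into the background estimate.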
To run the calculation, click *Calculate* and the results will be displayed below. You can select what to display in the plots via the two pull-downs: *1D Plot* and *2D Image*.
```
g = PandeiaWFIRSTCalculator()
g.display()
```
It is possible to extract the full input and output information from the ETC calculation. The ETC input and output are both stored as dictionaries, which can be directly manipulated.
```
g.run_engine()
calculation = g.calculation_input
result = g.calculation_result
```
As an example, we will create a new source and add it to our ETC scene. First, we import a method to create a default source.
```
from pandeia.engine.calc_utils import build_default_source
s = build_default_source()
```
Then, we move the source up by 1 arcsecond, change its flux to ABmag = 21 and make it extended.
```
s['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
s['spectrum']['normalization']['norm_flux'] = 21.
s['shape']['geometry'] = 'sersic'
s['shape']['sersic_index'] = 1. # exponential disk
s['shape']['major'] = 0.4 # major axis in arcseconds
s['shape']['minor'] = 0.1 # minor axis in arcseconds
s['position']['y_offset'] = 1.0 # offset in arcseconds
s['position']['orientation'] = 23. # Orientation relative to horizontal in degrees
```
A scene is just a list of sources, so we append the new source we just made.
```
calculation['scene'].append(s)
```
And make a new calculation.
```
r = perform_calculation(calculation)
```
If we add the result of the calculation to the GUI, we can see everything plotted again.
```
g.calculation_result = r
plt.imshow(g.calculation_result['2d']['detector'])
g.calculation_input
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 3: Combining determinism and stochasticity
**Week 2, Day 2: Linear Systems**
**By Neuromatch Academy**
**Content Creators**: Bing Wen Brunton, Alice Schwarze, Biraj Pandey
**Content Reviewers**: Norma Kuhn, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
*Estimated timing of tutorial: 45 minutes*
Time-dependent processes rule the world.
Now that we've spent some time familiarizing ourselves with the behavior of such systems when their trajectories are (1) entirely predictable and deterministic, or (2) governed by random processes, it's time to consider that neither is sufficient to describe neuroscience. Instead, we are often faced with processes for which we know some dynamics, but there is some random aspect as well. We call these **dynamical systems with stochasticity**.
This tutorial will build on our knowledge and gain some intuition for how deterministic and stochastic processes can both be a part of a dynamical system by:
* Simulating random walks
* Investigating the mean and variance of a Ornstein-Uhlenbeck (OU) process
* Quantifying the OU process's behavior at equilibrium.
```
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/snv4m/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
# drift-diffusion model
# returns t, x
def plot_random_walk_sims(sims, nsims=10):
  """Helper for exercise 3A"""
  fig = plt.figure()
  plt.plot(sims[:nsims, :].T)
  plt.xlabel('time')
  plt.ylabel('position x')
  plt.show()
def plot_mean_var_by_timestep(mu, var):
  """Helper function for exercise 3A.2"""
  fig, (ah1, ah2) = plt.subplots(2)

  # plot mean of distribution as a function of time
  ah1.plot(mu)
  ah1.set(ylabel='mean')
  ah1.set_ylim([-5, 5])

  # plot variance of distribution as a function of time
  ah2.plot(var)
  ah2.set(xlabel='time')
  ah2.set(ylabel='variance')
  plt.show()
def plot_ddm(t, x, xinfty, lam, x0):
  fig = plt.figure()
  plt.plot(t, xinfty * (1 - lam**t) + x0 * lam**t, 'r')
  plt.plot(t, x, 'k.')  # simulated data pts
  plt.xlabel('t')
  plt.ylabel('x')
  plt.legend(['deterministic solution', 'simulation'])
  plt.show()
def var_comparison_plot(empirical, analytical):
  fig = plt.figure()
  plt.plot(empirical, analytical, '.', markersize=15)
  plt.xlabel('empirical equilibrium variance')
  plt.ylabel('analytic equilibrium variance')
  plt.plot(np.arange(8), np.arange(8), 'k', label='45 deg line')
  plt.legend()
  plt.grid(True)
  plt.show()
def plot_dynamics(x, t, lam, xinfty=0):
  """ Plot the dynamics """
  fig = plt.figure()
  plt.title('$\lambda=%0.1f$' % lam, fontsize=16)
  x0 = x[0]
  plt.plot(t, xinfty + (x0 - xinfty) * lam**t, 'r', label='analytic solution')
  plt.plot(t, x, 'k.', label='simulation')  # simulated data pts
  plt.ylim(0, x0+1)
  plt.xlabel('t')
  plt.ylabel('x')
  plt.legend()
  plt.show()
```
---
# Section 1: Random Walks
```
# @title Video 1: E. coli and Random Walks
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)
  video = BiliVideo(id="BV1LC4y1h7gD", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)
out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="VHwTBCQJjfw", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
To begin, let's first take a gander at how life sometimes wanders around aimlessly. One of the simplest and best-studied living systems that has some interesting behaviors is the _E. coli_ bacterium, which is capable of navigating odor gradients on a substrate to seek a food source. Larger life (including flies, dogs, and blindfolded humans) sometimes use the same strategies to guide their decisions.
Here, we will consider what the _E. coli_ does in the absence of food odors. What's the best strategy when one does not know where to head? Why, flail around randomly, of course!
The **random walk** is exactly that --- at every time step, use a random process like flipping a coin to change one's heading accordingly. Note that this process is closely related to _Brownian motion_, so you may sometimes hear that terminology used as well.
Let's start with a **one-dimensional random walk**. A bacterium starts at $x=0$. At every time step, it flips a coin (a very small, microscopic coin of protein mintage), then heads left $\Delta x = -1$ or right $\Delta x = +1$ with equal probability. For instance, if at time step $1$ the result of the coin flip is to head right, then its position at that time step becomes $x_1 = x_0 + \Delta x = 1.$ Continuing in this way, its position at time step $k+1$ is given by
$$x_{k+1} = x_k + \Delta x $$
We will simulate this process below and plot the position of the bacterium as a function of the time step.
```
# @markdown Execute to simulate a random walk
# parameters of simulation
T = 100
t = np.arange(T)
x = np.zeros_like(t)
np.random.seed(2020) # set random seed
# initial position
x[0] = 0
# step forward in time
for k in range(len(t)-1):
  # choose randomly between -1 and 1 (coin flip)
  this_step = np.random.choice([-1,1])
  # make the step
  x[k+1] = x[k] + this_step
# plot this trajectory
fig = plt.figure()
plt.step(t, x)
plt.xlabel('time')
plt.ylabel('position x');
```
## Coding Exercise 1A: Random walk simulation
*Referred to in video as exercise 3A*
In the previous plot, we assumed that the bacterium takes a step of size $1$ at every point in time. Let's let it take steps of different sizes!
We will code a random walk where the steps have a standard normal distribution (with mean $\mu$ and standard deviation $\sigma$). Instead of running one trajectory at a time, we will write our code so that we can simulate a large number of trajectories efficiently. We will combine this all into a function ``random_walk_simulator`` that generates $N$ random walks each with $T$ time points efficiently.
We will plot 10 random walks for 10000 time steps each.
```
def random_walk_simulator(N, T, mu=0, sigma=1):
  '''Simulate N random walks for T time points. At each time point, the step
  is drawn from a Gaussian distribution with mean mu and standard deviation
  sigma.

  Args:
    N (integer) : Number of random walks
    T (integer) : Duration of simulation in time steps
    mu (float) : mean of step distribution
    sigma (float) : standard deviation of step distribution

  Returns:
    (numpy array) : NxT array in which each row corresponds to trajectory
  '''
  ###############################################################################
  ## TODO: Code the simulated random steps to take
  ## Hints: you can generate all the random steps in one go in an N x T matrix
  raise NotImplementedError('Complete random_walk_simulator_function')
  ###############################################################################

  # generate all the random steps for all steps in all simulations in one go
  # produces a N x T array
  steps = np.random.normal(..., ..., size=(..., ...))

  # compute the cumulative sum of all the steps over the time axis
  sim = np.cumsum(steps, axis=1)

  return sim
np.random.seed(2020) # set random seed
# simulate 1000 random walks for 10000 time steps
sim = random_walk_simulator(1000, 10000, mu=0, sigma=1)
# take a peek at the first 10 simulations
plot_random_walk_sims(sim, nsims=10)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_4265c9d0.py)
*Example output:*
<img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/static/W2D2_Tutorial3_Solution_4265c9d0_0.png>
We see that the trajectories all look a little different from each other. But there are some general observations one can make: at the beginning almost all trajectories are very close to $x=0$, which is where our bacterium started. As time progresses, some trajectories move further and further away from the starting point. However, a lot of trajectories stay close to the starting point of $x=0$.
Now let's take a look in the next cell at the distribution of bacteria positions at different points in time, analyzing all the trajectories we just generated above.
```
# @markdown Execute to visualize distribution of bacteria positions
fig = plt.figure()
# look at the distribution of positions at different times
for i, t in enumerate([1000,2500,10000]):
  # get mean and standard deviation of distribution at time t
  mu = sim[:, t-1].mean()
  sig2 = sim[:, t-1].std()
  # make a plot label
  mytitle = '$t=${time:d} ($\mu=${mu:.2f}, $\sigma=${var:.2f})'
  # plot histogram
  plt.hist(sim[:,t-1],
           color=['blue','orange','black'][i],
           # make sure the histograms have the same bins!
           bins=np.arange(-300,300,20),
           # make histograms a little see-through
           alpha=0.6,
           # draw second histogram behind the first one
           zorder=3-i,
           label=mytitle.format(time=t, mu=mu, var=sig2))
plt.xlabel('position x')
# plot range
plt.xlim([-500, 250])
# add legend
plt.legend(loc=2)
# add title
plt.title(r'Distribution of trajectory positions at time $t$')
```
At the beginning of the simulation, the distribution of positions is sharply peaked about $0$. As time progresses, the distribution becomes wider but remains centered near $0$. In other words, the mean of the distribution is independent of time, but the variance and standard deviation of the distribution scale with time. Such a process is called a **diffusive process**.
## Coding Exercise 1B: Random walk mean & variance
Compute and then plot the mean and variance of our bacterium's random walk as a function of time.
```
# Simulate random walks
np.random.seed(2020) # set random seed
sim = random_walk_simulator(5000, 1000, mu=0, sigma=1)
##############################################################################
# TODO: Insert your code here to compute the mean and variance of trajectory positions
# at every time point:
raise NotImplementedError("Student exercise: need to compute mean and variance")
##############################################################################
# Compute mean
mu = ...
# Compute variance
var = ...
# Visualize
plot_mean_var_by_timestep(mu, var)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_796a6346.py)
*Example output:*
<img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/static/W2D2_Tutorial3_Solution_796a6346_0.png>
The expected value of $x$ stays close to 0, even for random walks of very long time. Cool!
The variance, on the other hand, clearly increases with time. In fact, the variance seems to increase linearly with time!
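This linear growth follows from the steps being independent: after $t$ steps each with variance $\sigma^2$, the position has variance $t\sigma^2$. A quick numerical check (using an independent random generator rather than the tutorial's seeded simulator):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20000, 1000
walks = np.cumsum(rng.normal(0.0, 1.0, size=(N, T)), axis=1)

# For unit-variance steps, Var[x_t] should be close to t
var_500 = walks[:, 499].var()    # position after 500 steps
var_1000 = walks[:, 999].var()   # position after 1000 steps
print(var_500, var_1000)         # roughly 500 and 1000
```

Doubling the number of steps roughly doubles the empirical variance, exactly the linear scaling seen in the plot.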
## Interactive Demo 1: Influence of Parameter Choice
How do the parameters $\mu$ and $\sigma$ of the Gaussian distribution from which we choose the steps affect the mean and variance of the bacterium's random walk?
```
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_gaussian(mean=(-0.5, 0.5, .02), std=(.5, 10, .5)):
  sim = random_walk_simulator(5000, 1000, mu=mean, sigma=std)

  # compute the mean and variance of trajectory positions at every time point
  mu = np.mean(sim, axis=0)
  var = np.var(sim, axis=0)

  # make a figure
  fig, (ah1, ah2) = plt.subplots(2)

  # plot mean of distribution as a function of time
  ah1.plot(mu)
  ah1.set(ylabel='mean')

  # plot variance of distribution as a function of time
  ah2.plot(var)
  ah2.set(xlabel='time')
  ah2.set(ylabel='variance')
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_55aa7188.py)
---
# Section 2: The Ornstein-Uhlenbeck (OU) process
*Estimated timing to here from start of tutorial: 14 min*
```
# @title Video 2: Combining Deterministic & Stochastic Processes
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)
  video = BiliVideo(id="BV1o5411Y7N2", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)
out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="pDNfs5p38fI", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
The random walk process we just explored is diffusive, and the distribution of possible trajectories _spreads_, taking on increasing variance with time. Even so, at least in one dimension, the mean remains close to the initial value (in the example above, 0).
Our goal is now to build on this model to construct a **drift-diffusion** model (DDM). The DDM is a popular model for memory, which, as we all know, is often an exercise in hanging on to a value imperfectly. Decision-making and memory will be the topic for tomorrow, so here we build the mathematical foundations and develop some intuition for how such systems behave!
To build such a model, let's combine the random walk model with the first differential equations we explored in Tutorial 1 earlier. Although those models had been written in continuous time as $\dot{x} = a x$, here let's consider the discrete version of the same system and write:
$x_{k+1} = \lambda x_k$,
whose solution can be written as
$x_k = x_0 \lambda^k$,
where $x_0$ is the value of $x$ at time $t=0$.
Now, let's simulate and plot the solution of the discrete version of our first differential equation from Tutorial 1 below. **Run the code below.**
```
# parameters
lam = 0.9
T = 100 # total Time duration in steps
x0 = 4. # initial condition of x at time 0
# initialize variables
t = np.arange(0, T, 1.)
x = np.zeros_like(t)
x[0] = x0
# Step through in time
for k in range(len(t)-1):
  x[k+1] = lam * x[k]
# plot x as it evolves in time
plot_dynamics(x, t, lam)
```
Notice that this process decays towards position $x=0$. We can make it decay towards any position by adding another parameter $x_\infty$. The rate of decay is proportional to the difference between $x$ and $x_\infty$. Our new system is
$x_{k+1} = x_\infty + \lambda(x_k - x_{\infty})$
We have to modify our analytic solution slightly to take this into account:
$x_k = x_\infty(1 - \lambda^k) + x_0 \lambda^k$.
Let's simulate and plot the dynamics of this process below. We should see that it starts at $x_0$ and decays towards $x_{\infty}.$
```
# parameters
lam = 0.9 # decay rate
T = 100 # total Time duration in steps
x0 = 4. # initial condition of x at time 0
xinfty = 1. # x drifts towards this value in long time
# initialize variables
t = np.arange(0, T, 1.)
x = np.zeros_like(t)
x[0] = x0
# Step through in time
for k in range(len(t)-1):
  x[k+1] = xinfty + lam * (x[k] - xinfty)
# plot x as it evolves in time
plot_dynamics(x, t, lam, xinfty)
```
Now we are ready to take this basic, deterministic difference equation and add a diffusion process on top of it! Fun times in Python land.
As a point of terminology: this type of process is commonly known as a **drift-diffusion model** or **Ornstein-Uhlenbeck (OU) process**. The model is a combination of a _drift_ term toward $x_{\infty}$ and a _diffusion_ term that walks randomly. You may sometimes see them written as continuous stochastic differential equations, but here we are doing the discrete version to maintain continuity in the tutorial. The discrete version of our OU process has the following form:
$x_{k+1} = x_\infty + \lambda(x_k - x_{\infty}) + \sigma \eta$
where $\eta$ is sampled from a standard normal distribution ($\mu=0, \sigma=1$).
## Coding Exercise 2: Drift-diffusion model
Modify the code below so that each step through time has a _deterministic_ part (_hint_: exactly like the code above) plus a _random, diffusive_ part that is drawn from from a normal distribution with standard deviation of $\sigma$ (sig in the code). It will plot the dynamics of this process.
```
def simulate_ddm(lam, sig, x0, xinfty, T):
  """
  Simulate the drift-diffusion model with given parameters and initial condition.

  Args:
    lam (scalar): decay rate
    sig (scalar): standard deviation of normal distribution
    x0 (scalar): initial condition (x at time 0)
    xinfty (scalar): drift towards convergence in the limit
    T (scalar): total duration of the simulation (in steps)

  Returns:
    ndarray, ndarray: `x` for all simulation steps and the time `t` at each step
  """
  # initialize variables
  t = np.arange(0, T, 1.)
  x = np.zeros_like(t)
  x[0] = x0

  # Step through in time
  for k in range(len(t)-1):
    ##############################################################################
    ## TODO: Insert your code below then remove
    raise NotImplementedError("Student exercise: need to implement simulation")
    ##############################################################################

    # update x at time k+1 with a deterministic and a stochastic component
    # hint: the deterministic component will be like above, and
    # the stochastic component is drawn from a scaled normal distribution
    x[k+1] = ...

  return t, x
lam = 0.9 # decay rate
sig = 0.1 # standard deviation of diffusive process
T = 500 # total Time duration in steps
x0 = 4. # initial condition of x at time 0
xinfty = 1. # x drifts towards this value in long time
# Plot x as it evolves in time
np.random.seed(2020)
t, x = simulate_ddm(lam, sig, x0, xinfty, T)
plot_ddm(t, x, xinfty, lam, x0)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_c67c12d7.py)
*Example output:*
<img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/static/W2D2_Tutorial3_Solution_c67c12d7_0.png>
## Think! 2: Drift-Diffusion Simulation Observations
Describe the behavior of your simulation by making some observations. How does it compare to the deterministic solution? How does it behave at the beginning of the simulation? At the end?
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_301f6f83.py)
---
# Section 3: Variance of the OU process
*Estimated timing to here from start of tutorial: 35 min*
```
# @title Video 3: Balance of Variances
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)
  video = BiliVideo(id="BV15f4y1R7PU", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)
out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="49A-3kftau0", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
As we can see, the **mean** of the process follows the solution to the deterministic part of the governing equation. So far, so good!
But what about the **variance**?
Unlike the random walk, because there's a decay process that "pulls" $x$ back towards $x_\infty$, the variance does not grow without bound with large $t$. Instead, when it gets far from $x_\infty$, the position of $x$ is restored, until an equilibrium is reached.
The equilibrium variance for our drift-diffusion system is
Var $= \frac{\sigma^2}{1 - \lambda^2}$.
Notice that the value of this equilibrium variance depends on $\lambda$ and $\sigma$. It does not depend on $x_0$ and $x_\infty$.
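The equilibrium value can be derived by taking the variance of both sides of the update rule. Since $\eta$ is independent of $x_k$ and has unit variance,

$$\text{Var}(x_{k+1}) = \lambda^2 \text{Var}(x_k) + \sigma^2.$$

Setting $\text{Var}(x_{k+1}) = \text{Var}(x_k) = \text{Var}_{eq}$ at equilibrium gives $\text{Var}_{eq}(1 - \lambda^2) = \sigma^2$, i.e. $\text{Var}_{eq} = \frac{\sigma^2}{1-\lambda^2}$.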
To convince ourselves that things are behaving sensibly, let's compare the empirical variances of the equilibrium solution to the OU equations with the expected formula.
## Coding Exercise 3: Computing the variances empirically
Write code to compute the analytical variance, Var $= \frac{\sigma^2}{1 - \lambda^2}$, and compare it against the empirical variances (already computed for you with the helper function). The two should be about equal, with points lying close to the 45-degree ($y=x$) line.
```
def ddm(T, x0, xinfty, lam, sig):
  t = np.arange(0, T, 1.)
  x = np.zeros_like(t)
  x[0] = x0
  for k in range(len(t)-1):
    x[k+1] = xinfty + lam * (x[k] - xinfty) + sig * np.random.standard_normal(size=1)
  return t, x

# computes equilibrium variance of ddm
# returns variance
def ddm_eq_var(T, x0, xinfty, lam, sig):
  t, x = ddm(T, x0, xinfty, lam, sig)
  # returns variance of the second half of the simulation
  # this is a hack: assumes system has settled by second half
  return x[-round(T/2):].var()
np.random.seed(2020) # set random seed
# sweep through values for lambda
lambdas = np.arange(0.05, 0.95, 0.01)
empirical_variances = np.zeros_like(lambdas)
analytical_variances = np.zeros_like(lambdas)
sig = 0.87
# compute empirical equilibrium variance
for i, lam in enumerate(lambdas):
empirical_variances[i] = ddm_eq_var(5000, x0, xinfty, lambdas[i], sig)
##############################################################################
## Insert your code below to calculate the analytical variances
raise NotImplementedError("Student exercise: need to compute variances")
##############################################################################
# Hint: you can also do this in one line outside the loop!
analytical_variances = ...
# Plot the empirical variance vs analytical variance
var_comparison_plot(empirical_variances, analytical_variances)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_b972f241.py)
*Example output:*
<img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/static/W2D2_Tutorial3_Solution_b972f241_0.png>
---
# Summary
*Estimated timing of tutorial: 45 minutes*
In this tutorial, we have built and observed OU systems, which have both deterministic and stochastic parts. We see that they behave, on average, similarly to our expectations from analyzing deterministic dynamical systems.
Importantly, **the interplay between the deterministic and stochastic parts** serves to _balance_ the tendency of purely stochastic processes (like the random walk) to increase in variance over time. This behavior is one of the properties of OU systems that makes them popular choices for modeling cognitive functions, including short-term memory and decision-making.
# String
## `print()`
The `print()` function prints all of its arguments as *strings*, separated by spaces and followed by a *line break*:
```
name = "Budi"
print("Hello World")
print("Hello", 'World')
print("Hello", name)
```
> Note: The print function differs between Python 2.7 and Python 3. In Python 2.7, we do not need parentheses around the arguments (e.g. `print "Hello World"`).
```
print("Hello", "World")
```
The `print()` function has optional arguments to control where and how the given statement is printed. Among them are:
- `sep`, the separator between items (the default is a space)
- `end`, the character added at the end of the statement (the default is `\n`, the newline character)
```
print("Hello", "World", sep="...", end="!!")
print("Good", "Morning", "Everyone", sep="...", end=":)")
```
## String formatting
There are many methods for formatting and manipulating strings. Some of them are demonstrated here.
*String concatenation* is the joining of two strings. Note that when we concatenate, no space is inserted between the two strings.
```
string1 = 'World'
string2 = '!'
print('Hello' + string1 + string2)
```
The `%` operator formats a string by inserting the values that follow it. The string must contain placeholders identifying where each value should be inserted. Commonly used placeholders are:
- `%s`: string
- `%d`: integer
- `%f`: float
- `%o`: octal
- `%x`: hexadecimal
- `%e`: exponential
```
print("Hello %s" % string1)
print("Actual Number = %d" %18)
print("Float of the number = %f" %18)
print("Octal equivalent of the number = %o" %18)
print("Hexadecimal equivalent of the number = %x" %18)
print("Exponential equivalent of the number = %e" %18)
```
When referring to more than one variable, we must use parentheses. The values are inserted in the order they appear inside the parentheses.
```
print("Hello %s%s The meaning of life is %d" % (string1, string2, 42))
```
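As an aside beyond the `%` operator, Python 3.6+ also offers `str.format()` and f-strings for the same job (a brief sketch, not part of the original material):

```python
string1, string2 = 'World', '!'

print("Hello %s%s The meaning of life is %d" % (string1, string2, 42))      # %-formatting
print("Hello {}{} The meaning of life is {}".format(string1, string2, 42))  # str.format()
print(f"Hello {string1}{string2} The meaning of life is {42}")              # f-string
# all three lines print: Hello World! The meaning of life is 42
```
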
## Other string methods
Multiplying a string by an integer returns a string consisting of the original string repeated that many times.
```
print("Hello World! " * 5)
```
Strings can be transformed using many functions:
```
s = "hello wOrld"
print(s.capitalize()) # lowercases the whole string, except the first letter, which is capitalized
print(s.upper()) # converts the whole string to uppercase
print(s.lower()) # converts the whole string to lowercase
print('|%s|'% " lots of space ".strip()) # removes whitespace at the start and end of the string
print("Hello World".replace("World", "Class")) # replaces the word "World" with "Class"
```
Python also provides many functions we can use to check properties of a string.
```
s = "Hello World"
print("The length of '%s' is" %s, len(s), "characters") # len() memberikan panjang string
s.startswith("Hello") and s.endswith("World") # mengecek awal dan akhir
print("There are %d 'l's but only %d World in %s" % (s.count('l'), s.count('World'), s)) # menghitung huruf di sebuah string
print('"el" is at index', s.find('el'), "in", s) # mencari index potongan kata "el" di kalimat "Hello World"
s.find('ab') # mencari index potongan kata "ab" di kalimat "Hello World". Apabila tidak ditemukan, maka fungsi akan mengembalikan -1
```
## String comparison operators
Strings can be compared with one another in lexicographical/alphabetical order.
```
'abc' < 'bbc' <= 'bbc'
'abc' > 'def'
```
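Note that in a notebook only the value of the last expression in a cell is displayed; wrapping each comparison in `print()` shows both results, and comparisons can be chained:

```python
print('abc' < 'bbc' <= 'bbc')  # True: 'abc' < 'bbc' and 'bbc' <= 'bbc' both hold
print('abc' > 'def')           # False: 'a' comes before 'd' alphabetically
```
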
We can use `in` to check whether a string is a *substring* of another string.
```
"ABC" in "This is the ABC of Python"
```
## Accessing parts of a string
We can access parts of a string using indices and square brackets. Indexing starts at 0.
```
s = '123456789'
print('The first character of', s, 'is', s[0])
print('The last character of', s, 'is', s[len(s)-1])
```
Negative indices can be used to count from the end.
```
print('The first character of', s, 'is', s[-len(s)])
print('The last character of', s, 'is', s[-1])
```
A substring can be obtained by using `a:b` to denote the characters from index `a` up to index `b-1`. Note that the last character (index `b`) is not included.
```
print("First three charcters", s[0:3])
print("Next three characters", s[3:6])
```
An empty start index denotes the beginning of the string (the same as index 0), while an empty end index denotes the end of the string.
```
print("First three characters", s[:3])
print("Last three characters", s[-3:])
```
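A slice can also take a third number, the step (a small addition beyond the examples above):

```python
s = '123456789'
print(s[::2])    # every second character: 13579
print(s[::-1])   # the reversed string: 987654321
print(s[1:7:3])  # from index 1 up to 6, in steps of 3: 25
```
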
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 5. Interactive Pipelines</h2></div>
The plots built up over the first few tutorials were all highly interactive in the web browser, with interactivity provided by Bokeh plotting tools within the plots or in some cases by HoloViews generating a Bokeh widget to select for a `groupby` over a categorical variable. However, when you are exploring a dataset, you might want to see how _any_ aspect of the data or plot changes if varied interactively. Luckily, hvPlot makes it almost trivially easy to do this, so that you can very easily explore any parameter or setting in your code.
## Panel widgets
To do this, we will need a widget library, and here we will be using [Panel](https://panel.holoviz.org/) to generate Bokeh widgets under user control, just as hvPlot uses Panel to generate widgets for a `groupby` as shown previously. Let's first get ahold of a Panel widget to see how they work. Here, let's create a Panel floating-point number slider to specify an earthquake magnitude between zero and nine:
```
import panel as pn
pn.extension(sizing_mode='stretch_width')
mag_slider = pn.widgets.FloatSlider(name='Minimum Magnitude', start=0, end=9, value=6)
mag_slider
```
The widget is a JavaScript object, but there are bidirectional connections between JS and Python that let us see and change the value of this slider using its `value` parameter:
```
mag_slider.value
mag_slider.value = 7
```
#### Exercise
Try moving the slider around and rerunning the `mag_slider.value` above to access the current slider value. As you can see, you can easily get the value of any widget to use in subsequent cells, but you'd need to re-run any cell that accesses that value for it to get updated.
# hvPlot .interactive()
hvPlot provides an easy way to connect widgets directly into an expression you want to control.
First, let's read in our data:
```
import numpy as np
import pandas as pd
import holoviews as hv
import hvplot.pandas # noqa
%%time
df = pd.read_parquet('../data/earthquakes-projected.parq')
df = df.set_index('time').tz_localize(None)
```
Now, let's do a little filtering that we might want to control with such a widget, such as selecting the highest-magnitude events:
```
from holoviews.element.tiles import WEB_MERCATOR_LIMITS
df2 = df[['mag', 'depth', 'latitude', 'longitude', 'place', 'type']][df['northing'] < WEB_MERCATOR_LIMITS[1]]
df2[df2['mag'] > 5].head()
```
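For contrast, without widgets this filter is just an ordinary function of a threshold that you would re-run by hand after every change; `.interactive()` automates exactly that replay. A minimal plain-pandas sketch on made-up data (the column name `mag` mirrors the earthquake table, but the values here are invented):

```python
import pandas as pd

toy = pd.DataFrame({'mag': [4.2, 5.1, 6.3, 7.0],
                    'place': ['a', 'b', 'c', 'd']})

def filter_by_mag(df, min_mag):
    # the manual analogue of dfi[dfi['mag'] > mag_slider]
    return df[df['mag'] > min_mag]

print(filter_by_mag(toy, 5.0))       # keeps the rows with mag 5.1, 6.3, 7.0
print(len(filter_by_mag(toy, 6.0)))  # 2
```
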
What if instead of '5', we want the output above always to reflect the current value of `mag_slider`? We can do that by using hvPlot's `.interactive()` support, passing in a widget almost anywhere we want in a pipeline:
```
dfi = df2.interactive()
dfi[dfi['mag'] > mag_slider].head()
```
Here, `.interactive` is a wrapper around your DataFrame or Xarray object that lets you provide Panel widgets almost anywhere you'd otherwise be using a number. Just as importing `hvplot.pandas` provides a `.hvplot()` method or object on your dataframe, it also provides a `.interactive` method or object that gives you a general-purpose *interactive* `Dataframe` driven by widgets. `.interactive` stores a copy of your pipeline (series of method calls or other expressions on your data) and dynamically replays the pipeline whenever that widget changes.
`.interactive` supports just about any output you might want to get out of such a pipeline, such as text or numbers:
```
dfi[dfi['mag'] > mag_slider].shape
```
Or Matplotlib plots:
```
dfi[dfi['mag'] > mag_slider].plot(y='depth', kind='hist', bins=np.linspace(0, 50, 51))
```
Each time you drag the widget, hvPlot replays the pipeline and updates the output shown.
Of course, `.interactive` also supports `.hvplot()`, here with a new copy of a widget so that it will be independent of the other cells above:
```
mag_slider2 = pn.widgets.FloatSlider(name='Minimum magnitude', start=0, end=9, value=6)
dfi[dfi['mag'] > mag_slider2].hvplot(y='depth', kind='hist', bins=np.linspace(0, 50, 51))
```
You can see that the depth distribution varies dramatically as you vary the minimum magnitude, with the lowest magnitude events apparently only detectable at short depths. There also seems to be some artifact at depth 10, which is the largest bin regardless of the filtering for all but the largest magnitudes.
## Date widgets
A `.interactive()` pipeline can contain any number of widgets, including any from the Panel [reference gallery](https://panel.holoviz.org/reference/index.html#widgets). For instance, let's make a widget to specify a date range covering the dates found in this data:
```
date = pn.widgets.DateRangeSlider(name='Date', start=df.index[0], end=df.index[-1])
date
```
Now we can access the value of this slider:
```
date.value
```
As this widget is specifying a range, this time the value is returned as a tuple. If you prefer, you can get the components of the tuple directly via the `value_start` and `value_end` parameters respectively:
```
f'Start is at {date.value_start} and the end is at {date.value_end}'
```
Once again, try specifying different ranges with the widgets and rerunning the cell above.
Now let's use this widget to expand our expression to filter by date as well as magnitude:
```
mag = pn.widgets.FloatSlider(name='Minimum magnitude', start=0, end=9, value=6)
filtered = dfi[
(dfi['mag'] > mag) &
(dfi.index >= date.param.value_start) &
(dfi.index <= date.param.value_end)]
filtered.head()
```
You can now use either the magnitude or the date range (or both) to filter the data, and the output will update. Note that here you want to move the start date of the range slider rather than the end; otherwise, you may not see the table change because the earthquakes are displayed in date order.
#### Exercise
To specify the minimum earthquake magnitude, notice that we supplied the whole `mag` widget but `.interactive()` used only the `value` parameter of this widget by default. To be explicit, you may use `mag.param.value` instead if you wish. Try it!
#### Exercise
For readability, seven columns were chosen before displaying the `DataFrame`. Have a look at `df.columns` and pick a different set of columns for display.
## .interactive() and HoloViews
`.interactive()` lets you work naturally with the compositional HoloViews plots provided by `.hvplot()`. Here, let's combine such plots using the HoloViews `+` operator:
```
mag_hist = filtered.hvplot(y='mag', kind='hist', responsive=True, min_height=200)
depth_hist = filtered.hvplot(y='depth', kind='hist', responsive=True, min_height=200)
mag_hist + depth_hist
```
These are the same two histograms we saw earlier, but now we can filter them on data dimensions like `time` that aren't even explicitly shown in the plot, using the Panel widgets.
## Filtering earthquakes on a map
To display the earthquakes on a map, we will first create a subset of the data so it is quick to update without needing Datashader:
```
subset_df = df[
(df.northing < WEB_MERCATOR_LIMITS[1]) &
(df.mag > 4) &
(df.index >= pd.Timestamp('2017-01-01')) &
(df.index <= pd.Timestamp('2018-01-01'))]
```
Now we can make a new interactive `DataFrame` from this new subselection:
```
subset_dfi = subset_df.interactive(sizing_mode='stretch_width')
```
And now we can declare our widgets and use them to filter the interactive `DataFrame` as before:
```
date_subrange = pn.widgets.DateRangeSlider(
name='Date', start=subset_df.index[0], end=subset_df.index[-1])
mag_subrange = pn.widgets.FloatSlider(name='Magnitude', start=3, end=9, value=3)
filtered_subrange = subset_dfi[
(subset_dfi.mag > mag_subrange) &
(subset_dfi.index >= date_subrange.param.value_start) &
(subset_dfi.index <= date_subrange.param.value_end)]
```
Now we can plot the earthquakes on an ESRI tilesource, including the filtering widgets as follows:
```
geo = filtered_subrange.hvplot(
'easting', 'northing', color='mag', kind='points',
xaxis=None, yaxis=None, responsive=True, min_height=500, tiles='ESRI')
geo
```
You'll likely notice some flickering as Panel updates the display when the widgets change in value. The flickering comes because the entire plot gets recreated each time the widget is dragged. You can get finer control over such updates, but doing so requires more advanced methods covered in later tutorials, so here, we will just accept that the plot flickers.
## Terminating methods for `.interactive`
The examples above all illustrate cases where you can display the output of `.interactive()` and not worry about its type, which is no longer a DataFrame or a HoloViews object, but an `Interactive` object:
```
type(geo)
```
What if you need to work with some part of the interactive pipeline, e.g. to feed it to some function or object that does not understand `Interactive` objects? In such a case, you can use what is called a `terminating method` on your Interactive object to get at the underlying object for you to use.
For instance, let's create magnitude and depth histograms on this subset of the data as in an earlier notebook and see if we can enable linked selections on them:
```
mag_subhist = filtered_subrange.hvplot(y='mag', kind='hist', responsive=True, min_height=200)
depth_subhist = filtered_subrange.hvplot(y='depth', kind='hist', responsive=True, min_height=200)
combined = mag_subhist + depth_subhist
combined
```
Note that this looks like a HoloViews layout with some widgets, but this object is *not* a HoloViews object. Instead it is still an `Interactive` object:
```
type(combined)
```
`link_selections` does not currently understand `Interactive` objects, and so it will raise an exception when given one. If we need a HoloViews `Layout`, e.g. for calling `link_selections`, we can build a layout from the constituent objects using the `.holoviews()` terminating method on `Interactive`:
```
layout = mag_subhist.holoviews() + depth_subhist.holoviews()
layout
```
This is now a HoloViews object, so we can use it with `link_selections`:
```
print(type(layout))
ls = hv.link_selections.instance()
ls(mag_subhist.holoviews()) + ls(depth_subhist.holoviews())
```
You can use the box selection tool to see how selections compare between these plots. However, you will note that the widgets are no longer displayed. To address this, we can display the widgets separately using a different terminating method, namely `.widgets()`:
```
filtered_subrange.widgets()
```
For reference, the terminating methods for an `Interactive` object are:
- `.holoviews()`: Give me a HoloViews object
- `.panel()`: Give me a Panel ParamFunction
- `.widgets()`: Give me a layout of widgets associated with this interactive object
- `.layout()`: Give me the layout of the widgets and display `pn.Column(obj.widgets(), obj.panel())` where `pn.Column` will be described in the [Dashboards notebook](./06_Dashboards.ipynb).
## Conclusion
Using the techniques above, you can build up a collection of plots and other outputs, with Panel widgets controlling individual bits of computation and display.
What if you want to collect these pieces and put them together into a coherent app or dashboard? If so, then the next tutorial will show you how to do so!
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = np.linspace(0, 5, 11)
y = x ** 2
x
y
#Functional
plt.plot(x,y,"r")
plt.xlabel("X Axis")
plt.ylabel("Y Axis")
plt.title("Title")
plt.show()
plt.subplot(1,2,1)
plt.plot(x,y,"r-")
plt.subplot(1,2,2)
plt.plot(y,x,"g*-")
# OOP Method
fig = plt.figure()
axes = fig.add_axes([.1,.1,.5,.5])
axes.plot(x,y)
fig = plt.figure()
axes = fig.add_axes([.1,.1,1,1])
axes.plot(x,y)
fig = plt.figure()
axes = fig.add_axes([1,1,1,1])
axes.plot(x,y)
axes.set_xlabel("X Axis")
axes.set_ylabel("Y Axis")
axes.set_title("The Title")
fig = plt.figure()
axes1 = fig.add_axes([.1,.1,1,1])
axes2 = fig.add_axes([.18,.53,.5,.5])
axes1.plot(x,y,"m")
axes2.plot(y,x,"r")
axes1.set_xlabel("X1 Axis")
axes1.set_ylabel("Y1 Axis")
axes1.set_title("Title1")
axes2.set_xlabel("X2 Axis")
axes2.set_ylabel("Y2 Axis")
axes2.set_title("Title2")
fig,axes = plt.subplots()
axes.plot(x,y,"r")
axes.set_xlabel("X Axis")
axes.set_ylabel("Y Axis")
axes.set_title("The Title")
fig,axes = plt.subplots(1,2)
axes[0].plot(x,y,"y")
axes[1].plot(y,x,"b")
for i in axes:
i.set_xlabel("X")
i.set_ylabel("Y")
i.set_title("Title")
plt.tight_layout()
axes
fig = plt.figure(figsize=(5,3),dpi=100)
axes = fig.add_axes([1,1,1,1])
axes.plot(x,y,"r")
fig,axes = plt.subplots(nrows=2,ncols=1,figsize = (3,3))
axes[0].plot(y,x)
axes[1].plot(x,y)
plt.tight_layout()
fig.savefig("filename.png")
fig.savefig("filename.png",dpi = 140)
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(5,5))
axes[0].plot(x,y,"r",label="Y=X**2")
axes[1].plot(y,x,"y",label="Y=X**(0.5)")
axes[0].legend()
axes[1].legend()
for i in axes:
i.set_xlabel("X")
i.set_ylabel("Y")
i.set_title("Title")
plt.tight_layout()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x, x**2, label="x**2")
ax.plot(x, x**3, label="x**3")
ax.legend(loc=0)
fig, ax = plt.subplots()
ax.plot(x, x**2, 'b.-') # blue line with dots
ax.plot(x, x**3, 'g--')
fig,ax = plt.subplots()
ax.plot(x,x+1,"black",ls="-.",alpha=0.5)
ax.plot(x,x+2,"b--")
fig, ax = plt.subplots()
ax.plot(x, x+1, color="blue", alpha=0.5)
ax.plot(x, x+2, color="#8B008B")
ax.plot(x, x+3, color="#FF8C00")
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x, x+1, color="red", linewidth=0.25)
ax.plot(x, x+2, color="red", linewidth=0.50)
ax.plot(x, x+3, color="red", linewidth=1.00)
ax.plot(x, x+4, color="red", linewidth=2.00)
ax.plot(x, x+5, color="green", lw=3, linestyle='-')
ax.plot(x, x+6, color="green", lw=3, ls='-.')
ax.plot(x, x+7, color="green", lw=3, ls=':')
line, = ax.plot(x, x+8, color="black", lw=1.50)
line.set_dashes([5, 10, 15, 10])
ax.plot(x, x+ 9, color="blue", lw=3, ls='-', marker='+')
ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o')
ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s')
ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1')
ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2)
ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4)
ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green");
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
plt.scatter(x,y)
from random import sample
data = sample(range(1, 1000), 100)
plt.hist(data)
data = [np.random.normal(0, std, 100) for std in range(1, 4)]
plt.boxplot(data,vert=True,patch_artist=True);
```
# Cell Basic Filtering
## Content
The purpose of this step is to get rid of cells having **obvious** issues, including cells with a low mapping rate (potential contamination), low final reads (an empty well, or a large amount of DNA lost during library preparation), or abnormal methylation fractions (failed bisulfite conversion or contamination).
We have two principles when applying these filters:
1. **We set the cutoff based on the distribution of the whole dataset**, where we assume the input dataset is largely successful (typically 80-90% of cells will pass QC). The cutoffs below are typical values we used in brain methylome analysis. Still, you may need to adjust the cutoffs based on different data quality or sample sources.
2. **The cutoff is intended to be loose.** We do not use stringent cutoffs here, to prevent potential data loss. Abnormal cells may remain after basic filtering, and will likely be identified in the analysis-based filtering (see later notebooks about doublet scores and outliers in clustering).
## Input
- Cell metadata table that contains mapping metrics for basic QC filtering.
## Output
- Filtered cell metadata table that contains only cells passed QC.
## About Cell Mapping Metrics
We usually gather many mapping metrics from each processing step, but not all of the metrics are relevant to the cell filtering. Below are the most relevant metrics that we use to filter cells. The name of these metrics might be different in your dataset. Change it according to the file you have.
If you use [YAP](https://hq-1.gitbook.io/mc) to do mapping, you can find up-to-date mapping metrics documentation for [key metrics](https://hq-1.gitbook.io/mc/mapping-metrics/key-mapping-metrics) and [all metrics](https://hq-1.gitbook.io/mc/mapping-metrics/all-mapping-metrics) in YAP doc.
## Import
```
import pandas as pd
import seaborn as sns
sns.set_context(context='notebook', font_scale=1.3)
```
## Parameters
```
# change this to the path to your metadata
metadata_path = '../../../data/Brain/snmC-seq2/HIP.CellMetadata.csv.gz'
# Basic filtering parameters
mapping_rate_cutoff = 0.5
mapping_rate_col_name = 'MappingRate' # Name may change
final_reads_cutoff = 500000
final_reads_col_name = 'FinalReads' # Name may change
mccc_cutoff = 0.03
mccc_col_name = 'mCCCFrac' # Name may change
mch_cutoff = 0.2
mch_col_name = 'mCHFrac' # Name may change
mcg_cutoff = 0.5
mcg_col_name = 'mCGFrac' # Name may change
```
## Load metadata
```
metadata = pd.read_csv(metadata_path, index_col=0)
total_cells = metadata.shape[0]
print(f'Metadata of {total_cells} cells')
metadata.head()
```
## Filter by key mapping metrics
### Bismark Mapping Rate
- Low mapping rate indicates potential contamination.
- Usually R1 mapping rate is 8-10% higher than R2 mapping rate for snmC based technologies, but they should be highly correlated. Here I am using the combined mapping rate. If you are using the R1MappingRate or R2MappingRate, change the cutoff accordingly.
- Usually there is a peak on the left, which corresponds to the empty wells.
```
_cutoff = mapping_rate_cutoff
_col_name = mapping_rate_col_name
# plot distribution to make sure cutoff is appropriate
g = sns.displot(metadata[_col_name], binrange=(0, 1))
g.ax.plot((_cutoff, _cutoff), g.ax.get_ylim(), c='r', linestyle='--')
mapping_rate_judge = metadata[_col_name] > _cutoff
_passed_cells = mapping_rate_judge.sum()
print(
f'{_passed_cells} / {total_cells} cells ({_passed_cells / total_cells * 100:.1f}%) '
f'passed the {_col_name} cutoff {_cutoff}.')
```
### Final Reads
- The cutoff may change depending on how deep the library has been sequenced.
- Usually there is a peak on the left, which corresponds to the empty wells.
- There are also some cells with a small number of reads; these wells may have lost most of their DNA during library preparation. Cells with too few reads can be hard to classify, since methylome sequencing is untargeted whole-genome sequencing.
```
_cutoff = final_reads_cutoff
_col_name = final_reads_col_name
# plot distribution to make sure cutoff is appropriate
g = sns.displot(metadata[_col_name], binrange=(0, 5e6))
g.ax.plot((_cutoff, _cutoff), g.ax.get_ylim(), c='r', linestyle='--')
final_reads_judge = metadata[_col_name] > _cutoff
_passed_cells = final_reads_judge.sum()
print(
f'{_passed_cells} / {total_cells} cells ({_passed_cells / total_cells * 100:.1f}%) '
f'passed the {_col_name} cutoff {_cutoff}.')
```
### mCCC / CCC
- The mCCC fraction is used as the proxy of the upper bound of the non-conversion rate for cell-level QC. The methylation level at CCC sites is the lowest among all of the different 3 base-contexts (CNN), and, in fact, it is very close to the unmethylated lambda mC fraction.
- However, mCCC fraction is correlated with mCH (especially in brain data), so you can see a similar shape of distribution of mCCC and mCH, but the range is different.
```
_cutoff = mccc_cutoff
_col_name = mccc_col_name
# plot distribution to make sure cutoff is appropriate
g = sns.displot(metadata[_col_name], binrange=(0, 0.05))
g.ax.plot((_cutoff, _cutoff), g.ax.get_ylim(), c='r', linestyle='--')
mccc_judge = metadata[_col_name] < _cutoff
_passed_cells = mccc_judge.sum()
print(
f'{_passed_cells} / {total_cells} cells ({_passed_cells / total_cells * 100:.1f}%) '
f'passed the {_col_name} cutoff {_cutoff}.')
```
### mCH / CH
- Failed cells (empty wells or contaminated ones) usually tend to have abnormal methylation levels as well.
```
_cutoff = mch_cutoff
_col_name = mch_col_name
# plot distribution to make sure cutoff is appropriate
g = sns.displot(metadata[_col_name], binrange=(0, 0.3))
g.ax.plot((_cutoff, _cutoff), g.ax.get_ylim(), c='r', linestyle='--')
mch_judge = metadata[_col_name] < _cutoff
_passed_cells = mch_judge.sum()
print(
f'{_passed_cells} / {total_cells} cells ({_passed_cells / total_cells * 100:.1f}%) '
f'passed the {_col_name} cutoff {_cutoff}.')
```
### mCG
- Failed cells (empty wells or contaminated ones) usually tend to have abnormal methylation levels as well.
```
_cutoff = mcg_cutoff
_col_name = mcg_col_name
# plot distribution to make sure cutoff is appropriate
g = sns.displot(metadata[_col_name], binrange=(0.3, 1))
g.ax.plot((_cutoff, _cutoff), g.ax.get_ylim(), c='r', linestyle='--')
mcg_judge = metadata[_col_name] > _cutoff
_passed_cells = mcg_judge.sum()
print(
f'{_passed_cells} / {total_cells} cells ({_passed_cells / total_cells * 100:.1f}%) '
f'passed the {_col_name} cutoff {_cutoff}.')
```
## Combine filters
```
judge = mapping_rate_judge & final_reads_judge & mccc_judge & mch_judge & mcg_judge
passed_cells = judge.sum()
print(
f'{passed_cells} / {total_cells} cells ({passed_cells / total_cells * 100:.1f}%) '
f'passed all the filters.')
```
## Sanity Test
```
try:
assert (passed_cells / total_cells) > 0.6
except AssertionError as e:
e.args += (
'A large amount of the cells do not pass filter, check your cutoffs or overall dataset quality.',
)
raise e
try:
assert passed_cells > 0
except AssertionError as e:
e.args += ('No cell remained after all the filters.', )
raise e
print('Feel good')
```
## Save filtered metadata
```
metadata_filtered = metadata[judge].copy()
metadata_filtered.to_csv('CellMetadata.PassQC.csv.gz')
metadata_filtered.head()
```
```
from google.colab import drive
drive.mount('/content/drive')
```
```
cd /content/drive/My\ Drive/Colab\ Notebooks/Transformer
```
# Load libraries
```
!apt install aptitude
!aptitude install mecab libmecab-dev mecab-ipadic-utf8 git make curl xz-utils file -y
!pip install mecab-python3==0.6
import numpy as np
import os
import time
import MeCab
import preprocess_utils
import model
import weight_utils
import tensorflow.keras as keras
import tensorflow as tf
print(tf.__version__)
```
# Download Japanese-English translation data
```
# !wget http://www.manythings.org/anki/jpn-eng.zip
# !unzip ./jpn-eng.zip
```
# Load data
```
dataset = preprocess_utils.CreateData(
corpus_path = './DATA/kesenngo.tsv',
do_shuffle=True,
seed_value=123,
    split_percent=0.95 # fraction of data used for training
)
train_source, train_target, test_source, test_target, train_licence, test_licence = dataset.split_data()
print('**** Amount of data ****')
print('train_source: ', len(train_source))
print('train_target: ', len(train_target))
print('test_source: ', len(test_source))
print('test_target: ', len(test_target))
print('\n')
print('**** Train data example ****')
print('Source Example: ', train_source[0])
print('Target Example: ', train_target[0])
print('Licence: ', train_licence[0])
print('\n')
print('**** Test data example ****')
print('Source Example: ', test_source[0])
print('Target Example: ', test_target[0])
print('Licence: ', test_licence[0])
import pandas as pd
import re
import codecs
import copy
corpus_path = './DATA/Kesennuma.csv'
df = pd.read_csv(corpus_path)
print('**** Amount of data ****')
print(df)
print('\n')
print('**** Amount of data ****')
#for index, row in df.iterrows():
#print(row['項目名'])
```
# Preprocessing
```
BATCH_SIZE = 64 # batch size
MAX_LENGTH = 60 # sequence length
USE_TPU = False # whether to use a TPU
BUFFER_SIZE = 50000
train_dataset = preprocess_utils.PreprocessData(
mecab = MeCab.Tagger("-Ochasen"),
source_data = train_source,
target_data = train_target,
max_length = MAX_LENGTH,
batch_size = BATCH_SIZE,
test_flag = False,
train_dataset = None,
)
train_dataset.preprocess_data()
if USE_TPU:
tpu_grpc_url = "grpc://" + os.environ["COLAB_TPU_ADDR"]
tpu_cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_grpc_url)
tf.config.experimental_connect_to_cluster(tpu_cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(tpu_cluster_resolver)
strategy = tf.distribute.experimental.TPUStrategy(tpu_cluster_resolver)
trainset = tf.data.Dataset.from_tensor_slices((train_dataset.source_vector, train_dataset.target_vector))
trainset = trainset.map(lambda source, target: (tf.cast(source, tf.int64), tf.cast(target, tf.int64))).shuffle(buffer_size=BUFFER_SIZE).batch(BATCH_SIZE).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
if USE_TPU:
    trainset = strategy.experimental_distribute_dataset(trainset)
```
# Model definition
```
num_layers = 4       # number of layers
d_model = 64         # model (hidden) dimension
num_heads = 4        # number of heads in Multi-Head Attention
dff = 2048           # dimension of the Feed Forward Network
dropout_rate = 0.1   # dropout rate
source_vocab_size = max(train_dataset.source_token.values()) + 1  # source vocabulary size
target_vocab_size = max(train_dataset.target_token.values()) + 1  # target vocabulary size
# Weight initialization
def initialize_weight(checkpoint_path, optimizer, transformer, max_length, batch_size, use_tpu=False):
    if os.path.exists(checkpoint_path + '.pkl'):
        if use_tpu:
            number_of_tpu_cores = tpu_cluster_resolver.num_accelerators()['TPU']
            initialize_source, initialize_target = [[1]*max_length]*number_of_tpu_cores, [[1]*max_length]*number_of_tpu_cores
            initialize_set = tf.data.Dataset.from_tensor_slices((initialize_source, initialize_target))
            initialize_set = initialize_set.map(
                lambda source, target: (tf.cast(source, tf.int64), tf.cast(target, tf.int64))
            ).shuffle(buffer_size=BUFFER_SIZE).batch(batch_size).prefetch(
                buffer_size=tf.data.experimental.AUTOTUNE
            )
            initialize_set = strategy.experimental_distribute_dataset(initialize_set)
            for inp, tar in initialize_set:
                distributed_train_step(inp, tar)
        else:
            initialize_set = tf.ones([batch_size, max_length], tf.int64)
            train_step(initialize_set, initialize_set)
        try:
            weight_utils.load_weights_from_pickle(checkpoint_path, optimizer, transformer)
        except Exception:  # avoid a bare except so KeyboardInterrupt still propagates
            print('Failed to load checkpoints.')
    else:
        print('No available checkpoints.')
```
# Run training
checkpoints/gpu/model -> /checkpoints_EX/gpu/model
```
# Transformer
transformer = model.Transformer(num_layers, d_model, num_heads, dff,
                                source_vocab_size, target_vocab_size,
                                pe_input=source_vocab_size,
                                pe_target=target_vocab_size,
                                rate=dropout_rate)
# Learning Rate
learning_rate = model.CustomSchedule(d_model)
# Optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)
# Loss
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
# Loss Function
def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    loss_ = loss_object(real, pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask
    return tf.reduce_mean(loss_)
# Metrics
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
# Checkpoint
checkpoint_path = "/content/drive/My Drive/Colab Notebooks/Transformer/checkpoints_EX/gpu/model"
train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
    tar_inp = tar[:, :-1]
    tar_real = tar[:, 1:]
    enc_padding_mask, combined_mask, dec_padding_mask = model.create_masks(inp, tar_inp)
    with tf.GradientTape() as tape:
        predictions, _ = transformer(inp, tar_inp,
                                     True,
                                     enc_padding_mask,
                                     combined_mask,
                                     dec_padding_mask)
        loss = loss_function(tar_real, predictions)
    gradients = tape.gradient(loss, transformer.trainable_variables)
    optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
    train_loss(loss)
    train_accuracy(tar_real, predictions)
# Initialize Weight
initialize_weight(checkpoint_path, optimizer, transformer, MAX_LENGTH, BATCH_SIZE, use_tpu=USE_TPU)
EPOCHS = 30
batch = 0
for epoch in range(EPOCHS):
    start = time.time()
    train_loss.reset_states()
    train_accuracy.reset_states()
    for inp, tar in trainset:
        train_step(inp, tar)
        if batch % 50 == 0:
            print('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
                epoch + 1, batch, train_loss.result(), train_accuracy.result()))
        batch += 1
    if (epoch + 1) % 5 == 0:
        print('Saving checkpoint for epoch {} at {}'.format(epoch + 1, checkpoint_path))
        weight_utils.save_weights_as_pickle(checkpoint_path, optimizer, transformer)
    print('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
                                                        train_loss.result(),
                                                        train_accuracy.result()))
    print('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
```
```
%matplotlib inline
import numpy as np
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import random
import statsmodels.api as sm
sns.set(style="whitegrid")
```
# Example
We're now in a position to return to our housing data for King County, Washington to make some of these more abstract concepts concrete.
```
data = pd.read_csv("../resources/data/kc_house_data.csv")
```
## Waterfront
Let's look at a few variables from that data set. The first one we'll look at is `waterfront`. If you remember, waterfront indicates whether or not the property is on the water. There are two possible outcomes and so this makes the Bernoulli distribution a good model for this data.
```
data["waterfront"].value_counts().sort_index()
```
To set the context, remember that, overall, we have a "home sales process" that we're interested in. The property having a waterfront is a binary feature of each home sale and we're going to model it using a Bernoulli distribution.
The single parameter of the Bernoulli distribution is $p$, the probability of "success". In this case, "success" means "is waterfront". We can estimate $p$ using the Method of Moments:
```
p = np.mean(data["waterfront"])
print("p = ", p)
```
So there's a 0.7% chance that the next house sold is a waterfront property. Note that we can't say anything about, say, the next house to come up for sale, because the data only covers houses sold, not houses offered for sale (people may take their houses off the market if they don't sell, or they may decide not to sell at all).
Also notice how we went from a descriptive statistic (0.7% of the homes sold are waterfront properties) to a model (there is a 0.7% probability that the next home sold will be a waterfront property).
Because we have only limited data from the data generating process, this estimate actually has some uncertainty associated with it. Is it really 0.7% or is it 0.6% or 1.0%? We will address this issue in the next chapter. For now, we're going to take our models at face value.
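To get a rough feel for that uncertainty without waiting for the next chapter, we can sketch the usual standard error of a Bernoulli proportion. The sample size below is an assumption for illustration, not a number taken from the data:

```
import numpy as np

# Standard error of a Bernoulli proportion: sqrt(p * (1 - p) / n)
n = 21613   # assumed number of sales, for illustration only
p = 0.007
se = np.sqrt(p * (1 - p) / n)
low, high = p - 2 * se, p + 2 * se
print("p = {:.4f}, rough 95% band: {:.4f} to {:.4f}".format(p, low, high))
```

Even with this crude sketch, the plausible range stays well under one percent, so "about 0.7%" is a reasonable working answer.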
By turning to modeling, we've opened up a lot more interesting questions we can ask though. How does `waterfront` affect `view`, $P(view|waterfront)$?
```
frequencies = pd.crosstab( data["waterfront"], data["view"]).apply(lambda r: r/r.sum(), axis=1)
print(frequencies)
sns.heatmap( frequencies)
```
This gives us two *multinomial* distributions, $P(view|waterfront=0)$ and $P(view|waterfront=1)$. Our parameters are $p_0$, $p_1$, $p_2$, $p_3$, and $p_4$ for each value of waterfront. Because the parameters must sum to one, we don't need to estimate the $p_4$'s directly.
If $waterfront=0$, then there's a 90.9% probability you have a "0" view. However, if $waterfront=1$, then there's a 0.0% probability you have a "0" view and an 82.8% probability you have the best view, "4".
So we're pretty much guaranteed a good view if the property is waterfront. What about the reverse?
```
frequencies = pd.crosstab( data["view"], data["waterfront"]).apply(lambda r: r/r.sum(), axis=1)
print(frequencies)
sns.heatmap( frequencies)
```
Here we have 5 Bernoulli distributions. Given a value for view, we have some probability of being waterfront. If the view is "4", then the probability of waterfront is 42%. Isn't that interesting? While having waterfront nearly guarantees the best view, having the best view comes nowhere near guaranteeing waterfront property; it's close to a toss-up.
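As a sanity check, the rounded figures quoted above hang together under Bayes' rule. The $P(view=4|waterfront=0)$ value below is read approximately off the first crosstab, so treat it as an assumption:

```
# Bayes' rule with the rounded figures quoted above (illustrative only)
p_wf = 0.007                # P(waterfront = 1)
p_v4_given_wf = 0.828       # P(view = 4 | waterfront = 1)
p_v4_given_not_wf = 0.008   # P(view = 4 | waterfront = 0), approximate

# Total probability of the best view, then the posterior
p_v4 = p_v4_given_wf * p_wf + p_v4_given_not_wf * (1 - p_wf)
p_wf_given_v4 = p_v4_given_wf * p_wf / p_v4
print("P(waterfront | view = 4) = {:.2f}".format(p_wf_given_v4))
```

which lands right around the 42% read off the second table.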
## Bedrooms
Now let's look at bedrooms in a house. In the EDA chapter, we treated this variable as both a categorical and a numerical variable. However, it seems like "bedrooms per house" makes the Poisson distribution an obvious choice. Using the Poisson distribution will also be good because it lets us estimate the probabilities of room counts we haven't seen. Let's start with that.
Let's get our basic descriptive statistics:
```
data["bedrooms"].describe()
```
According to the previous section, the Method of Moments estimator for the $\lambda$ parameter of the Poisson distribution is $m_1$.
```
from scipy.stats import poisson
proportions = data["bedrooms"].value_counts(normalize=True).sort_index()
xs = range( len( proportions))
width = 1/1.5
lamb = np.mean(data["bedrooms"]) # m1
ys = [poisson.pmf( x, lamb, 0) for x in xs]
figure = plt.figure(figsize=(10, 6))
axes = figure.add_subplot(1, 1, 1)
axes.bar(xs, proportions, width, color="dimgray", align="center")
axes.set_xlabel("Bedrooms")
axes.set_xticks(xs)
axes.set_xticklabels(proportions.axes[0])
axes.set_title( "Relative Frequency of bedrooms")
axes.set_ylabel( "Percent")
axes.xaxis.grid(False)
axes.plot( xs, ys, color="darkred", marker="o")
plt.show()
plt.close()
```
Our first problem is that pesky 33-room house. Aside from that, the Poisson distribution is not a good model for this data. We can see that it severely underestimates the number of 3 and 4 bedroom houses and overestimates the number of 1, 2, and 5 bedroom houses.
Note that this is our general criterion for picking models: does it work? We never ask "is the data normally distributed?" because data isn't normally distributed. Normal distributions don't exist out there in the real world. The question is always about modeling: is this a good model?
And that's a good time to note that a multinomial model is probably the better choice here:
```
data["bedrooms"].value_counts(normalize=True).sort_index()
```
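One thing the multinomial cannot do, though, is assign a probability to a bedroom count we have never observed; that was the appeal of the Poisson noted earlier. A sketch, with an assumed $\lambda$ standing in for the Method of Moments estimate computed above:

```
from scipy.stats import poisson

# lamb would normally be np.mean(data["bedrooms"]); 3.37 is assumed here
lamb = 3.37
# Probability of, say, a 12-bedroom house, a count the empirical
# distribution may never have seen and would therefore call impossible
p_12 = poisson.pmf(12, lamb)
print("P(12 bedrooms) =", p_12)
```

The answer is tiny, but it is not zero, which is exactly what a model buys you over raw counts.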
It's also worth noting that means and rates such as these may be all you need to answer your question or solve your problem. I have often mentioned that, in the beginning, such models are all that many organizations need to start.
## Living Space Square Footage
Now we turn our attention to a continuous, numerical variable: `sqft_living`:
```
data["sqft_living"].describe()
```
So we have our first model. The mean is 2,079, so we can set our expectation that "on average" (which we now know is a way of saying "to minimize our error") a home that sells next month is likely to have 2,079 square feet of living space. Of course, it won't be exact, but over the course of all those predictions, our error will be minimized.
In many ways, this is the difference between merely describing data (and just reporting the values) and using data to build models and predicting future values.
If an accountant tells you we had $23,000,000 in purchases last month, that's descriptive. If you take that number and use it as an estimate for next month, that's a model.
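The "minimize our error" claim is easy to verify numerically: across a grid of constant predictions, the sum of squared errors bottoms out at the sample mean. This sketch uses made-up data rather than the housing set:

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(2000.0, size=500)   # toy "square footage" sample

# Sum of squared errors for each constant prediction g on a grid
grid = np.linspace(x.min(), x.max(), 1001)
sse = np.array([np.sum((x - g) ** 2) for g in grid])
best = grid[np.argmin(sse)]

print(best, x.mean())   # the minimizer sits at the grid point nearest the mean
```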
We might like to ask more interesting questions than "what is the average square footage of the next house likely to be?" We could ask a question like: what is the probability that a house has over 3,000 square feet? This is one of the ways in which distributional modeling is useful. We've already seen this to some degree. What's the probability of a house with 6 rooms? 1.25%.
But first we need to pick a model.
Even before we look at the histogram, the "square foot" should give us pause for thought. It's unlikely that anything involving *squares* is going to result in a Normal distribution because the Normal distribution is usually associated with additive processes.
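That intuition is easy to simulate: summing many small independent effects produces something roughly Normal, while multiplying them produces something roughly Log-Normal. The factors below are made up for illustration:

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
factors = rng.uniform(0.9, 1.1, size=(10000, 30))

additive = factors.sum(axis=1)         # additive process, roughly Normal
multiplicative = factors.prod(axis=1)  # multiplicative process, roughly Log-Normal

# Skewness: near 0 for the sums, clearly positive for the products
print(stats.skew(additive), stats.skew(multiplicative))
```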
```
figure = plt.figure(figsize=(10, 6))
axes = figure.add_subplot(1, 1, 1)
axes.hist(data[ "sqft_living"], color="DimGray", density=True)
axes.set_xlabel( "sqft")
axes.set_ylabel( "Density")
axes.set_title("Density Histogram of sqft_living; default bins")
plt.show()
plt.close()
```
Definitely not normally distributed. Let's do two things. First, we want smaller bins so we can see the detail in the data. Second, let's plot a Normal distribution on top of the histogram.
```
from scipy.stats import norm
figure = plt.figure(figsize=(10,6))
axes = figure.add_subplot(1, 1, 1)
n, bins, patches = axes.hist(data[ "sqft_living"], color="DimGray", density=True,bins=20, alpha=0.75)
axes.set_xlabel( "sqft")
axes.set_ylabel( "Density")
axes.set_title("Density Histogram of sqft_living with Normal plot")
xs = [(b2 + b1)/2 for b1, b2 in zip(bins, bins[1:])]
mean = np.mean(data["sqft_living"])
std = np.std(data["sqft_living"])
ys = [norm.pdf( k, loc=mean, scale=std) for k in xs]
axes.plot(xs, ys, color="darkred")
plt.show()
plt.close()
```
Notice that we have to estimate the Normal distribution at the middle of the bins instead of the bin edges.
In general, this is probably a poor model. It strongly overestimates high square footage homes and underestimates low square footage homes. For what it's worth, it is often suggested that you look at *cumulative distributions*, not probability distributions.
```
figure = plt.figure(figsize=(20, 8))
mn = np.min(data["sqft_living"])
mx = np.max(data["sqft_living"])
mean = np.mean( data[ "sqft_living"])
std = np.std( data[ "sqft_living"])
axes = figure.add_subplot(1, 2, 1)
values, base = np.histogram(data[ "sqft_living"], bins=11, density=True)
cumulative = np.cumsum(values)
axes.plot(base[:-1], cumulative, color="steelblue")
axes.set_xlim((mn, mx))
sampled_data = [mean + r * std for r in np.random.standard_normal(10000)]
values2, base = np.histogram(sampled_data, bins=base, density=True)
cumulative2 = np.cumsum(values2)
axes.plot( base[:-1], cumulative2, color="firebrick")
axes.set_xlim((np.min( data[ "sqft_living"]), np.max( data[ "sqft_living"])))
axes.set_xlabel( "Empirical v. Theoretical: Normal Distribution")
axes = figure.add_subplot(1, 2, 2)
differences = cumulative2 - cumulative
axes.plot(base[:-1], differences, color='firebrick')
axes.set_xlim((mn, mx))
axes.hlines(0, 0, 14000, linestyles="dotted")
axes.set_xlabel( "Empirical v. Theoretical: Normal Distribution, Difference")
plt.show()
plt.close()
```
First, it's worth noting that in order to get data from the theoretical distribution I resorted to Monte Carlo simulation:
```
sampled_data = [mean + r * std for r in np.random.standard_normal(10000)]
values2, base = np.histogram(sampled_data, bins=base, density=True)
```
This isn't the only way to do this but it shows you that there are alternative approaches.
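One such alternative is to skip the simulation and evaluate the theoretical CDF directly at the same bin edges; the mean and standard deviation below are illustrative stand-ins for the values computed from `sqft_living`:

```
import numpy as np
from scipy.stats import norm

mean, std = 2079.0, 918.0          # illustrative parameter values
edges = np.linspace(0, 14000, 12)  # edges of an 11-bin histogram

# Theoretical cumulative probability at each right-hand bin edge
theoretical = norm.cdf(edges[1:], loc=mean, scale=std)
```

The resulting curve can then be differenced against the empirical cumulative histogram exactly as in the plots above.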
Second, we have taken to heart what we learned in the Visualization chapter and we have also plotted the *difference* between the curves because that's actually what we're interested in. While we can see there are differences between the blue and red lines in the left chart, the differences are obvious in the right chart.
You will often see "PP" or "QQ" plots mentioned. They're not my favorite, but a QQ plot plots the quantiles of the data. For a particular parameterization of a distribution, a certain percent of the data should appear in each quantile. If the reference distribution and empirical distribution are largely the same, they will have the same percentages per quantile.
If you plot the empirical quantiles against the theoretical quantiles (as in a scatter plot), they should appear on or near the $x = y$ line if the reference distribution is a good match for your data.
```
figure = plt.figure(figsize=(6, 6))
axes = figure.add_subplot(1, 1, 1)
stats.probplot(data[ "sqft_living"], dist="norm", plot=axes)
plt.show()
plt.close()
```
The answer is still "no".
Let's see if we can do better. In the discussion about the Normal distribution we noted that we sometimes need to use a Log Normal distribution and based on our EDA from the previous chapter, this is probably where we should have started.
```
data["log_sqft_living"] = data[ "sqft_living"].apply(lambda x: np.log10(x))
figure = plt.figure(figsize=(20, 8))
mn = np.min(data["log_sqft_living"])
mx = np.max(data["log_sqft_living"])
mean = np.mean( data[ "log_sqft_living"])
std = np.std( data[ "log_sqft_living"])
axes = figure.add_subplot(1, 2, 1)
values, base = np.histogram(data[ "log_sqft_living"], bins=11, density=True)
cumulative = np.cumsum(values)
axes.plot(base[:-1], cumulative, color="steelblue")
axes.set_xlim((mn, mx))
sampled_data = [mean + r * std for r in np.random.standard_normal(10000)]
values2, base = np.histogram(sampled_data, bins=base, density=True)
cumulative2 = np.cumsum(values2)
axes.plot( base[:-1], cumulative2, color="firebrick")
axes.set_xlim((np.min( data[ "log_sqft_living"]), np.max( data[ "log_sqft_living"])))
axes.set_xlabel( "Empirical v. Theoretical: Normal Distribution")
axes = figure.add_subplot(1, 2, 2)
differences = cumulative2 - cumulative
axes.plot(base[:-1], differences, color='firebrick')
axes.set_xlim((mn, mx))
axes.hlines(0, mn, mx, linestyles="dotted")
axes.set_xlabel( "Empirical v. Theoretical: Normal Distribution, Difference")
plt.show()
plt.close()
```
Unfortunately, changing to the log scale changed the density scale, so we can't directly compare the size of the error here on the right. On the left, this fit looks nearly perfect. Let's switch back to a PDF (probability density function):
```
figure = plt.figure(figsize=(10,6))
axes = figure.add_subplot(1, 1, 1)
n, bins, patches = axes.hist(data[ "log_sqft_living"], color="DimGray", density=True,bins=20, alpha=0.75)
axes.set_xlabel( "sqft")
axes.set_ylabel( "Density")
axes.set_title("Density Histogram of log_sqft_living with Normal plot")
xs = [(b2 + b1)/2 for b1, b2 in zip(bins, bins[1:])]
mean = np.mean(data["log_sqft_living"])
std = np.std(data["log_sqft_living"])
ys = [norm.pdf( k, loc=mean, scale=std) for k in xs]
axes.plot(xs, ys, color="darkred")
plt.show()
plt.close()
```
So what can we do with this model?
Suppose we want to know the probability that a house sold in the next month has 3,000 or more square feet.
1\. What is the log of 3000?
```
np.log10(3000)
```
2\. We're dealing with probability densities in continuous distributions. There are three basic questions we can ask:
1. What is the probability of x or larger? (Use the survival function, SF.)
2. What is the probability of x or less? (Use the CDF.)
3. What is the probability of x to y? (CDF - CDF.)
Our current question is the first one:
```
stats.norm.sf(np.log10(3000),mean,std)
```
There is a 14.2% probability that a sold home has a square footage of 3,000 or greater.
What about less than 1,200 square feet?
```
stats.norm.cdf(np.log10(1200),mean,std)
```
There is a 13.9% probability that a sold home has square footage of 1,200 or less.
What about between 2,000 and 3,000 square feet?
```
stats.norm.cdf(np.log10(3000),mean,std) - stats.norm.cdf(np.log10(2000),mean,std)
```
There is a 31.1% probability that the square footage of that sold home is between 2,000 and 3,000 square feet.
These kinds of distributional models are incredibly useful. I started to implement such a model at a company I worked at.
We had a workflow environment (like Oozie or Airflow) that ran jobs for us every day. Unfortunately, some of the jobs would get stuck. If you restart a job, it has to start all over again, and that could mean repeating some lengthy computations. The goal was to model the duration of workflows so that you could say, "warn me when the probability of this workflow running this long falls below 10%".
In other words, the monitor could poll a workflow and determine the probability that it would take the "time so far" to complete. Workflows are a good example of Exponentially distributed data (actually, Shifted Exponential). They're likely to take around some amount of time (45 minutes) but less likely to take much more than that amount of time.
If the current running time was, say, 67 minutes and the probability of taking that long was less than 10%, I could be warned that a workflow might be stuck.
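A minimal sketch of that monitor, assuming a shifted Exponential fit with made-up parameters (a 20-minute minimum plus a 20-minute mean excess, so a typical run of about 40 minutes):

```
from scipy import stats

shift = 20.0   # assumed minimum run time in minutes (the "shift")
scale = 20.0   # assumed mean excess over the shift, in minutes

def probability_of_taking_this_long(minutes_so_far):
    """P(a healthy run lasts at least this long) under the fitted model."""
    return stats.expon.sf(minutes_so_far, loc=shift, scale=scale)

elapsed = 67.0
p = probability_of_taking_this_long(elapsed)
if p < 0.10:
    print("warn: run at {:.0f} min, P(this long) = {:.3f}".format(elapsed, p))
```

With these assumed numbers, a 67-minute run has less than a 10% probability under the model, so the monitor would raise a warning.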
One advantage to models is that if you use only the empirical distribution, you may not account for values that haven't been observed. Additionally, your model is only as good as your data.
One of the criticisms of the Fukushima Daiichi reactor in Japan was that it was built to withstand the largest recorded earthquake in Japan. That's using the empirical distribution. However, a model of earthquakes suggested a stronger earthquake was possible and there had been larger earthquakes recorded elsewhere in the world. Both of these suggest that the empirical distribution was a bad modeling choice.
Finally, we reiterate that our model parameters are all based on a single sample. We'll see shortly how to deal with that issue using Bayesian inference.
```
# possible values of i: 0, 2, 4, 6, ..., len(A)
from collections import Counter

class Solution:
    def canReorderDoubled(self, A):
        if not A: return True
        a_freq = Counter(A)
        seen = set()
        for a in A:
            if a in seen: continue
            if a_freq[a] == 0:
                seen.add(a)
                continue
            if a_freq[a * 2] >= a_freq[a] and a * 2 not in seen:
                a_freq[a * 2] -= a_freq[a]
            elif a % 2 == 0 and a_freq[a // 2] >= a_freq[a] and a // 2 not in seen:
                a_freq[a // 2] -= a_freq[a]
            else:
                return False
        return True
from collections import Counter

class Solution:
    def canReorderDoubled(self, A):
        if not A: return True
        a_freq = Counter(A)
        for a in sorted(a_freq.keys(), key=abs):
            if a_freq[a] == 0:
                continue
            if a == 0 and a_freq[0] % 2 == 0:
                a_freq[0] = 0
                continue
            if a_freq[a * 2] > 0:
                min_val = min(a_freq[a * 2], a_freq[a])
                a_freq[a * 2] -= min_val
                a_freq[a] -= min_val
        return all(not v for v in a_freq.values())

solution = Solution()
solution.canReorderDoubled([-6,2,-6,4,-3,8,3,2,-2,6,1,-3,-4,-4,-8,4])
from collections import Counter

class Solution:
    def canReorderDoubled(self, A):
        a_freq = Counter(A)
        for n in sorted(a_freq.keys(), key=abs):
            double = 2 * n
            while a_freq[n] > 0 and a_freq[double] > 0:
                a_freq[n] -= 1
                a_freq[double] -= 1
        return all(not v for v in a_freq.values())

solution = Solution()
solution.canReorderDoubled([-6,2,-6,4,-3,8,3,2,-2,6,1,-3,-4,-4,-8,4])
from collections import Counter
from typing import List

class Solution:
    def canReorderDoubled(self, A: List[int]) -> bool:
        c = Counter(A)
        for n in sorted(c.keys(), key=abs):
            while c[n] > 0 and c[(double := 2 * n)] > 0:
                c[n] -= 1
                c[double] -= 1
        return all(not v for v in c.values())
```
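As a sanity check on the sort-by-absolute-value idea, the greedy pairing can be compared against an exhaustive search on small random arrays. This is a standalone sketch; `greedy` restates the approach above rather than reusing the class:

```
from collections import Counter
import random

def greedy(A):
    # Pair each value with its double, scanning values by absolute magnitude
    c = Counter(A)
    for n in sorted(c, key=abs):
        if n == 0:
            if c[0] % 2:
                return False
            c[0] = 0
        elif c[n] > 0:
            if c[2 * n] < c[n]:
                return False
            c[2 * n] -= c[n]
            c[n] = 0
    return True

def brute(A):
    # Exhaustive search over all (x, 2x) pairings
    if not A:
        return True
    x = A[0]
    for i in range(1, len(A)):
        if A[i] == 2 * x or x == 2 * A[i]:
            if brute(A[1:i] + A[i + 1:]):
                return True
    return False

random.seed(0)
for _ in range(500):
    arr = [random.randint(-4, 4) for _ in range(random.choice([2, 4, 6]))]
    assert greedy(arr) == brute(arr), arr
print("greedy agrees with brute force on 500 random arrays")
```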
# Recommendations with IBM
In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
## Table of Contents
I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
import seaborn as sns
from scipy import stats
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
%matplotlib inline
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('abc')
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
# inspect the first row
df.iloc[0]['title']
# show df_content to get an idea of the data
df_content.head()
print(df_content.iloc[0])
```
### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
```
# group by email
def group_by_title(df, column="email"):
    """Group the user-item interactions dataframe by `column`.

    Args:
        df (DataFrame): a dataframe object
        column (string): column to group by

    Returns:
        DataFrame: interaction counts per value of `column`, sorted in descending order
    """
    df_title_counts = df.groupby([column]).size().reset_index(name='counts')
    df_title_counts = df_title_counts.sort_values(by=['counts'], ascending=False)
    return df_title_counts
user_article_counts = group_by_title(df)
user_article_counts.head()
print("There are {} users and they interacted with articles {} times in total"
      .format(user_article_counts.shape[0], user_article_counts["counts"].sum()))
def histogram(df, column="counts", title="Distribution of user article interactions"):
    """Plot the distribution of user-article interaction counts.

    Args:
        df (DataFrame): a dataframe object
        column (string): column that holds the interaction counts
        title (string): the x-axis label of the distribution chart

    Returns:
        None: displays a matplotlib histogram of the counts
    """
    sns.set(color_codes=True)
    plt.figure(figsize=(15, 8))
    sns.distplot(df[column], kde=False, hist_kws=dict(edgecolor="k", linewidth=2))
    plt.xlabel(title)
    plt.ylabel('Frequency')

histogram(user_article_counts)
# fill in the median and maximum number of user_article interactios below
median_val = user_article_counts["counts"].median() # 50% of individuals interact with 3 number of articles or fewer.
max_views_by_user = user_article_counts["counts"].max() # The maximum number of user-article interactions by any 1 user is 364.
```
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
```
# find and explore duplicate articles
df_content.info()
# get duplicate articles
df_content[df_content.duplicated(subset="article_id")]
# remove any rows that have the same article_id - only keep the first
df_content_clean = df_content.drop_duplicates(subset="article_id")
assert df_content_clean.shape[0] + 5 == df_content.shape[0]
```
`3.` Use the cells below to find:
**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
```
# the number of unique articles that have an interaction with a user.
df.article_id.nunique()
# the number of unique articles in the dataset (whether they have any interactions or not).
df_content_clean.article_id.nunique()
# the number of unique users in the dataset. (excluding null values)
df.email.nunique()
# the number of user-article interactions in the dataset.
df.shape[0]
unique_articles = 714 # The number of unique articles that have at least one interaction
total_articles = 1051 # The number of unique articles on the IBM platform
unique_users = 5148 # The number of unique users
user_article_interactions = 45993 # The number of user-article interactions
```
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
```
# most viewed article
article_counts = df.groupby(['article_id']).count()
article_counts['email'].max()
article_counts.sort_values("email", ascending=False).iloc[0,:]
most_viewed_article_id = "1429.0" # the most viewed article in the dataset as a string with one value following the decimal
max_views = 937 # the most viewed article in the dataset was viewed how many times?
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
    coded_dict = dict()
    cter = 1
    email_encoded = []
    for val in df['email']:
        if val not in coded_dict:
            coded_dict[val] = cter
            cter += 1
        email_encoded.append(coded_dict[val])
    return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
    '`50% of individuals have _____ or fewer interactions.`': median_val,
    '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
    '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
    '`The most viewed article in the dataset was viewed _____ times.`': max_views,
    '`The article_id of the most viewed article is ______.`': most_viewed_article_id,
    '`The number of unique articles that have at least 1 rating ______.`': unique_articles,
    '`The number of unique users in the dataset is ______`': unique_users,
    '`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
```
### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
```
df.head()
def get_top_articles(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article titles
    '''
    top_articles = df.groupby(['article_id', 'title']).size()\
        .reset_index(name='counts').sort_values('counts', ascending=False)[:n].title.tolist()

    return top_articles  # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article ids
    '''
    top_articles = df.groupby("article_id").count()["title"].sort_values(ascending=False).index[:n].astype('str')

    return top_articles.tolist()  # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```
### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
```
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
# Fill in the function here
# one row per (user, article) pair, then pivot to a 0/1 matrix
user_item = df.drop_duplicates(subset=['user_id', 'article_id']).groupby(['user_id', 'article_id']).size().unstack()
# fill missing values with 0
user_item = user_item.fillna(0)
# convert int
user_item = user_item.astype('int')
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
```
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
```
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every user to the provided user based on the dot product
Returns an ordered list of user ids, from most to least similar
'''
# compute similarity of each user to the provided user
user_similr = user_item.loc[user_id,:].dot(user_item.T)
# sort by similarity
user_similr = user_similr.sort_values(ascending=False)
# create list of just the ids
# remove the own user's id
most_similar_users = user_similr.loc[~(user_similr.index==user_id)].index.values.tolist()
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
```
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
# Your code here
article_names = []
# select articles with the same article_id and drop duplicates
article_names = df[df['article_id'].isin(article_ids)]['title'].drop_duplicates().values.tolist()
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the doc_full_name column in df_content)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# Your code here
user_idx = user_item.loc[user_id, :] #get all articles for this user id
article_ids = user_idx[user_idx == 1].index.values.astype('str').tolist() #get articles user interacted with
article_names = get_article_names(article_ids) # get article names
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# Your code here
most_similar_users = find_similar_users(user_id) # get most similar users
user_article_ids = set(get_user_articles(user_id)[0]) # get article ids
recs = []
# create recommendations for this user
for user_neighb in most_similar_users:
neighb_article_ids = set(get_user_articles(user_neighb)[0])
# add unseen articles, skipping any already collected from closer users
recs += [a for a in neighb_article_ids - user_article_ids if a not in recs]
if len(recs) > m:
break
recs = recs[:m]
return recs # return your recommendations for this user_id
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
```
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
```
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
num_interactions - the number of articles viewed by the user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
# Your code here
colName = ['neighbor_id', 'similarity', 'num_interactions'] # column names
neighbors_df = pd.DataFrame(columns= colName) # create dataframe to hold top users
# populate the dataframe
for id in user_item.index.values:
if id != user_id:
neighbor_id = id
# get user to user similarity
similarity = user_item[user_item.index == user_id].dot(user_item.loc[id].T).values[0]
# get number of user-to-article interactions
num_interactions = user_item.loc[id].values.sum()
neighbors_df.loc[neighbor_id] = [neighbor_id, similarity, num_interactions]
neighbors_df['similarity'] = neighbors_df['similarity'].astype('int')
neighbors_df['neighbor_id'] = neighbors_df['neighbor_id'].astype('int')
# most similar first; ties broken by the most active users, per the docstring
neighbors_df = neighbors_df.sort_values(by = ['similarity', 'num_interactions'], ascending = [False, False])
return neighbors_df # return the dataframe
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose articles with the articles with the most total interactions
before choosing those with fewer total interactions.
'''
# Your code here
# get similar users
neighbours = get_top_sorted_users(user_id)
top_similar_users = neighbours['neighbor_id'].values.tolist()
recs = [] # recommended article Id's
# get articles read by the user
user_article_ids = set(map(float, get_user_articles(user_id)[0])) # ids as floats, matching df
for neighbour_id in top_similar_users:
recs += df[df['user_id'] == neighbour_id]['article_id'].values.tolist()
recs = list(set(recs))
# selecting articles not seen by user_id
recs = [x for x in recs if x not in user_article_ids]
# rank the candidate articles by total interactions so the most popular come first
recs_df = df[df.article_id.isin(recs)].groupby(['article_id', 'title']).size().reset_index(name='counts').sort_values('counts', ascending=False).head(m)
recs = recs_df['article_id'].values.tolist() # get ids
rec_names = recs_df['title'].values.tolist() # get titles
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
```
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).iloc[0].neighbor_id # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).iloc[9].neighbor_id # Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
```
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
**Provide your response here.**
I will choose ```user_user_recs_part2```. It's a good start to recommend articles from the most active users and make sure these articles are the most interacted articles as well. For new users, we can ask them about their preferences, then recommend top articles that are matching this preference. Once we have more data on them, we can move to matrix factorization.
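As a minimal sketch of that preference-based fallback (the toy interaction table, titles, and the helper name `recs_for_new_user` are all made up for illustration; in the notebook, `df` and `get_top_articles` would be used instead):

```python
import pandas as pd

# toy stand-in for the interactions dataframe
df_toy = pd.DataFrame({
    "article_id": [1, 1, 2, 2, 2, 3],
    "title": ["intro to python", "intro to python",
              "deep learning basics", "deep learning basics",
              "deep learning basics", "sql for analysts"],
})

def recs_for_new_user(keyword, n=2, df=df_toy):
    """Popularity-ranked titles, preferring those that match a stated interest."""
    ranked = (df.groupby("title").size()
                .sort_values(ascending=False).index.tolist())
    matches = [t for t in ranked if keyword.lower() in t.lower()]
    rest = [t for t in ranked if t not in matches]
    return (matches + rest)[:n]

print(recs_for_new_user("python"))  # preference match first, then most popular
```

Padding the preference matches with globally popular titles keeps the list full even when the stated interest matches few articles.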
`7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
```
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10) # Your recommendations here
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
```
### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
# NLTK imports assumed available in the environment (not loaded in the Set-up above)
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
def tokenize(x):
'''
Tokenize a string into words.
Args:
x(string): string to tokenize.
Returns:
(list): list of lemmatized words
'''
# get stop words
stop_words = (set(stopwords.words('english')) | set(nltk.corpus.abc.words()))
tokens = word_tokenize(x) # split each article title into individual words
lemmatizer = WordNetLemmatizer()
clean_tokens=[]
for token in tokens:
#clean each token from whitespace and punctuation, and convert to root word
clean_token = lemmatizer.lemmatize(token).lower().strip()
clean_tokens.append(clean_token)
filtered = [word for word in clean_tokens if word not in stop_words and word.isalpha()]
return filtered
def make_content_recs(data_id, user_id=True, m=10, df=df):
'''
This recommender uses the nltk library to find the most common words
(related to content) across the titles the user has already read.
It then scores every other article by the number of those common words
appearing in its title, breaking ties by article popularity.
Args:
data_id (str) - id of either user or article
user_id (bool) - if true, make recs based on user
m (int) - number of recommendations to give based on term
Returns:
recs (list) - list of article ids that are recommended
rec_names (list) - list of article names that are recommended
'''
if(user_id):
user_id = data_id
try:
# get past articles read by the user
article_ids, _ = get_user_articles(user_id)
except KeyError: # user does not exist
print('User Doesn\'t Exist, Recommending Top Articles')
recs = get_top_article_ids(m)
return recs, get_article_names(recs)
else:
article_ids = data_id
title_data = df.drop_duplicates(subset='article_id') #drop duplicates
titles = title_data[title_data.article_id.isin(list(map(float, article_ids)))].title # get article titles
#tokenize the words in each article title
title_words=[]
tokenized = tokenize(titles.str.cat(sep=' '))
title_words.extend(tokenized)
#find the highest occurring words
common_words = pd.value_counts(title_words).sort_values(ascending=False)[:10].index
top_matches={}
# measure of similarity: count number of occurences of each common word in other article titles
for word in common_words:
word_count = pd.Series(title_data.title.str.count(word).fillna(0)) #gets occurences of each word in title
top_matches[word] = word_count
# most common words
top_matches = pd.DataFrame(top_matches)
top_matches['top_matches'] = top_matches.sum(axis=1)
top_matches['article_id'] = title_data.article_id.astype(float)
# get most interacted with articles
article_occurences = pd.DataFrame({'occurences':df.article_id.value_counts()})
# sort matches by most popular articles
top_matches = top_matches.merge(article_occurences, left_on='article_id', right_index=True)
top_matches.sort_values(['top_matches', 'occurences'], ascending=False, inplace=True)
# drop already read articles
recs_df = top_matches[~top_matches.article_id.isin(list(map(float, article_ids)))]
# get rec id and names
recs = recs_df.article_id[:m].values.astype(str)
rec_names = get_article_names(recs)
return recs, rec_names
```
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
This content based recommender scans through previously interacted articles. The nltk library finds the most common words in the titles of each article.
Based on these most common words, the recommender counts the relevant words in the title of each article, and ranks recommendations by the number of title matches together with the general popularity of the article.
If the user has not read any articles yet, then we can't really give any content based recommendations, and just return some of the most popular articles.
There is a lot of potential improvement and optimization for this recommender. For example, one could construct a custom NLTK corpus to filter out article-specific words; currently a combination of a couple of standard NLTK corpora is used. Furthermore, if df_content had information for all articles, we could expand this recommender to look through not only the title but also the body of the articles.
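As a sketch of one such improvement, TF-IDF weighting (here via scikit-learn, which is already listed in the Set-up) would let matches on distinctive title words count for more than matches on common ones; the article titles below are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical article titles standing in for df.title
titles = [
    "using deep learning to reconstruct high-resolution audio",
    "deep learning with tensorflow",
    "self-service data preparation with ibm data refinery",
]

# TF-IDF down-weights words that appear in many titles
tfidf = TfidfVectorizer(stop_words="english")
title_vecs = tfidf.fit_transform(titles)

# cosine similarity between the first title and every title
sims = cosine_similarity(title_vecs[0], title_vecs).ravel()
most_similar = sims.argsort()[::-1][1]  # skip the title itself
print(titles[most_similar])  # the title sharing "deep learning"
```

Replacing the raw `str.count` matching above with these cosine scores would be a drop-in change to the ranking step.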
`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
We are using the NLTK library to search for articles with similar keywords. If the user has no history yet, then no content-based recommendation is given, and we will return some of the most popular articles.
We can improve this further by matching keywords semantically, not just exactly. Also, if the user doesn't like a specific article because it has deep learning content, it doesn't mean that he or she will dislike every article with deep learning content. It would be interesting to augment content-based recommendation with an ML algorithm that can handle such situations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
# make recommendations for a brand new user
make_content_recs('0.0', user_id=True)
# make a recommendations for a user who only has interacted with article id '1427.0'
make_content_recs(['1427.0'], user_id=False)
```
### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
```
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
```
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix) # use the built in to get the three matrices
print("Number of Nans in the users to item interactions matrix is: {}".format(np.isnan(user_item_matrix).sum().sum()))
print("Number of Nans in the users to latent features matrix is: {}".format(np.isnan(u).sum().sum()))
print("Number of Nans in the sigma matrix is: {}".format(np.isnan(s).sum().sum()))
print("Number of Nans in the items to latent features matrix is: {}".format(np.isnan(vt).sum().sum()))
```
**Provide your response here.**
We can use Singular Value Decomposition because **there are no missing values (NANs) in our data.**
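A quick sketch of why this matters: numpy's SVD reconstructs a dense matrix exactly from its factors, but fails outright as soon as the input contains NaNs (toy matrix below, not the real user-item matrix):

```python
import numpy as np

# a tiny dense 0/1 interaction matrix with no missing values
m = np.array([[1., 0., 1.],
              [0., 1., 0.]])
u, s, vt = np.linalg.svd(m)

# rebuild the matrix from its factors: U @ Sigma @ Vt
sigma = np.zeros_like(m)
sigma[:len(s), :len(s)] = np.diag(s)
reconstructed = u @ sigma @ vt
print(np.allclose(reconstructed, m))  # True

# with a missing value, LAPACK cannot converge and numpy raises LinAlgError
m_nan = m.copy()
m_nan[0, 1] = np.nan
try:
    np.linalg.svd(m_nan)
    svd_failed = False
except np.linalg.LinAlgError:
    svd_failed = True
print(svd_failed)
```

This is why ratings-style matrices with missing entries need FunkSVD-type iterative methods instead, while our complete 0/1 matrix can use plain `np.linalg.svd`.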
`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
```
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.figure(figsize=(15,10))
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
```
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many articles can we make predictions for in the test set?
* How many articles are we not able to make predictions for because of the cold start problem?
```
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
# Your code here
# create user item matrix for the train dataset
user_item_train = create_user_item_matrix(df_train)
# create the test dataset
user_item_test = create_user_item_matrix(df_test)
# get the ids of the train dataset and test dataset
train_idx = set(user_item_train.index)
test_idx = set(user_item_test.index)
# get shared rows
shared_rows = train_idx.intersection(test_idx)
# get columns in train and test datasets
train_arts = set(user_item_train.columns)
test_arts = set(user_item_test.columns)
# get shared columns
shared_cols = train_arts.intersection(test_arts)
# create a new user-item matrix for test with only the shared users and articles
user_item_test = user_item_test.loc[list(shared_rows), list(shared_cols)]
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
print(user_item_test.shape[0])
print(len(test_idx) - user_item_test.shape[0])
print(user_item_test.shape[1])
print(len(test_arts) - user_item_test.shape[1])
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
'How many movies can we make predictions for in the test set?': b,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d
}
# note: these dictionary keys say 'movies' but they refer to articles
t.sol_4_test(sol_4_dict)
```
`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
```
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below
# Use these cells to see how well you can use the training
# decomposition to predict on test data
def svd_algorithm(u_train, s_train, vt_train):
""" Plot train and test accuracy of the SVD reconstruction against the number of latent features.
Args:
u_train (np.array): users-to-latent-features matrix from the training SVD
s_train (np.array): singular values from the training SVD
vt_train (np.array): latent-features-to-articles matrix from the training SVD
Returns:
None: displays the accuracy plot
"""
num_latent_feats = np.arange(10,700+10,20)
sum_errs_train = []
sum_errs_test = []
all_errs = []
# the test rows and columns don't depend on k, so compute them once
row_idxs = user_item_train.index.isin(test_idx)
col_idxs = user_item_train.columns.isin(test_arts)
u_test = u_train[row_idxs, :]
vt_test = vt_train[:, col_idxs]
for k in num_latent_feats:
# split data
s_train_lat, u_train_lat, vt_train_lat = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
u_test_lat, vt_test_lat = u_test[:, :k], vt_test[:k, :]
# dot product:
user_item_train_preds = np.around(np.dot(np.dot(u_train_lat, s_train_lat), vt_train_lat))
user_item_test_preds = np.around(np.dot(np.dot(u_test_lat, s_train_lat), vt_test_lat))
all_errs.append(1 - ((np.sum(user_item_test_preds)+np.sum(np.sum(user_item_test))) \
/(user_item_test.shape[0]*user_item_test.shape[1])))
# calculate the error of each prediction
diffs_train = np.subtract(user_item_train, user_item_train_preds)
diffs_test = np.subtract(user_item_test, user_item_test_preds)
# get total Error
err_train = np.sum(np.sum(np.abs(diffs_train)))
err_test = np.sum(np.sum(np.abs(diffs_test)))
sum_errs_train.append(err_train)
sum_errs_test.append(err_test)
# plot accuracy for train and test vs number of latent features
plt.figure(figsize=(15,10))
# latent features and training
plt.plot(num_latent_feats, 1 - np.array(sum_errs_train)/(user_item_train.shape[0]*user_item_train.shape[1]), label='Train', color='darkred')
# latent features and testing
plt.plot(num_latent_feats, 1 - np.array(sum_errs_test)/(user_item_test.shape[0]*user_item_test.shape[1]), label='Test', color='darkblue')
plt.plot(num_latent_feats, all_errs, label='Total Error', color = "orange")
plt.xlabel('Number of Latent Features')
plt.ylabel('Accuracy')
plt.legend();
# call the svd algorithm
svd_algorithm(u_train, s_train, vt_train)
```
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
**Your response here.**
- Test accuracy decreases as the number of latent features increases.
- Only 20 of the test users also appear in the training data, so SVD predictions are possible only for them.
- To handle the cold-start problem for the remaining users, we can fall back on rank-based or content-based recommendation.
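To determine whether any of these recommenders improves on how users currently find articles, an online A/B test is the natural next step: serve the new recommendations to a random half of users and compare click-through rates. A minimal sketch of the significance check, using made-up counts and only the standard library:

```python
from math import sqrt
from statistics import NormalDist

# hypothetical A/B results: clicks / impressions per group
clicks_a, n_a = 1200, 20000   # control: current article feed
clicks_b, n_b = 1350, 20000   # treatment: user-user recommendations

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)

# two-proportion z-test for the difference in click-through rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"CTR control={p_a:.3f}, treatment={p_b:.3f}, z={z:.2f}, p={p_value:.4f}")
```

With these invented numbers the uplift would be significant at the 1% level; real traffic, of course, decides in practice.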
<a id='conclusions'></a>
### Extras
Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!
## Conclusion
> Congratulations! You have reached the end of the Recommendations with IBM project!
> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
## Directions to Submit
> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
```
# Part 1: Data Wrangling
## Introduction
This project is a self-made, end-to-end machine learning project in which I scrape a website called 'Jendela 360'. The scraped dataset is saved in a csv file named 'Apartment Dataset Raw'. The dataset contains the details of apartment units available to be rented in Jakarta and its surroundings (the Jabodetabek region) on December 2nd, 2020. The data discussed here might not be up-to-date.
Problem Statement of this project:
"Based on the scraped data of apartments in Jakarta and its surroundings, the writer aims to construct a machine learning model to predict the annual rent price of apartment units. If possible, the writer also aims to find which features/factors have the most impact on an apartment unit's annual rent price."
In the first notebook, we are going to load the raw dataset and conduct data wrangling to draw insights and clean the data. Our goal is to have a cleaned dataset at the end of this notebook, so we can use the cleaned data to create and test regression models in the second notebook.
Last but not least, this project is non-profit and made for learning purposes only.
## Importing Packages
```
# Essentials
import numpy as np
import pandas as pd
import datetime
import random
# Plots
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Importing Dataset
```
raw_df = pd.read_csv('Web Scraping/Apartment Dataset Raw.csv')
#We are going to save the unaltered dataset as raw_df, and use the dataframe 'df' to do the next data wrangling operations
df = raw_df
df.head()
df = df.rename({'Unnamed: 0' : 'Index'}, axis = 'columns')
```
## Data Cleaning
### Raw Data Shape and Column Description
```
print(df.columns)
df.shape
```
Each row represents a unique apartment unit which was displayed on Jendela 360 for rent on 18th October 2020. We have 5339 rows and 47 columns. The columns represent various characteristics of each unit, and are described as follows.
The following columns describe the identification data of each unit (location, name, etc).
* Index: the index of each row (self-explanatory), starting at 0.
* URL: the URL of each apartment unit's page on the Jendela 360 website.
* Unit_Name: the apartment unit name on its page.
* Unit_ID: the ID of each page (the last seven characters of the URL). Unique for each apartment unit.
* Apt_Name: the apartment building name of the unit.
* Street: the street address of the unit.
* Locality: the local district of the unit.
* Region: the city of the unit.
* Longitude and Latitude: the geographical longitude and latitude coordinate of the unit
* Floor: the floor location of the unit.
* Tower: the name of the tower in which the unit is located in.
The following columns describe the facilities of each apartment unit. The two columns which house numerical (quantitative) data about each apartment unit's facilities are:
* No_Rooms: the number of bedrooms in each apartment unit.
* Area: the area in square meters of each apartment unit.
The other columns which describe the facilities of each unit are categorical in nature. The value of each column is '1' if the facility is present, and '0' if the facility is not present. These columns are:
* Furnished (1 represents that the unit is fully furnished, and vice versa)
* AC
* Water_Heater
* Dining_Set
* Electricity
* Bed
* Access_Card
* Kitchen
* Fridge
* Washing_Machine
* TV
* ATM
* TV_Cable
* Grocery
* Internet
* Swim_Pool (swimming pool)
* Laundry
* Security
* Basketball (basketball field)
* Multipurpose_room
* Gym
* Jogging (jogging track)
* Tennis (tennis field)
* Restaurant
* Playground
The following columns describe the fee of each unit. The only fee that each apartment has is the annual rent price. Not all apartment units are available to be rented on a monthly term. There are also cases where the deposit and service charges are not listed. Furthermore, it will be very easy to predict the annual price if we know the monthly price, as we just need to multiply it by 12. That's why we are going to remove every fee column in the dataset and only take the annual rent price (in rupiah) as the dependent variable of our model.
* Currency: the currency unit of the listed price.
* Monthly_Price: the monthly payment fee if the tenant wishes to rent it on monthly term.
* Annual_Price: the annual payment fee if the tenant wishes to rent it on yearly term.
* Deposit_Currency: the currency unit of the listed deposit charge.
* Deposit_Charge: the initial deposit charge.
* Service_Currency: the currency unit of the service charge.
* Service_Charge: the service charge of the unit.
### Omitting ERROR Rows
The web scraper uses a ```try:...except:``` block to keep on reading and scraping new pages even if the current iteration raises an error. This is done so the scraping process can be automated, and if a web page raises an error, we don't have to restart the scraping process from the beginning. If a page raises an error, the whole row (except the URL) will be filled with the string 'ERROR'. The best way to find 'ERROR' rows is to find which rows have an 'ERROR' value in the Apt_Name column, as that is a field that exists on all apartment unit web pages.
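The ```try:...except:``` loop described above can be sketched as follows. This is a hypothetical reconstruction: `parse_page` and the URL list are stand-ins, not the project's actual scraper code.

```python
# Hypothetical sketch of the scraper's error-handling loop.
def scrape_all(urls, parse_page):
    rows = []
    for url in urls:
        try:
            row = parse_page(url)  # may raise on a malformed page
        except Exception:
            # keep the URL but mark the row as 'ERROR' instead of aborting
            row = {'URL': url, 'Apt_Name': 'ERROR'}
        rows.append(row)  # the run continues regardless of failures
    return rows
```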
In this step, we are going to remove all 'ERROR' rows.
```
df.shape
df = df[df.Apt_Name != 'ERROR']
df = df.reset_index(drop = True, inplace=False)
df.shape
```
We can see that there are 18 rows which are omitted. These rows are the 'ERROR' rows.
### Identifying the Dependent/Outcome Variable
Referring to the initial problem statement of this project, we hereby decide that the annual rent price of the apartment will be our dependent variable for the regression model. Furthermore, we should not look at the values from monthly price, deposit charge, and service charge, as we would like to predict the annual rent price only using the apartment unit's identification data (location) and facilities.
After deciding which variable will be our outcome variable, we should make sure that the annual price data is in the same currency unit. If the currency of the annual rent price is in dollars, we have to convert it to Rupiah.
The assumption used is that 1 USD = 14,700 IDR.
```
df.Currency.value_counts()
```
We see that there are 5200 apartment unit rent prices listed in Rupiah and 57 prices listed in US Dollars. We need to convert the prices of these 57 apartment units from USD to IDR. To convert them, we need to multiply the Annual_Price value by 14700 if the value of Currency equals 'USD'. However, before doing any of that, we need to make sure that the values in the Annual_Price column are read as numbers by pandas.
```
df.Annual_Price
```
As we can see, 'Annual_Price' has the data type 'object'. This means we need to convert it to float first, before multiplying the USD prices by 14700 to convert them properly.
```
Rupiah_Annual_Price = list()
currency_changed = 0
for i, price in enumerate(df.Annual_Price):
if df.Currency[i] == 'USD':
Rupiah_Annual_Price.append(float(price)*14700)
currency_changed += 1
else:
Rupiah_Annual_Price.append(float(price))
df['Rupiah_Annual_Price'] = Rupiah_Annual_Price
print(currency_changed)
```
The currency_changed counter tells us how many currency conversions have been done, and we are glad to see that there are 57 conversions, which matches the number of 'USD' occurrences in the 'Currency' column of our dataset.
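As a side note, the same conversion can be done without an explicit loop. The sketch below assumes the same 14,700 IDR/USD rate; the helper name is ours, not part of the notebook.

```python
import numpy as np
import pandas as pd

def to_rupiah(df, rate=14700):
    # Vectorized equivalent of the conversion loop above:
    # parse prices as numbers, then scale only the USD rows.
    price = pd.to_numeric(df['Annual_Price'])
    return price * np.where(df['Currency'] == 'USD', rate, 1)
```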
Next, we are going to remove the columns which are no longer needed ('Currency', 'Annual_Price', 'Monthly_Price', 'Deposit_Currency', 'Deposit_Charge', 'Service_Currency', 'Service_Charge').
We are then renaming the 'Rupiah_Annual_Price' to 'AnnualPrice'.
```
df = df.drop(['Currency', 'Annual_Price', 'Monthly_Price', 'Deposit_Currency', 'Deposit_Charge', 'Service_Currency',
'Service_Charge'], axis = 'columns')
df = df.rename({'Rupiah_Annual_Price':'AnnualPrice'}, axis = 'columns')
df.head()
```
## Exploratory Data Analysis
```
df.columns
```
In this step, we are going to do some data exploration to gain more insights on our dataset. First, we'll drop columns which we think might not be insightful for our model. We'll drop the 'Street' and 'Tower' columns as they are quite difficult to parse and do not supply us with any insightful information. The 'Street' column is irrelevant as we have the 'Locality', 'Region', 'Longitude', and 'Latitude' columns to draw geospatial insights from. The 'Tower' column is dropped because it holds the name of each unit's tower, and each apartment complex has different tower names. We suspect that 'Unit_Name' and 'Apt_Name' might be dropped too, but we'll inspect them in a little bit to see if there are any insights we can draw from those columns.
Note: We'll keep the 'URL' and 'Unit_ID' until we finish exploring the data in case we want to check on specific apartment units.
```
df = df.drop(['Street', 'Tower'], axis = 'columns')
```
Next, we are going to inspect the 'Unit_Name' and 'Apt_Name' columns.
```
df[['Apt_Name', 'Unit_Name']].head()
```
It seems that the 'Apt_Name' column just indicates the overall name of the apartment complex, while 'Unit_Name' mentions the number of bedrooms and, in some cases, the furnished status of the apartment. Interestingly, the furnished status in 'Unit_Name' is divided into three levels: 'Non Furnished', 'Semi Furnished', and 'Fully Furnished'. However, in our 'Furnished' column, there are only two levels: 'Non Furnished' and 'Fully Furnished'.
We can add a new level to our 'Furnished' feature by creating a 'Semi Furnished' level if the 'Unit_Name' of a particular row has the word 'semi' in it. We'll create a new column called 'FurnishedNew' for this feature.
```
FurnishedNew = list()
for i in range(len(df['Index'])):
if df.Furnished[i] == '1':
FurnishedNew.append('Full')
elif df.Furnished[i] == '0':
if 'semi' in df.Unit_Name[i].lower():
FurnishedNew.append('Semi')
else:
FurnishedNew.append('Non')
df['FurnishedNew'] = FurnishedNew
df.FurnishedNew.value_counts()
```
We'll see if this new feature is better than the existing 'Furnished' column. If this feature makes the model worse, then we'll simply use the two level 'Furnished' feature. We'll then drop the 'Apt_Name' and 'Unit_Name' column.
```
df = df.drop(['Unit_Name', 'Apt_Name'], axis = 'columns')
df.head()
```
Next, we are going to analyse each column and see if it is a good feature for our model or not. We will be plotting each feature against the predicted value, the 'AnnualPrice'. While there are other ways to perform feature selection which are relatively more automated, the writer chooses to do this to gain more insights personally on the dataset.
#### Number of Bedrooms
```
bedroom_df = df[['URL','No_Rooms', 'AnnualPrice']]
bedroom_df.No_Rooms.value_counts()
```
The apartment units in our dataset have 0 to 6 bedrooms. What does '0' bedrooms mean? During the scraping process, the writer discovered that studio apartment units are written as having '0' bedrooms in the ```.json``` schema of the web page. We can then use ```df.groupby``` to see the average annual rent price of each category.
```
avg_no_rooms = bedroom_df.groupby('No_Rooms')['AnnualPrice'].mean().reset_index().rename({'AnnualPrice':'Average Annual Price'}, axis = 'columns')
print(avg_no_rooms)
avg_no_rooms.plot(x = 'No_Rooms', y = 'Average Annual Price', kind = 'bar', figsize = [5,5])
```
First column and we're already greeted with a surprise. Why is the studio apartment unit's average price higher than the average price of apartment units with 6 bedrooms? This is why exploring our dataset manually, or the way I prefer to say it - 'personally', is important. This data does not match our common sense, and we need to investigate it. The first thing to do in this situation is try to check for outliers.
```
studio_check = bedroom_df['No_Rooms'] == '0'
sns.boxplot(x = bedroom_df[studio_check].AnnualPrice)
```
First, we filter the 'AnnualPrice' column by 'No_Rooms' category. After selecting the annual rent prices of apartment units which are studio-typed, we can draw the boxplot using seaborn and we see there are two outliers. Let's check these out.
```
bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
```
After sorting by Annual Price, the top two apartment units have prices that are clearly beyond 'the norm' for studio apartment units. Using ```pd.set_option('display.max_colwidth', None)```, we can get the URLs for these two apartment units, and then see for ourselves on their respective pages.
```
pd.set_option('display.max_colwidth', None)
bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(2).URL
```
Upon looking at the first link, we see that this 25-square-meter studio apartment is priced at fifty-four million dollars. I think we can see the problem here. There are a few pages in which the currency used is wrong. Even apartments with 6 bedrooms are not priced at fifty-four million dollars a year. This unit's price should be fifty-four million rupiah.
The second unit in question also shares the problem. This time, the studio apartment is priced at thirty million dollars. We first need to clean this mess before we continue exploring the other columns. Let's also check if other number of bedrooms share the same issue.
```
br2_check = bedroom_df['No_Rooms'] == '2'
sns.boxplot(x = bedroom_df[br2_check].AnnualPrice)
bedroom_df[br2_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
```
Turns out the problem isn't unique to studio apartments. We have to solve this issue first then, and unfortunately this can only be done in a relatively manual manner (checking the URL one by one). I'll get back after resolving this issue.
#### Finding and Fixing Outliers based on Number of Bedrooms
Create boolean identifiers
```
studio_check = bedroom_df['No_Rooms'] == '0'
br1_check = bedroom_df['No_Rooms'] == '1'
br2_check = bedroom_df['No_Rooms'] == '2'
br3_check = bedroom_df['No_Rooms'] == '3'
br4_check = bedroom_df['No_Rooms'] == '4'
br5_check = bedroom_df['No_Rooms'] == '5'
br6_check = bedroom_df['No_Rooms'] == '6'
```
Fix for No_Rooms = '0' (Studio-Type)
```
bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
df.loc[df.Unit_ID == 'sgpa014', 'AnnualPrice'] = 54000000
df.loc[df.Unit_ID == 'pgva007', 'AnnualPrice'] = 30000000
```
Fix for No_Rooms = '1' (One Bedroom)
```
bedroom_df[br1_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
sns.boxplot(x = bedroom_df[br1_check].AnnualPrice)
```
I think the rent prices for 1-bedroom apartment units are skewed to the right. None of the five highest-priced one-bedroom apartment units have annual rent prices displayed in dollars. However, we're going to remove the single highest-priced apartment unit, as it's quite far from the rest of the data points.
```
i = df[((df.Unit_ID == 'frrb001'))].index
df = df.drop(i)
```
Fix for 'No_Rooms' = '2' (Two Bedrooms)
```
bedroom_df[br2_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
sns.boxplot(x = bedroom_df[br2_check].AnnualPrice)
df.loc[df.Unit_ID == 'blmc009', 'AnnualPrice'] = 50400000
```
Fix for 'No_Rooms' = '3' (Three Bedrooms)
```
bedroom_df[br3_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
sns.boxplot(x = bedroom_df[br3_check].AnnualPrice)
```
It turns out that the highest three-bedroom price is still in Rupiah. However, the rightmost data point is considerably far away from the rest of the data points, and we'll consider it an outlier to be removed.
```
i = df[((df.Unit_ID == 'esdd002'))].index
df = df.drop(i)
```
Fix for 'No_Rooms' = '4' (Four Bedrooms)
```
bedroom_df[br4_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
sns.boxplot(x = bedroom_df[br4_check].AnnualPrice)
```
Although there seem to be two outliers, upon further checking they don't appear to be a case of misused currency. However, those two rightmost points are considerably far away from the rest of the data points, and we'll consider them outliers to be removed. These two prices are even higher than those of apartment units with 6 bedrooms, and do not represent the norm.
```
i = df[((df.Unit_ID == 'pkrf001'))].index
df = df.drop(i)
i = df[((df.Unit_ID == 'ppre001'))].index
df = df.drop(i)
```
Fix for 'No_Rooms' = '5' (Five Bedrooms)
```
bedroom_df[br5_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
```
There are no apartment units in our dataset which have five bedrooms. We are not going to remove anything for now.
Fix for 'No_Rooms' = '6' (Six Bedrooms)
```
bedroom_df[br6_check].sort_values(by=['AnnualPrice'], ascending=False).head(5)
```
There is only one apartment unit with six bedrooms. We are not going to remove anything for now - however, we might combine the units with 4, 5, and 6 bedrooms into one category.
```
br456_check = (bedroom_df.No_Rooms== '4') | (bedroom_df.No_Rooms == '5') | (bedroom_df.No_Rooms == '6')
sns.boxplot(x = bedroom_df[br456_check].AnnualPrice)
```
#### Checking on the Updated Dataframe for No_Rooms Feature
```
New_No_Rooms = list()
for i, br_no in enumerate(df.No_Rooms):
br_float = int(br_no)
if br_float >= 4:
New_No_Rooms.append(4)
else:
New_No_Rooms.append(br_float)
df = df.drop(['No_Rooms'], axis = 'columns')
df['No_Rooms'] = New_No_Rooms
bedroom_df_updated = df[['URL','No_Rooms', 'AnnualPrice']]
avg_no_rooms = bedroom_df_updated.groupby('No_Rooms')['AnnualPrice'].mean().reset_index().rename({'AnnualPrice':'Average Annual Price'}, axis = 'columns')
print(avg_no_rooms)
avg_no_rooms.plot(x = 'No_Rooms', y = 'Average Annual Price', kind = 'bar', figsize = [5,5])
sns.boxplot(x = "No_Rooms", y = 'AnnualPrice', data = df)
```
There we go. Now it makes sense - the more bedrooms an apartment unit has, the higher the annual rent price. Also, there are no longer apartment units priced way above the other units in the same category. Through evaluating outliers and checking the source data, we have 'cleaned' the 'No_Rooms' feature for now.
The last step taken for this feature column is grouping the categories of '4', '5', and '6'. There are only 3 units out of our more than 5000 rows which have 5 and 6 bedrooms, and that is not quite representative.
Now, we might ask: why is the new category (of units with 4 or more bedrooms) given the value '4'? Shouldn't it be '4 and more'?
Yes, it represents 4 or more bedrooms. However, this categorical variable will be treated as an ordinal variable in the machine learning model. That's why we have to keep the values as integers. We'll just have to keep in mind later, when writing the final report, that the number '4' in the No_Rooms feature represents not only units with 4 bedrooms, but also units with more than 4.
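The loop above can also be written as a single vectorized step. A sketch, where `cap_bedrooms` is an illustrative name:

```python
import pandas as pd

def cap_bedrooms(no_rooms, cap=4):
    # Cast the string column to integers, then cap at 4 so that
    # 4-, 5- and 6-bedroom units share one ordinal category.
    return pd.to_numeric(no_rooms).clip(upper=cap)
```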
#### Analyzing Location Feature Columns
The next part of our features to be discussed are the columns which describe where our unit is on the map. There are four columns being discussed here - two which are categorical ('Locality' and 'Region'), as well as two continuous columns ('Longitude' and 'Latitude'). First let's look at the 'Region' columns.
```
df.Region.value_counts()
```
Whoa. It turns out the scraped pages also include apartment units from outside Jakarta and its surroundings. To stay true to our problem statement, we'll remove regions outside 'Jabodetabek'.
```
df = df[(df.Region == 'Jakarta Selatan') | (df.Region == 'Jakarta Barat') | (df.Region == 'Jakarta Pusat') | (df.Region == 'Jakarta Timur') | (df.Region == 'Jakarta Utara') | (df.Region == 'Tangerang') | (df.Region == 'Bekasi') | (df.Region == 'Depok') | (df.Region == 'Bogor')]
```
Let's visualize the data using a boxplot again. Now, we're investigating if differences in regions affect annual rent price.
```
dims = (12,8)
fig, ax = plt.subplots(figsize=dims)
sns.boxplot(x = "Region", y = 'AnnualPrice', data = df, ax=ax)
JakBar = df['Region'] == 'Jakarta Barat'
df[JakBar][['URL', 'AnnualPrice']].sort_values(by = ['AnnualPrice'], ascending=False).head(1)
```
From the visualization, we can see that the region in DKI Jakarta with the highest average annual rent price is 'Jakarta Selatan', followed by 'Jakarta Pusat', 'Jakarta Barat', 'Jakarta Utara', and 'Jakarta Timur' consecutively. Regions outside Jakarta have lower average prices than regions inside Jakarta. This distribution makes sense, as it is common knowledge among Jakartans that the region with the highest property prices in Jakarta is 'Jakarta Selatan'.
There seems to be an outlier in 'Jakarta Barat', but upon further checking - it's the only unit with 6 bedrooms, so the price reflects more of its number of rooms than its region. We will not remove this data point for now.
There are a few options on how we are going to use the locations columns in our model:
Option 1: Use one-hot encoding on Region. This seems to be the go-to solution if we wish to make location a categorical variable. We'll divide the area into six major regions - West, North, South, East, and Central Jakarta, plus outside Jakarta (we group Bogor, Depok, Tangerang, and Bekasi into one region).
Option 2: Use one-hot encoding on Locality. There are over 90 different local districts in this dataset, and one-hot encoding would mean that we'll have 90+ extra feature columns of zeros and ones. Furthermore, a lot of these local districts have only one apartment unit.
Option 3: Use the 'Longitude' and 'Latitude' columns as continuous variables. This could be the case if we notice a pattern in the longitude and latitude data. We could also run a clustering algorithm on the longitude and latitude data.
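A minimal sketch of Option 1 using `pd.get_dummies`. The grouping and column names here are illustrative assumptions, not the final encoding:

```python
import pandas as pd

def encode_region(region):
    # Group everything outside the five Jakarta cities into one
    # 'Luar Jakarta' bucket, then one-hot encode the result.
    jakarta = {'Jakarta Selatan', 'Jakarta Barat', 'Jakarta Pusat',
               'Jakarta Timur', 'Jakarta Utara'}
    grouped = region.where(region.isin(jakarta), 'Luar Jakarta')
    return pd.get_dummies(grouped, prefix='Region')
```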
We'll look into the 'Longitude' and 'Latitude' columns first.
```
print(df.Longitude.dtype)
print(df.Latitude.dtype)
```
It seems that these two columns are classified as 'object' and not 'float' by Pandas. We need to transform them first.
```
df = df.reset_index(drop = True, inplace=False)
Longitude_Float = list()
Latitude_Float = list()
for i in range(len(df.Index)):
Longitude_Float.append(float(df.Longitude[i]))
Latitude_Float.append(float(df.Latitude[i]))
df = df.drop(['Longitude', 'Latitude'], axis = 'columns')
df['Longitude'] = Longitude_Float
df['Latitude'] = Latitude_Float
df.Longitude.plot()
```
After converting both columns to float, let's visualize each column to analyze if there are any outliers. As the geographical locations chosen for this project are quite close to each other, there shouldn't be any outliers. The 'Longitude' data makes sense: all our apartment units have longitudes between 106.6 and 107.2.
```
df.Latitude.plot()
```
The 'Latitude' feature column, however, seems to have yet another issue related to a data-entry error. Most of the apartment units have latitudes around -6, which makes sense, as Jakarta (and its surroundings) is located slightly below the Equator. However, there are a few data points which have a latitude of 6. This is suspicious, as it could very well be a case of forgetting to add '-' (the negative sign) during the data-entry process for these apartment units. For now, let's assume this to be the case, and make the latitude of these apartment units negative.
```
Latitude_fixed = [la if la<0 else -1*la for la in df.Latitude]
df = df.drop(['Latitude'], axis = 'columns')
df['Latitude'] = Latitude_fixed
df.Latitude.plot()
```
This distribution makes more sense, as the value of 'Latitude' ranges from -6.6 to -6.1 - not a big margin - and the three data points with the lowest 'Latitude' seem to be apartment units outside Jakarta (maybe in Bogor/Depok).
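The list comprehension above has a vectorized equivalent that forces every latitude negative in one step (a sketch; the helper name is ours):

```python
import pandas as pd

def fix_latitude(lat):
    # A positive latitude near Jakarta is assumed to be a missing
    # negative sign, so force every value below zero.
    return -pd.to_numeric(lat).abs()
```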
#### Analyzing Furnished Status Feature Column
Now, let's visualize and take a look at the two columns describing the furnished status of each apartment unit - the original 'Furnished', and our newly created 'FurnishedNew'.
```
fig, (ax1, ax2) = plt.subplots(1, 2)
sns.scatterplot(x = "Furnished", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax1)
sns.scatterplot(x = "FurnishedNew", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax2)
fig.tight_layout()
```
There are two takeaways from this: first, the discrepancy between non-furnished and fully furnished apartment units' prices doesn't seem to be that big. Second, our new column, 'FurnishedNew', shows that semi-furnished apartments have lower prices compared to non-furnished ones.
What should we make of this? Our new feature column doesn't seem to work well - this might be because not all semi-furnished apartment units state that they are 'semi-furnished' in their page name. The population of semi-furnished apartments may be much larger than what was labeled as 'Semi'. This explains two things: why adding an extra category doesn't work well, and why the discrepancy between '0' and '1' is not that large.
This could indicate that 'Furnished' is not a good predictor for AnnualPrice, but we'll decide it later in the next feature engineering section.
#### Analyzing Floor Position of Apartment Units
The feature column we're looking at this section is the 'Floor' column. We'll see if there are differences in annual rent price between units with different floor positions.
```
sns.boxplot(x = "Floor", y = 'AnnualPrice', data = df)
```
Not only does the discrepancy among floor locations seem minuscule, we also have quite a few apartment units with no label for their floor location. For now, let's not use this categorical variable in our model.
```
df = df.drop(['Floor'], axis = 'columns')
df = df.reset_index(drop = True, inplace=False)
```
#### Analyzing Area of Units to AnnualPrice
```
Area_Float = list()
for i in range(len(df.Index)):
Area_Float.append(float(df.Area[i]))
df = df.drop(['Area'], axis = 'columns')
df['Area'] = Area_Float
dims = (8,5)
fig, ax = plt.subplots(figsize=dims)
sns.scatterplot(x = "Area", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax)
```
Based on the above plot, we can see that the general trend is that AnnualPrice increases as Area increases. We also see that as number of bedrooms increases, area also increases. However, there are a few data points which are scattered far from the others that we need to investigate. They could be outliers and we should remove them.
```
df[['URL', 'Area']].sort_values(by=['Area'], ascending = False).head(12)
```
There are six apartment units with areas above 500 square meters. That's a huge apartment unit - two of them even reach more than seven thousand square meters. These units are not what most people have in mind when they're looking to rent an apartment, as these units come in the form of condominiums or penthouses. We'll be removing these six units from our dataset. In the deployment stage of this machine learning model, we'll limit the maximum Area to 350 square meters, as that is already a very big apartment unit.
```
outlier_ids = ['tacc001', 'tacc002', 'kmvd027', 'mqrd023', 'csbe001', 'stme008']
df = df[~df.Unit_ID.isin(outlier_ids)]
df = df.reset_index(drop = True, inplace=False)
df.shape
dims = (8,5)
fig, ax = plt.subplots(figsize=dims)
sns.scatterplot(x = "Area", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax)
```
This visualization makes more sense, as there are no far outliers. However, we notice that there are some apartment units listed as having 0 Area. We'll simply remove these units, as it's impossible for an apartment unit to have an area of 0 square meters. We'll consider 20 square meters to be the minimum apartment unit area.
We'll also be removing apartments with more than 250 square meters, because they don't seem to be that common, and most people prefer apartments below 200 square meters.
```
df = df[df['Area']>20]
df = df[df['Area']<250]
df.columns
```
#### Checking Categorical Facility Features
Our last set of features is the facilities that each unit has. During the web scraping process, I added a column which counts how many of these facilities each unit has, stored in a column called 'Total_Facilities'. Let's first take a look at this column before diving into the other facilities one by one.
```
Facilities_Int = list()
for i, count in enumerate(df.Total_Facilities):
Facilities_Int.append(int(count))
df = df.drop(['Total_Facilities'], axis = 'columns')
df['Total_Facilities'] = Facilities_Int
sns.boxplot(x="Total_Facilities", data = df)
sns.scatterplot(x="Total_Facilities", y = "AnnualPrice", data = df)
```
It seems that most apartment units have at least 10 facilities. The more facilities a unit has, the higher its rent price. Let's take a look at the units which have fewer than 10 facilities, and see if they actually have fewer than 10, or whether there are some errors here.
```
df[['URL', 'Total_Facilities', 'AnnualPrice', 'Furnished']].sort_values(by = ['Total_Facilities'], ascending = True).head(10)
```
The apartment units with low Total_Facilities tend to be non-furnished units. However, there's an oddball here - the unit with 0 'Total_Facilities' is a fully furnished unit! Upon further investigation, based on the photos of the room, there are indeed facilities, and it might be an error in inputting the data (or the unit owner/seller does not describe the facilities fully). We are going to remove that unit from our dataset. As for the other fully furnished unit with only 3 total facilities, the page and pictures show that it is indeed quite a blank unit. There are beds and sofas - but there are no fancy facilities like TV or Internet.
```
i = df[((df.Unit_ID == 'spsa001'))].index
df = df.drop(i)
df = df.reset_index(drop = True, inplace=False)
```
Next, we are going to draw boxplots for each facility. To recall, if a facility is present in a unit, the column has the value '1'; if not, it has the value '0'. We would like to see if the presence of these facilities impacts the annual rent price of apartment units. We'll remove facilities whose presence (or absence) does not impact the annual rent price.
```
fig, (ax1, ax2) = plt.subplots(1, 2)
sns.scatterplot(x = "Furnished", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax1)
sns.scatterplot(x = "FurnishedNew", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax2)
fig.tight_layout()
fig, ((ax1, ax2, ax3, ax4), (ax5, ax6, ax7, ax8),
(ax9, ax10, ax11, ax12), (ax13, ax14, ax15, ax16),
(ax17, ax18, ax19, ax20), (ax21, ax22, ax23, ax24)) = plt.subplots(6, 4, figsize = (15,25))
fig.suptitle('Facilities and AnnualPrice Visualization')
for i, ax in enumerate(fig.get_axes()):
column = df.columns[i+11]
sns.boxplot(x = column, y = 'AnnualPrice', data = df, ax = ax)
```
Based on the visualization above, for each facility the trend is clear - the presence of facilities affects the unit's annual rent price positively. This proves to be quite troublesome when we want to do feature selection - we don't know which facility is less important than the others. We'll keep most facilities for the most part, but we'll remove two of them right away: 'Electricity' and 'Access_Card'. Why? Because most apartment units have them - they're not 'facilities' anymore, they are necessities. There are 300-400 apartments which are listed as having no 'Electricity', but that doesn't really make sense. We do this because we are thinking about the deployment phase of our model. Our future users won't choose an apartment unit without 'Electricity' or 'Access_Card'.
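The removal described above could look like the sketch below (the helper name is ours, not the notebook's):

```python
import pandas as pd

def drop_universal_facilities(df, cols=('Electricity', 'Access_Card')):
    # Drop 'facility' columns that nearly every unit has, since they
    # carry almost no signal for the rent price.
    return df.drop(list(cols), axis='columns')
```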
This concludes our first part. To recap, we have:
- removed uninsightful columns
- checked and removed outliers
- fixed abnormal data (latitude and misused currency)
- visualized features
We also now have a rough understanding on the annual rent price of apartment units in Jakarta: the most expensive apartments are usually found at Jakarta Selatan - and the more area a unit occupies, the more bedrooms & facilities it has, the higher its annual rent price is.
In the next part, we are going to:
- scale numerical features
- split the dataset into testing and training set
- create and evaluate baseline model
- conduct feature engineering based on the feedback gained on baseline model
- test new models and decide which model is the best
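The split step planned above might look like the sketch below. The 80/20 ratio, `random_state`, and helper name are assumptions, not the project's final choices:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_features_target(df, target='AnnualPrice', test_size=0.2):
    # Separate the outcome column from the features, then hold out
    # a test set for the model evaluation in the next notebook.
    X = df.drop(columns=[target])
    y = df[target]
    return train_test_split(X, y, test_size=test_size, random_state=42)
```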
```
df.to_csv('Cleaned Apartment Data.csv')
```
# Reference
To run this code you will need to install [Matplotlib](https://matplotlib.org/users/installing.html) and [Numpy](https://www.scipy.org/install.html)
If you like to run the example locally follow the instructions provided on [Keras website](https://keras.io/#installation)
It's __strongly__ suggested to use a Python environment manager such as [Conda](https://conda.io/docs/) or some kind of [VirtualEnv](#)
[](https://colab.research.google.com/github/digitalideation/digcre_h2101/blob/master/samples/week02/03-shape-example.ipynb)
---
# A second look at a neural network
Let's try to adapt the shape classification model previously built with `toyNN` in JS.
We first need to create a dataset _manually_. To do so, we will define a `draw_shape` function that helps generate some random shapes.
```
%matplotlib inline
import math
import random
import matplotlib.pyplot as plt
import numpy as np
# 0 = rectangle, 1 = triangle, 2 = ellipse
# return shape
def draw_shape(max_size, type):
# Random size and fixed coordinate
# s = math.floor(random.randrange(1, max_size-4))
# x = math.floor(max_size/2)
# y = math.floor(max_size/2)
# Not so random size and random coordinate
s = int(random.randrange(max_size/2, max_size))
x = int(random.randrange(int(s/2), max_size-int(s/2)))
y = int(random.randrange(int(s/2), max_size-int(s/2)))
type = type%3
if type == 0:
art = plt.Rectangle((x-s/2, y-s/2), s, s, color='r')
if type == 1:
verts = [
(x-s/2, y-s/2),
(x, y+s/2),
(x+s/2, y-s/2)
]
art = plt.Polygon(verts, color='r')
if type == 2:
art = plt.Circle((x, y), s/2, color='r')
return art
```
We also define a helper function that converts a matplotlib figure to a NumPy array
```
# https://stackoverflow.com/a/7821917
def fig2rgb_array(fig):
fig.canvas.draw()
buf = fig.canvas.tostring_rgb()
ncols, nrows = fig.canvas.get_width_height()
return np.frombuffer(buf, dtype=np.uint8).reshape(nrows, ncols, 3)
```
Let's test the function to see if it works as expected
```
# Image and dataset size we are going to use
image_size = 48
dataset_size = 5000
# Create plot's figure and axes
# https://stackoverflow.com/a/638443
fig = plt.figure(figsize=(1,1), dpi=image_size)
ax = fig.add_subplot(111)
# Setting for the axes
ax.set_xlim(0,image_size)
ax.set_ylim(0,image_size)
# ax.axis('off')
# Draw a random shape
art = draw_shape(image_size,random.randint(0,2))
# Add the shape to the plot
# https://stackoverflow.com/a/29184075
plt.gcf().gca().add_artist(art)
# gcf() means Get Current Figure
# gca() means Get Current Axis
# convert the figure to an array
data = fig2rgb_array(fig)
print(data.shape)
```
Let's create a loop that will generate a small dataset for us
```
def generate_dataset(image_size, dataset_size):
# Those variable will contain the images and associated labels
images = np.zeros((dataset_size, image_size, image_size, 3))
labels = np.zeros((dataset_size))
# The plot figure we will use to generate the shapes
fig = plt.figure(figsize=(1,1), dpi=image_size)
for i in range(dataset_size):
# Clear the figure
fig.clf()
# Recreate the axes
ax = fig.add_subplot(111)
ax.set_xlim(0, image_size)
ax.set_ylim(0, image_size)
ax.axis('off')
# Define label
label = i%3
art = draw_shape(image_size, label)
plt.gcf().gca().add_artist(art)
# Add values to the arrays
images[i] = fig2rgb_array(fig)
labels[i] = label
return images, labels
# Generate our dataset
images, labels = generate_dataset(image_size, dataset_size)
print(images.shape)
print(labels.shape)
```
Finally, we can save our dataset for later, since it takes quite some time to generate 😉
```
np.save('datasets/shape-example-shapes1.npy', images)
np.save('datasets/shape-example-labels1.npy', labels)
```
If we need to load it we can then use the following code
```
images = np.load('datasets/shape-example-shapes1.npy')
labels = np.load('datasets/shape-example-labels1.npy')
```
We split our dataset manually into a training set and a testing set
```
# Define the size of the training set, here we use 80% of the total samples for training
train_size = int(dataset_size*.8)
# TODO: We should shuffle the dataset
# Split the dataset into train and test dataset
train_images, test_images = images[:train_size], images[train_size:]
train_labels, test_labels = labels[:train_size], labels[train_size:]
# Verify the data
print(train_images.shape)
print(train_labels.shape)
# sample_images = []
# for label, image in list(zip(train_labels, train_images))[:10]:
# fig1, ax1 = plt.subplots()
# ax1.axis('off')
# plt.title(label)
# fig1.add_subplot(111).imshow(image/255)
full_image = np.concatenate(train_images[:12]/255, axis=1)
plt.figure(figsize=(16,4))
plt.imshow(full_image)
```
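As a sketch of the `TODO` above, one way to shuffle images and labels with the same permutation before slicing (the helper name and the fixed seed are our own choices):

```python
import numpy as np

def shuffled_split(images, labels, train_size, seed=0):
    """Shuffle images and labels together, then split into train/test."""
    # Using the same permutation on both arrays keeps each image aligned
    # with its label.
    perm = np.random.RandomState(seed).permutation(len(images))
    images, labels = images[perm], labels[perm]
    return (images[:train_size], images[train_size:],
            labels[:train_size], labels[train_size:])
```

For example, `train_images, test_images, train_labels, test_labels = shuffled_split(images, labels, train_size)` would replace the two slicing lines above.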
Now we can create our model
```
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Dense(512, activation="sigmoid"),
layers.Dense(3, activation="softmax")
])
```
And compile it
```
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in the `[0, 1]` interval. Then we also need to categorically encode the labels.
```
# Reshape data
train_images = train_images.reshape((len(train_images), 3 * image_size * image_size))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((len(test_images), 3 * image_size * image_size))
test_images = test_images.astype('float32') / 255
# Encode to categorical
from tensorflow.keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```
Then we can start the training
```
model.fit(train_images, train_labels, epochs=100, batch_size=128)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
for label, image in list(zip(test_labels, test_images)):
prediction = model.predict(np.array([image,]))
if not prediction.argmax() == label.argmax():
image = image.reshape(48, 48, 3)
fig1, ax1 = plt.subplots()
ax1.axis('off')
plt.title('predicted:' + str(prediction.argmax()))
fig1.add_subplot(111).imshow(image)
```
# ELG Signal-to-Noise Calculations
This notebook provides a standardized calculation of the DESI emission-line galaxy (ELG) signal-to-noise ratio (SNR) figure of merit, for tracking changes to simulation inputs and models. See the accompanying technical note [DESI-3977](https://desi.lbl.gov/DocDB/cgi-bin/private/ShowDocument?docid=3977) for details.
```
%pylab inline
import astropy.table
import astropy.cosmology
import astropy.io.fits as fits
import astropy.units as u
```
Parts of this notebook assume that the [desimodel package](https://github.com/desihub/desimodel) is installed (both its git and svn components) and its `data/` directory is accessible via the `$DESIMODEL` environment variable:
```
import os.path
assert 'DESIMODEL' in os.environ
assert os.path.exists(os.path.join(os.getenv('DESIMODEL'), 'data', 'spectra', 'spec-sky.dat'))
```
Document relevant version numbers:
```
import desimodel
import specsim
print(f'Using desimodel {desimodel.__version__}, specsim {specsim.__version__}')
```
## ELG Spectrum
All peaks are assumed to have the same log-normal rest lineshape specified by a velocity dispersion $\sigma_v$, total flux $F_0$ and central wavelength $\lambda_0$ as:
$$
f(\lambda; F_0, \lambda_0) = \frac{F_0}{\sqrt{2\pi}\,\lambda\,\sigma_{\log}}\, \exp\left[
-\frac{1}{2}\left( \frac{\log_{10}\lambda - \log_{10}\lambda_0}{\sigma_{\log}}\right)^2\right]\; ,
$$
where
$$
\sigma_{\log} \equiv \frac{\sigma_v}{c \log 10} \; .
$$
We use the pretabulated spectrum in `$DESIMODEL/data/spectra/spec-elg-o2flux-8e-17-average-line-ratios.dat` described in Section 2.3 of DESI-867-v1,
which consists of only the following emission lines:
- \[OII](3727A) and \[OII](3730A)
- H-beta
- \[OIII](4960A) and \[OIII](5008A)
- H-alpha
Note that H-alpha is not observable for $z > 0.5$, which is always the case for DESI ELG targets.
Continuum is omitted since we are primarily interested in how well the \[OII] doublet can be identified and measured.
All lines are assumed to have the same velocity dispersion of 70 km/s.
```
elg_spec = astropy.table.Table.read(
os.path.join(os.environ['DESIMODEL'], 'data', 'spectra', 'spec-elg-o2flux-8e-17-average-line-ratios.dat'),
format='ascii')
elg_wlen0 = elg_spec['col1'].data
elg_flux0 = 1e-17 * elg_spec['col2'].data
```
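For reference, the log-normal lineshape defined above can be evaluated directly; a minimal sketch (the function name and defaults are ours, with the 70 km/s default matching the dispersion quoted above):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def line_profile(wlen, flux_total, wlen0, sigma_v=70.0):
    """Evaluate f(lambda; F0, lambda0) with wlen, wlen0 in Angstroms,
    flux_total in erg / (s cm2) and sigma_v in km/s."""
    sigma_log = sigma_v / (C_KMS * np.log(10))
    arg = (np.log10(wlen) - np.log10(wlen0)) / sigma_log
    return flux_total / (np.sqrt(2 * np.pi) * wlen * sigma_log) * np.exp(-0.5 * arg ** 2)
```

For example, `line_profile(np.linspace(3710, 3745, 500), 8e-17, 3727.0)` evaluates the blue component of the [OII] doublet.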
## DESI ELG Sample
Look up the expected redshift distribution of DESI ELG targets from `$DESIMODEL/data/targets/nz_elg.dat`. Note that the [OII] doublet falls off the spectrograph around z = 1.63.
```
def get_elg_nz():
# Read the nz file from $DESIMODEL.
full_name = os.path.join(os.environ['DESIMODEL'], 'data', 'targets', 'nz_elg.dat')
table = astropy.table.Table.read(full_name, format='ascii')
# Extract the n(z) histogram into numpy arrays.
z_lo, z_hi = table['col1'], table['col2']
assert np.all(z_hi[:-1] == z_lo[1:])
z_edge = np.hstack((z_lo, [z_hi[-1]]))
nz = table['col3']
# Trim to bins where n(z) > 0.
non_zero = np.where(nz > 0)[0]
lo, hi = non_zero[0], non_zero[-1] + 1
nz = nz[lo: hi]
z_edge = z_edge[lo: hi + 1]
return nz, z_edge
elg_nz, elg_z_edge = get_elg_nz()
```
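A quick sanity check on the z ≈ 1.63 cutoff quoted above (the ~9800 Å red limit used here is an assumption, not a value read from `$DESIMODEL`):

```python
OII_REST = 3727.0   # [OII] doublet rest wavelength in Angstroms
WLEN_MAX = 9800.0   # assumed red end of the spectrograph coverage
z_max = WLEN_MAX / OII_REST - 1.0
print(f'[OII] leaves the spectrograph near z = {z_max:.2f}')
```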
Calculate n(z) weights corresponding to an array of ELG redshifts:
```
def get_nz_weight(z):
"""Calculate n(z) weights corresponding to input z values.
"""
nz = np.zeros_like(z)
idx = np.digitize(z, elg_z_edge)
sel = (idx > 0) & (idx <= len(elg_nz))
nz[sel] = elg_nz[idx[sel] - 1]
return nz
```
Sample random redshifts from n(z):
```
def generate_elg_z(n=100, seed=123):
cdf = np.cumsum(elg_nz)
cdf = np.hstack(([0], cdf / cdf[-1]))
gen = np.random.RandomState(seed)
return np.interp(gen.rand(n), cdf, elg_z_edge)
z=generate_elg_z(n=20000)
plt.hist(z, bins=elg_z_edge, histtype='stepfilled')
plt.xlim(elg_z_edge[0], elg_z_edge[-1])
print(f'Mean ELG redshift is {np.mean(z):.3f}')
```
Define a background cosmology for the angular-diameter distance used to scale galaxy angular sizes:
```
LCDM = astropy.cosmology.Planck15
```
Generate random ELG profiles for each target. The mean half-light radius is 0.45" and scales with redshift.
```
def generate_elg_profiles(z, seed=123, verbose=False):
"""ELG profiles are assumed to be disk (Sersic n=1) only.
"""
gen = np.random.RandomState(seed)
nsrc = len(z)
source_fraction = np.zeros((nsrc, 2))
source_half_light_radius = np.zeros((nsrc, 2))
source_minor_major_axis_ratio = np.zeros((nsrc, 2))
source_position_angle = 360. * gen.normal(size=(nsrc, 2))
# Precompute cosmology scale factors.
angscale = (
LCDM.angular_diameter_distance(1.0) /
LCDM.angular_diameter_distance(z)).to(1).value
if verbose:
print(f'mean n(z) DA(1.0)/DA(z) = {np.mean(angscale):.3f}')
# Disk only with random size and ellipticity.
source_fraction[:, 0] = 1.
source_half_light_radius[:, 0] = 0.427 * np.exp(0.25 * gen.normal(size=nsrc)) * angscale
source_minor_major_axis_ratio[:, 0] = np.minimum(0.99, 0.50 * np.exp(0.15 * gen.normal(size=nsrc)))
if verbose:
print(f'mean HLR = {np.mean(source_half_light_radius[:, 0]):.3f}"')
return dict(
source_fraction=source_fraction,
source_half_light_radius=source_half_light_radius,
source_minor_major_axis_ratio=source_minor_major_axis_ratio,
source_position_angle=source_position_angle)
```
Diagnostic plot showing the assumed ELG population (Figure 1 of DESI-3977):
```
def plot_elg_profiles(save=None):
z = generate_elg_z(50000)
sources = generate_elg_profiles(z, verbose=True)
fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax = ax.flatten()
ax[0].hist(sources['source_minor_major_axis_ratio'][:, 0], range=(0,1), bins=25)
ax[0].set_xlabel('ELG minor/major axis ratio')
ax[0].set_xlim(0, 1)
ax[1].hist(z, bins=np.arange(0.6, 1.8, 0.1))
ax[1].set_xlim(0.6, 1.7)
ax[1].set_xlabel('ELG redshift')
ax[2].hist(sources['source_half_light_radius'][:, 0], bins=25)
ax[2].set_xlabel('ELG half-light radius [arcsec]')
ax[2].set_xlim(0.1, 1.1)
ax[3].scatter(z, sources['source_half_light_radius'][:, 0], s=0.5, alpha=0.5)
ax[3].set_xlabel('ELG redshift')
ax[3].set_ylabel('ELG half-light radius [arcsec]')
ax[3].set_xlim(0.6, 1.7)
ax[3].set_ylim(0.1, 1.1)
plt.tight_layout()
if save:
plt.savefig(save)
plot_elg_profiles(save='elg-sample.png')
```
## Simulated SNR
Given an initialized simulator object, step through different redshifts and calculate the SNR recorded by all fibers for a fixed ELG spectrum. Save the results to a FITS file that can be used by `plot_elg_snr()`.
```
def calculate_elg_snr(simulator, save, description,
z1=0.6, z2=1.65, dz=0.002, zref=1.20,
seed=123, wlen=elg_wlen0, flux=elg_flux0):
"""Calculate the ELG [OII] SNR as a function of redshift.
Parameters
----------
simulator : specsim.simulator.Simulator
Instance of an initialized Simulator object to use. Each fiber will
be simulated independently to study variations across the focal plane.
save : str
Filename to use for saving FITS results.
description : str
Short description for the saved file header, also used for plots later.
z1 : float
Minimum ELG redshift to calculate.
z2 : float
Maximum ELG redshift to calculate.
dz : float
Spacing of equally spaced grid to cover [z1, z2]. z2 will be increased
by up to dz if necessary.
zref : float
Reference redshift used to save signal, noise and fiberloss. Must be
on the grid specified by (z1, z2, dz).
seed : int or None
Random seed used to generate fiber positions and galaxy profiles.
wlen : array
1D array of N rest wavelengths in Angstroms.
flux : array
1D array of N corresponding rest fluxes in erg / (s cm2 Angstrom).
"""
zooms = (3715., 3742.), (4850., 4875.), (4950., 5020.)
gen = np.random.RandomState(seed=seed)
# Generate random focal plane (x,y) positions for each fiber in mm units.
nfibers = simulator.num_fibers
focal_r = np.sqrt(gen.uniform(size=nfibers)) * simulator.instrument.field_radius
phi = 2 * np.pi * gen.uniform(size=nfibers)
xy = (np.vstack([np.cos(phi), np.sin(phi)]) * focal_r).T
# Build the grid of redshifts to simulate.
nz = int(np.ceil((z2 - z1) / dz)) + 1
z2 = z1 + (nz - 1) * dz
z_grid = np.linspace(z1, z2, nz)
iref = np.argmin(np.abs(z_grid - zref))
assert np.abs(zref - z_grid[iref]) < 1e-5, 'zref not in z_grid'
snr2 = np.zeros((4, nz, simulator.num_fibers))
# Initialize the results.
hdus = fits.HDUList()
hdus.append(fits.PrimaryHDU(
header=fits.Header({'SEED': seed, 'NFIBERS': nfibers, 'DESCRIBE': description})))
# Zero-pad the input spectrum if necessary.
wlo = 0.99 * desi.simulated['wavelength'][0] / (1 + z2)
if wlen[0] > wlo:
wlen = np.hstack([[wlo], wlen])
flux = np.hstack([[0.], flux])
# Simulate the specified rest-frame flux.
simulator.source.update_in(
'ELG [OII] doublet', 'elg',
wlen * u.Angstrom, flux * u.erg/(u.s * u.cm**2 * u.Angstrom), z_in=0.)
# Simulate each redshift.
for i, z in enumerate(z_grid):
# Redshift the ELG spectrum.
simulator.source.update_out(z_out=z)
source_flux = np.tile(simulator.source.flux_out, [nfibers, 1])
# Generate source profiles for each target at this redshift. Since the seed is
# fixed, only the redshift scaling of the HLR will change.
sources = generate_elg_profiles(np.full(nfibers, z), seed=seed)
# Simulate each source.
simulator.simulate(source_fluxes=source_flux, focal_positions=xy, **sources)
# Calculate the quadrature sum of SNR in each camera, by fiber.
for output in simulator.camera_output:
rest_wlen = output['wavelength'] / (1 + z)
# Loop over emission lines.
for j, (lo, hi) in enumerate(zooms):
sel = (rest_wlen >= lo) & (rest_wlen < hi)
if not np.any(sel):
continue
# Sum SNR2 over pixels.
pixel_snr2 = output['num_source_electrons'][sel] ** 2 / output['variance_electrons'][sel]
snr2[j, i] += pixel_snr2.sum(axis=0)
if i == iref:
# Save the fiberloss fraction and total variance tabulated on the simulation grid.
table = astropy.table.Table(meta={'ZREF': zref})
sim = simulator.simulated
table['WLEN'] = sim['wavelength'].data
table['FLUX'] = sim['source_flux'].data
table['FIBERLOSS'] = sim['fiberloss'].data
table['NSRC'] = sim['num_source_electrons_b'] + sim['num_source_electrons_r'] + sim['num_source_electrons_z']
table['SKYVAR'] = sim['num_sky_electrons_b'] + sim['num_sky_electrons_r'] + sim['num_sky_electrons_z']
table['NOISEVAR'] = (
sim['read_noise_electrons_b'] ** 2 + sim['read_noise_electrons_r'] ** 2 + sim['read_noise_electrons_z'] ** 2 +
sim['num_dark_electrons_b'] + sim['num_dark_electrons_r'] + sim['num_dark_electrons_z'])
hdus.append(fits.table_to_hdu(table))
hdus[-1].name = 'REF'
# Calculate the n(z) weighted mean SNR for [OII], using the median over fibers at each redshift.
snr_oii = np.median(np.sqrt(snr2[0]), axis=-1)
wgt = get_nz_weight(z_grid)
snr_oii_eff = np.sum(snr_oii * wgt) / np.sum(wgt)
print(f'n(z)-weighted effective [OII] SNR = {snr_oii_eff:.3f}')
# Save the SNR vs redshift arrays for each emission line.
table = astropy.table.Table(meta={'SNREFF': snr_oii_eff})
table['Z'] = z_grid
table['ZWGT'] = wgt
table['SNR_OII'] = np.sqrt(snr2[0])
table['SNR_HBETA'] = np.sqrt(snr2[1])
table['SNR_OIII'] = np.sqrt(snr2[2])
hdus.append(fits.table_to_hdu(table))
hdus[-1].name = 'SNR'
hdus.writeto(save, overwrite=True)
```
Calculate flux limits in bins of redshift, to compare with SRD L3.1.3:
```
def get_flux_limits(z, snr, nominal_flux=8., nominal_snr=7., ax=None):
fluxlim = np.zeros_like(snr)
nonzero = snr > 0
fluxlim[nonzero] = nominal_flux * (nominal_snr / snr[nonzero])
bins = np.linspace(0.6, 1.6, 6)
nlim = len(bins) - 1
medians = np.empty(nlim)
for i in range(nlim):
sel = (z >= bins[i]) & (z < bins[i + 1])
medians[i] = np.median(fluxlim[sel])
if ax is not None:
zmid = 0.5 * (bins[1:] + bins[:-1])
dz = 0.5 * (bins[1] - bins[0])
ax.errorbar(zmid, medians, xerr=dz, color='b', fmt='o', zorder=10, capsize=3)
return fluxlim, medians
```
Plot a summary of the results saved by `calculate_elg_snr()`. Shaded bands show the 5-95 percentile range, with the median drawn as a solid curve. The fiberloss in the lower plot is calculated at the redshift `zref` specified in `calculate_elg_snr()` (since the ELG size distribution is redshift dependent).
```
def plot_elg_snr(name, save=True):
"""Plot a summary of results saved by calculate_elg_snr().
Parameters
----------
name : str
Name of the FITS file saved by calculate_elg_snr().
"""
hdus = fits.open(name)
hdr = hdus[0].header
nfibers = hdr['NFIBERS']
description = hdr['DESCRIBE']
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plt.suptitle(description, fontsize=14)
snr_table = astropy.table.Table.read(hdus['SNR'])
snr_oii_eff = snr_table.meta['SNREFF']
ref_table = astropy.table.Table.read(hdus['REF'])
zref = ref_table.meta['ZREF']
ax = axes[0]
color = 'rgb'
labels = '[OII]', 'H$\\beta$', '[OIII]'
z_grid = snr_table['Z'].data
for i, tag in enumerate(('SNR_OII', 'SNR_HBETA', 'SNR_OIII')):
snr = snr_table[tag].data
snr_q = np.percentile(snr, (5, 50, 95), axis=-1)
ax.fill_between(z_grid, snr_q[0], snr_q[2], color=color[i], alpha=0.25, lw=0)
ax.plot(z_grid, snr_q[1], c=color[i], ls='-', label=labels[i])
ax.plot([], [], 'k:', label='n(z)')
ax.legend(ncol=4)
ax.set_xlabel('ELG redshift')
ax.set_ylabel(f'Total signal-to-noise ratio')
ax.axhline(7, c='k', ls='--')
rhs = ax.twinx()
rhs.plot(z_grid, snr_table['ZWGT'], 'k:')
rhs.set_yticks([])
ax.set_xlim(z_grid[0], z_grid[-1])
ax.set_ylim(0, 12)
rhs.set_ylim(0, None)
ax.text(0.02, 0.03, f'n(z)-wgtd [OII] SNR={snr_oii_eff:.3f}',
fontsize=12, transform=ax.transAxes)
# Calculate the median [OII] flux limits.
_, fluxlim = get_flux_limits(z_grid, np.median(snr_table['SNR_OII'], axis=-1))
# Print latex-format results for DESI-3977 Table 2.
print(f'&{snr_oii_eff:7.3f}', end='')
for m in fluxlim:
print(f' &{m:5.1f}', end='')
print(' \\\\')
ax = axes[1]
wlen = ref_table['WLEN'].data
dwlen = wlen[1] - wlen[0]
sky_q = np.percentile(ref_table['SKYVAR'].data, (5, 50, 95), axis=-1)
sky_q[sky_q > 0] = 1 / sky_q[sky_q > 0]
ax.fill_between(wlen, sky_q[0], sky_q[2], color='b', alpha=0.5, lw=0)
ax.plot([], [], 'b-', label='sky ivar')
ax.plot(wlen, sky_q[1], 'b.', ms=0.25, alpha=0.5)
noise_q = np.percentile(ref_table['NOISEVAR'].data, (5, 50, 95), axis=-1)
noise_q[noise_q > 0] = 1 / noise_q[noise_q > 0]
ax.fill_between(wlen, noise_q[0], noise_q[2], color='r', alpha=0.25, lw=0)
ax.plot(wlen, noise_q[1], c='r', ls='-', label='noise ivar')
floss_q = np.percentile(ref_table['FIBERLOSS'].data, (5, 50, 95), axis=-1)
ax.plot([], [], 'k-', label='fiberloss')
rhs = ax.twinx()
rhs.fill_between(wlen, floss_q[0], floss_q[2], color='k', alpha=0.25, lw=0)
rhs.plot(wlen, floss_q[1], 'k-')
rhs.set_ylim(0.2, 0.6)
rhs.yaxis.set_major_locator(matplotlib.ticker.MultipleLocator(0.1))
rhs.set_ylabel('Fiberloss')
ax.set_xlabel('Wavelength [A]')
ax.set_ylabel(f'Inverse Variance / {dwlen:.1f}A')
ax.set_xlim(wlen[0], wlen[-1])
ax.set_ylim(0, 0.25)
ax.legend(ncol=3)
plt.subplots_adjust(wspace=0.1, top=0.95, bottom=0.08, left=0.10, right=0.92)
if save:
base, _ = os.path.splitext(name)
plot_name = base + '.png'
plt.savefig(plot_name)
print(f'Saved {plot_name}')
```
## Examples
Demonstrate this calculation for the baseline DESI configuration with 100 fibers:
```
import specsim.simulator
desi = specsim.simulator.Simulator('desi', num_fibers=100)
```
**NOTE: the next cell takes about 15 minutes to run.**
```
%time calculate_elg_snr(desi, save='desimodel-0.9.6.fits', description='desimodel 0.9.6')
```
Plot the results (Figure 2 of DESI-3977):
```
plot_elg_snr('desimodel-0.9.6.fits')
```
Check that the results with GalSim are compatible with those using the (default) fastsim mode of fiberloss calculations:
```
desi.instrument.fiberloss_method = 'galsim'
```
**NOTE: the next cell takes about 30 minutes to run.**
```
%time calculate_elg_snr(desi, save='desimodel-0.9.6-galsim.fits', description='desimodel 0.9.6 (galsim)')
plot_elg_snr('desimodel-0.9.6-galsim.fits')
```
This comparison shows that the "fastsim" fiberloss fractions are about 1% (absolute) higher than "galsim", leading to a slight increase in signal and therefore SNR. The reason for this increase is that "fastsim" assumes a fixed minor / major axis ratio of 0.7 while our ELG population has a distribution of ratios with a median of 0.5. The weighted [OII] SNR values are 6.764 (fastsim) and 6.572 (galsim), which agree at the few percent level.
We use GalSim fiberloss calculations consistently in Figure 2 and Table 2 of DESI-3977.
### CDR Comparison
Compare with the CDR forecasts based on desimodel 0.3.1 and documented in DESI-867, using data from this [FITS file](https://desi.lbl.gov/svn/docs/technotes/spectro/elg-snr/trunk/data/elg_snr2_desimodel-0-3-1.fits):
```
desi867 = astropy.table.Table.read('elg_snr2_desimodel-0-3-1.fits', hdu=1)
```
Check that we can reproduce the figures from DESI-867:
```
def desi_867_fig1():
z = desi867['Z']
snr_all = np.sqrt(desi867['SNR2'])
snr_oii = np.sqrt(desi867['SNR2_OII'])
fig = plt.figure(figsize=(6, 5))
plt.plot(z, snr_all, 'k-', lw=1, label='all lines')
plt.plot(z, snr_oii, 'r-', lw=1, label='[OII] only')
plt.legend(fontsize='large')
plt.axhline(7, c='b', ls='--')
plt.ylim(0, 22)
plt.xlim(z[0], z[-1])
plt.xticks([0.5, 1.0, 1.5])
plt.xlabel('Redshift')
plt.ylabel('S/N')
desi_867_fig1()
def desi_867_fig2():
z = desi867['Z']
snr_all = np.sqrt(desi867['SNR2'])
snr_oii = np.sqrt(desi867['SNR2_OII'])
flux_limit_all, _ = get_flux_limits(z, snr_all)
flux_limit_oii, medians = get_flux_limits(z, snr_oii)
fig = plt.figure(figsize=(6, 5))
plt.plot(z, flux_limit_all, 'k-', lw=1, label='all lines')
plt.plot(z, flux_limit_oii, 'r-', lw=1, label='[OII] only')
plt.legend(loc='upper right', fontsize='large')
_, _ = get_flux_limits(z, snr_oii, ax=plt.gca())
plt.ylim(0, 40)
plt.xlim(z[0], z[-1])
plt.xticks([0.5, 1.0, 1.5])
plt.xlabel('Redshift')
plt.ylabel('[OII] Flux limit ($10^{-17}$ ergs cm$^{-2}$ s$^{-1}$)')
desi_867_fig2()
```
Print a summary for Table 2 of DESI-3977:
```
def cdr_summary():
z = desi867['Z']
snr_oii = np.sqrt(desi867['SNR2_OII'])
wgt = get_nz_weight(z)
snreff = np.sum(wgt * snr_oii) / wgt.sum()
_, medians = get_flux_limits(z, snr_oii)
print(f'0.3.1 (CDR) & {snreff:6.3f}', end='')
for m in medians:
print(f' &{m:5.1f}', end='')
print(' \\\\')
cdr_summary()
```
```
#hide
#default_exp cli
from nbdev.showdoc import show_doc
```
# Command line functions
> Console commands added by the nbdev library
```
#export
from nbdev.imports import *
from chisel_nbdev.export_scala import *
from chisel_nbdev.sync_scala import *
from nbdev.merge import *
from chisel_nbdev.export_scala2html import *
from chisel_nbdev.clean_scala import *
from chisel_nbdev.test_scala import *
from fastcore.script import *
```
`nbdev` comes with the following commands. To use any of them, you must be inside your project folder (or one of its subfolders): they search for the `settings.ini` recursively in the parent directories and need access to it to work. Their names all begin with `chisel_nbdev`, so you can easily get a list with tab completion.
- `chisel_nbdev_build_docs` builds the documentation from the notebooks
- `chisel_nbdev_build_lib` builds the library from the notebooks
- `chisel_nbdev_bump_version` increments version in `settings.py` by one
- `chisel_nbdev_clean_nbs` removes all superfluous metadata from the notebooks, to avoid merge conflicts
- `chisel_nbdev_detach` exports cell attachments to `dest` and updates references
- `chisel_nbdev_diff_nbs` gives you the diff between the notebooks and the exported library
- `chisel_nbdev_fix_merge` will fix merge conflicts in a notebook file
- `chisel_nbdev_install_git_hooks` installs the git hooks that use the last two command automatically on each commit/merge
- `chisel_nbdev_nb2md` converts a notebook to a markdown file
- `chisel_nbdev_new` creates a new nbdev project
- `chisel_nbdev_read_nbs` reads all notebooks to make sure none are broken
- `chisel_nbdev_test_nbs` runs tests in notebooks
- `chisel_nbdev_trust_nbs` trusts all notebooks (so that the HTML content is shown)
- `chisel_nbdev_update_lib` propagates any change in the library back to the notebooks
## Navigating from notebooks to script and back
```
show_doc(nbdev_build_lib)
```
By default (`fname` left to `None`), the whole library is built from the notebooks in the `lib_folder` set in your `settings.ini`.
```
show_doc(nbdev_update_lib)
```
By default (`fname` left to `None`), the whole library is treated. Note that this tool is only designed for small changes such as typos or small bug fixes. You can't add new cells to a notebook from the library.
```
show_doc(nbdev_diff_nbs)
```
## Running tests
```
show_doc(nbdev_test_nbs)
```
By default (`fname` left to `None`), the whole library is tested from the notebooks in the `lib_folder` set in your `settings.ini`.
## Building documentation
```
show_doc(nbdev_build_docs)
```
By default (`fname` left to `None`), the whole documentation is built from the notebooks in the `lib_folder` set in your `settings.ini`, only converting the ones that have been modified since their corresponding HTML was last touched, unless you pass `force_all=True`. The index is also converted to make the README file, unless you pass along `mk_readme=False`.
```
show_doc(nbdev_nb2md)
show_doc(nbdev_detach)
```
## Other utils
```
show_doc(nbdev_read_nbs)
```
By default (`fname` left to `None`), all the notebooks in `lib_folder` are checked.
```
show_doc(nbdev_trust_nbs)
```
By default (`fname` left to `None`), all the notebooks in `lib_folder` are trusted. To speed things up, only the ones touched since the last time this command was run are trusted, unless you pass along `force_all=True`.
```
show_doc(nbdev_fix_merge)
```
When you have merge conflicts after a `git pull`, the notebook file will be broken and won't open in jupyter notebook anymore. This command fixes this by changing the notebook back to a proper json file and adding markdown cells to signal the conflicts: you just have to open that notebook again and look for `>>>>>>>` to see those conflicts and manually fix them. The old broken file is copied with a `.ipynb.bak` extension, so it is still accessible in case the merge wasn't successful.
Moreover, if `fast=True`, conflicts in outputs and metadata will automatically be fixed by using the local version if `trust_us=True`, or the remote one if `trust_us=False`. With this option, it's very likely you won't have anything to do, unless there is a real conflict.
```
#export
def bump_version(version, part=2):
version = version.split('.')
version[part] = str(int(version[part]) + 1)
for i in range(part+1, 3): version[i] = '0'
return '.'.join(version)
test_eq(bump_version('0.1.1' ), '0.1.2')
test_eq(bump_version('0.1.1', 1), '0.2.0')
#export
@call_parse
def nbdev_bump_version(part:Param("Part of version to bump", int)=2):
"Increment version in `settings.py` by one"
cfg = Config()
print(f'Old version: {cfg.version}')
cfg.d['version'] = bump_version(Config().version, part)
cfg.save()
update_version()
print(f'New version: {cfg.version}')
```
## Git hooks
```
#export
@call_parse
def nbdev_install_git_hooks():
"Install git hooks to clean/trust notebooks automatically"
try: path = Config().config_file.parent
except: path = Path.cwd()
hook_path = path/'.git'/'hooks'
fn = hook_path/'post-merge'
hook_path.mkdir(parents=True, exist_ok=True)
#Trust notebooks after merge
fn.write_text("#!/bin/bash\necho 'Trusting notebooks'\nchisel_nbdev_trust_nbs")
os.chmod(fn, os.stat(fn).st_mode | stat.S_IEXEC)
#Clean notebooks on commit/diff
(path/'.gitconfig').write_text("""# Generated by chisel_nbdev_install_git_hooks
#
# If you need to disable this instrumentation do:
# git config --local --unset include.path
#
# To restore the filter
# git config --local include.path .gitconfig
#
# If you see notebooks not stripped, checked the filters are applied in .gitattributes
#
[filter "clean-nbs"]
clean = chisel_nbdev_clean_nbs --read_input_stream True
smudge = cat
required = true
[diff "ipynb"]
textconv = chisel_nbdev_clean_nbs --disp True --fname
""")
cmd = "git config --local include.path ../.gitconfig"
print(f"Executing: {cmd}")
run(cmd)
print("Success: hooks are installed and repo's .gitconfig is now trusted")
try: nb_path = Config().path("nbs_path")
except: nb_path = Path.cwd()
(nb_path/'.gitattributes').write_text("**/*.ipynb filter=clean-nbs\n**/*.ipynb diff=ipynb\n")
```
This command installs git hooks to make sure notebooks are cleaned before you commit them to GitHub and automatically trusted at each merge. To be more specific, this creates:
- an executable `.git/hooks/post-merge` file that contains the command `chisel_nbdev_trust_nbs`
- a `.gitconfig` file that uses `chisel_nbdev_clean_nbs` as a filter/diff on all notebook files inside `nbs_folder`, and a `.gitattributes` file generated in this folder (copy this file into other folders where you might have notebooks you want cleaned as well)
## Starting a new project
```
#export
_template_git_repo = "https://github.com/fastai/nbdev_template.git"
#export
import tarfile
#export
def extract_tgz(url, dest='.'):
with urlopen(url) as u: tarfile.open(mode='r:gz', fileobj=u).extractall(dest)
#export
@call_parse
def nbdev_new():
"Create a new nbdev project from the current git repo"
url = run('git config --get remote.origin.url')
if not url: raise Exception('This does not appear to be a cloned git directory with a remote')
author = run('git config --get user.name').strip()
email = run('git config --get user.email').strip()
if not (author and email): raise Exception('User name and email not configured in git')
# download and untar template, and optionally notebooks
FILES_URL = 'https://files.fast.ai/files/'
extract_tgz(f'{FILES_URL}nbdev_files.tgz')
path = Path()
for o in (path/'nbdev_files').ls():
if not Path(f'./{o.name}').exists(): shutil.move(str(o), './')
shutil.rmtree('nbdev_files')
if first(path.glob('*.ipynb')): print("00_core.ipynb not downloaded since a notebook already exists.")
else: urlsave(f'{FILES_URL}00_core.ipynb')
if not (path/'index.ipynb').exists(): urlsave(f'{FILES_URL}index.ipynb')
# auto-config settings.ini from git
settings_path = Path('settings.ini')
settings = settings_path.read_text()
owner,repo = repo_details(url)
branch = run('git symbolic-ref refs/remotes/origin/HEAD').strip().split('/')[-1]
settings = settings.format(lib_name=repo, user=owner, author=author, author_email=email, branch=branch)
settings_path.write_text(settings)
nbdev_install_git_hooks()
if not (path/'LICENSE').exists() and not (path/'LICENSE.md').exists():
warnings.warn('No LICENSE file found - you will need one if you will create pypi or conda packages.')
```
`nbdev_new` is a command line tool that creates a new nbdev project from the current directory, which must be a cloned git repo.
After you run `nbdev_new`, please check the contents of `settings.ini` look good, and then run `nbdev_build_lib`.
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/intro/pandas_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Manipulating and visualizing tabular data using pandas
[Pandas](https://pandas.pydata.org/) is a widely used Python library for storing and manipulating tabular data, where feature columns may be of different types (e.g., scalar, ordinal, categorical, text). We give some examples of how to use it below. We also illustrate some ways to plot data using matplotlib.
For very large datasets, you might want to use [modin](https://github.com/modin-project/modin), which provides the same pandas API but scales to multiple cores, by using [dask](https://github.com/dask/dask) or [ray](https://github.com/ray-project/ray).
### Install necessary libraries
```
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns;
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option('precision', 2) # 2 decimal places
pd.set_option('display.max_rows', 20)
pd.set_option('display.max_columns', 30)
pd.set_option('display.width', 100) # wide windows
```
### Auto-mpg dataset <a class="anchor" id="EDA-autompg"></a>
```
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Year', 'Origin', 'Name']
df = pd.read_csv(url, names=column_names, sep='\s+', na_values="?")
# The last column (name) is a unique id for the car, so we drop it
df = df.drop(columns=['Name'])
df.info()
```
We notice that there are only 392 horsepower rows, but 398 of the others.
This is because the HP column has 6 **missing values** (also called NA, or
not available).
There are 3 main ways to deal with this:
- Drop the rows with any missing values using dropna()
- Drop any columns with any missing values using drop()
- Replace the missing values with some other value (e.g., the median) using fillna(). (This is called missing value imputation.)
For simplicity, we adopt the first approach.
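As a toy illustration of the three options (hypothetical values, not the auto-mpg data):

```python
import numpy as np
import pandas as pd

# Tiny frame with one missing value per column.
toy = pd.DataFrame({'MPG': [18.0, 15.0, np.nan],
                    'Horsepower': [130.0, np.nan, 150.0]})

dropped = toy.dropna()                    # 1) drop rows with any NA
no_hp = toy.drop(columns=['Horsepower'])  # 2) drop an NA-bearing column
imputed = toy.fillna(toy.median())        # 3) impute with the column medians

print(len(dropped), imputed['Horsepower'].tolist())  # -> 1 [130.0, 140.0, 150.0]
```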
```
# Ensure same number of rows for all features.
df = df.dropna()
df.info()
# Summary statistics
df.describe(include='all')
# Convert Origin feature from int to categorical factor
df['Origin'] = df.Origin.replace([1,2,3],['USA','Europe','Japan'])
df['Origin'] = df['Origin'].astype('category')
# Let us check the categories (levels)
print(df['Origin'].cat.categories)
# Let us check the datatypes of all the features
print(df.dtypes)
# Let us inspect the data. We see meaningful names for Origin.
df.tail()
# Create latex table from first 5 rows
tbl = df[-5:].to_latex(index=False, escape=False)
print(tbl)
# Plot mpg distribution for cars from different countries of origin
data = pd.concat( [df['MPG'], df['Origin']], axis=1)
fig, ax = plt.subplots()
ax = sns.boxplot(x='Origin', y='MPG', data=data)
ax.axhline(data.MPG.mean(), color='r', linestyle='dashed', linewidth=2)
#plt.savefig(os.path.join(figdir, 'auto-mpg-origin-boxplot.pdf'))
plt.show()
# Plot mpg distribution for cars from different years
data = pd.concat( [df['MPG'], df['Year']], axis=1)
fig, ax = plt.subplots()
ax = sns.boxplot(x='Year', y='MPG', data=data)
ax.axhline(data.MPG.mean(), color='r', linestyle='dashed', linewidth=2)
#plt.savefig(os.path.join(figdir, 'auto-mpg-year-boxplot.pdf'))
plt.show()
```
### Iris dataset <a class="anchor" id="EDA-iris"></a>
```
# Get the iris dataset and look at it
from sklearn.datasets import load_iris
iris = load_iris()
# show attributes of this object
print(dir(iris))
# Extract numpy arrays
X = iris.data
y = iris.target
print(np.shape(X)) # (150, 4)
print(np.c_[X[0:3,:], y[0:3]]) # concatenate columns
# The data is sorted by class. Let's shuffle the rows.
N = np.shape(X)[0]
rng = np.random.RandomState(42)
perm = rng.permutation(N)
X = X[perm]
y = y[perm]
print(np.c_[X[0:3,:], y[0:3]])
# Convert to pandas dataframe
df = pd.DataFrame(data=X, columns=['sl', 'sw', 'pl', 'pw'])
# create column for labels
df['label'] = pd.Series(iris.target_names[y], dtype='category')
# Summary statistics
df.describe(include='all')
# Peek at the data
df.head()
# Create latex table from first 5 rows
tbl = df[:6].to_latex(index=False, escape=False)
print(tbl)
# 2d scatterplot
#https://seaborn.pydata.org/generated/seaborn.pairplot.html
import seaborn as sns;
sns.set(style="ticks", color_codes=True)
# Make a dataframe with nicer labels for printing
#iris_df = sns.load_dataset("iris")
iris_df = df.copy()
iris_df.columns = iris['feature_names'] + ['label']
g = sns.pairplot(iris_df, vars = iris_df.columns[0:3] , hue="label")
#save_fig("iris-scatterplot.pdf")
plt.show()
```
### Boston housing dataset <a class="anchor" id="EDA-boston"></a>
```
# Load data (creates numpy arrays)
boston = sklearn.datasets.load_boston()
X = boston.data
y = boston.target
# Convert to Pandas format
df = pd.DataFrame(X)
df.columns = boston.feature_names
df['MEDV'] = y.tolist()
df.describe()
# plot marginal histograms of each column (13 features, 1 response)
plt.figure()
df.hist()
plt.show()
# scatter plot of response vs each feature
nrows = 3; ncols = 4;
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, sharey=True, figsize=[15, 10])
plt.tight_layout()
plt.clf()
for i in range(0,12):
plt.subplot(nrows, ncols, i+1)
plt.scatter(X[:,i], y)
plt.xlabel(boston.feature_names[i])
plt.ylabel("house price")
plt.grid()
#save_fig("boston-housing-scatter.pdf")
plt.show()
```
# Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
<img src='notebook_ims/CNN_all_layers.png' height=50% width=50% />
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
```
### Define convolutional and pooling layers
You've seen how to define a convolutional layer; next up is a:
* Pooling layer
In the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!
A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the size of the patch by a factor of 4. Only the maximum pixel values in 2x2 remain in the new, pooled output.
<img src='notebook_ims/maxpooling_ex.png' height=50% width=50% />
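Before the PyTorch version below, the 2x2, stride-2 pooling described above can be sketched with plain NumPy (a minimal sketch, assuming even height and width):

```python
import numpy as np

def maxpool2x2(patch):
    """2x2 max pooling with stride 2 on a 2D array (H and W assumed even)."""
    h, w = patch.shape
    # Group rows and columns into 2x2 blocks, then take the block maxima.
    return patch.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 0, 2, 3],
              [4, 6, 6, 8],
              [3, 1, 1, 0],
              [1, 2, 2, 4]])
print(maxpool2x2(x))  # 4x4 input -> 2x2 output: [[6 8] [3 4]]
```

Each output pixel is the brightest value of its 2x2 block, which is exactly what the pooling layer below does channel by channel.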
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# defines the convolutional layer, assumes there are 4 grayscale filters
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer after a ReLu activation function is applied.
#### ReLU activation
A ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
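Element-wise, ReLU is simply max(x, 0); a one-line NumPy check, independent of the PyTorch layers used below:

```python
import numpy as np

def relu(x):
    # element-wise max(x, 0): negatives become 0, positives pass through
    return np.maximum(x, 0)

print(relu(np.array([-1.5, 0.0, 2.0])))  # negatives clamp to 0
```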
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
```
### Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
```
# visualize the output of the pooling layer
viz_layer(pooled_layer)
```
```
# default_exp filter
#hide
from nbdev.showdoc import *
#hide
# ensures that changes to the core library are automatically reloaded
%load_ext autoreload
%autoreload 2
```
# 01_06_Pivot_BS_Data
In this notebook, we will pivot the vertical data rows of the BalanceSheet into a wide dataframe.
<br>
Currently, our data looks similar to the table below. Every value is placed on its own row.
| bs_id | company | date | attribute | value |
|-------|------------|------------|-----------|-------|
| 1 | VitaSport | 31.10.2018 | Assets | 100 |
| 1 | VitaSport | 31.10.2018 | Cash | 80 |
| 1 | VitaSport | 31.10.2018 | Other | 20 |
| 2 | VitaSport | 31.10.2019 | Assets | 120 |
| 2 | VitaSport | 31.10.2019 | Cash | 80 |
| 2 | VitaSport | 31.10.2019 | Other | 40 |
| 3 | GloryFood | 31.10.2019 | Assets | 50 |
| 3 | GloryFood | 31.10.2019 | Cash | 5 |
| 3 | GloryFood | 31.10.2019 | Other | 45 |
<br>
But what we would like is one entry per BalanceSheet:
| bs_id | company | date | Assets | Cash | Other |
|-------|-----------|------------|--------|------|-------|
| 1 | VitaSport | 31.10.2018 | 100 | 80 | 20 |
| 2 | VitaSport | 31.10.2019 | 120 | 80 | 40 |
| 3 | GloryFood | 31.10.2019 | 50 | 5 | 45 |
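As a quick illustration outside of Spark, the same reshaping can be sketched with pandas `pivot_table` (rows copied from the tables above; pandas is used here only for illustration, the actual pipeline below stays in Spark):

```python
import pandas as pd

rows = [
    (1, 'VitaSport', '31.10.2018', 'Assets', 100),
    (1, 'VitaSport', '31.10.2018', 'Cash',    80),
    (1, 'VitaSport', '31.10.2018', 'Other',   20),
    (2, 'VitaSport', '31.10.2019', 'Assets', 120),
    (2, 'VitaSport', '31.10.2019', 'Cash',    80),
    (2, 'VitaSport', '31.10.2019', 'Other',   40),
    (3, 'GloryFood', '31.10.2019', 'Assets',  50),
    (3, 'GloryFood', '31.10.2019', 'Cash',     5),
    (3, 'GloryFood', '31.10.2019', 'Other',   45),
]
df = pd.DataFrame(rows, columns=['bs_id', 'company', 'date', 'attribute', 'value'])

# One row per balance sheet; attribute values become columns.
wide = (df.pivot_table(index=['bs_id', 'company', 'date'],
                       columns='attribute', values='value', aggfunc='max')
          .reset_index())
print(wide)
```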
```
# imports
from bfh_cas_bgd_fs2020_sa.core import * # initialze spark
from pathlib import Path
from typing import List, Tuple, Union, Set
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.functions import col, pandas_udf, PandasUDFType
from pyspark.sql.types import *
import pandas as pd
import shutil # provides high level file operations
import time # used to measure execution time
import os
import sys
# folder with our test-dataset which contains only data from two zip files
tst_filtered_folder = "./tmp/filtered/"
tst_bs_folder = "./tmp/bs/"
# folder with the whole dataset as a single parquet
all_filtered_folder = "D:/data/parq_filtered"
all_bs_folder = "D:/data/parq_bs"
```
## Init Spark
```
spark = get_spark_session() # Session anlegen
spark # display the most important information of the session
```
## Load the dataset
Loading the data doesn't really do anything yet; it just prepares the df. But we will use the cache() method to keep the data in memory once it is loaded for the first time.
### Load the test data
```
df_tst = spark.read.parquet(tst_filtered_folder).cache()
```
### Load the whole dataset
```
df_all = spark.read.parquet(all_filtered_folder).cache()
```
### Print all the contained column names
```
_ = [print(x, end=", ") for x in df_all.columns] # print the name of the columns for convenience
```
## Loading data into memory
We just run a count on the test and the full dataset. This ensures that the data is loaded into memory and cached afterwards.
```
start = time.time()
print("Entries in Test: ", "{:_}".format(df_tst.count())) # loading test dataset into memory
duration = time.time() - start
print("duration: ", duration)
start = time.time()
print("Entries in Test: ", "{:_}".format(df_all.count())) # loading all dataset into memory
duration = time.time() - start
print("duration: ", duration)
```
Since we filtered out about two thirds of the entries, loading the reduced dataset completely into memory takes only about 3 minutes.
## Basics
In order to test how to pivot the data, we implement a simple example to verify the principle. Conveniently, there is a built-in pivot function that provides the desired functionality.
```
df_bs_data = spark.createDataFrame( \
[ \
(1,"VitaSport","31.10.2018","Assets",100), \
(1,"VitaSport","31.10.2018","Cash ",80 ), \
(1,"VitaSport","31.10.2018","Other ",20 ), \
(2,"VitaSport","31.10.2019","Assets",120), \
(2,"VitaSport","31.10.2019","Cash ",80 ), \
(2,"VitaSport","31.10.2019","Other ",40 ), \
(3,"GloryFood","31.10.2019","Assets",50 ), \
(3,"GloryFood","31.10.2019","Cash ",5 ), \
(3,"GloryFood","31.10.2019","Other ",45 ) \
], \
("bs_id", "company", "date", "attribute", "value") \
)
df_bs_data.groupby(["company","bs_id","date"]).pivot("attribute").max("value").show()
```
This looks simple. But it is possible that a pivot produces more than one value per cell. In the sample above, we simply used the max aggregate function; however, that might be too simple a solution for real data.
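To see why the choice of aggregate matters, here is a pandas sketch with a hypothetical duplicated attribute (toy values, not real report data):

```python
import pandas as pd

# The same (bs_id, attribute) pair reported twice -- the pivot must aggregate.
dup = pd.DataFrame({'bs_id': [1, 1],
                    'attribute': ['Cash', 'Cash'],
                    'value': [80, 95]})
pivoted = dup.pivot_table(index='bs_id', columns='attribute',
                          values='value', aggfunc='max')
print(pivoted.loc[1, 'Cash'])  # -> 95: the duplicate silently collapses to the max
```

With `max`, conflicting duplicates disappear without warning, which is exactly the risk hinted at above.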
## Pivoting Apple in the Testdata
In a first step, we select only the BalanceSheet data of Apple in the testset, and we expect to find 2 BalanceSheets in there (one for every quarter, since the testset contains two quarters of data).
```
apple_df = df_tst.where("cik == 320193 and stmt = 'BS'").cache()
```
Check how many datarows there are for Apple in the two test quarters.
```
apple_df.count()
apple_vip_cols = apple_df.select(['cik','adsh','period','tag', 'version', 'ddate','uom','value', 'qtrs','fp', 'report','line'])
apple_vip_cols.show()
```
Checking the "ddate" column, we see entries that are in the past (compared to the "period" field, which is the balance sheet date, rounded to the nearest month-end). This is normal, since the balance sheet also contains the figures from a year ago. However, in our case we are only interested in the data for the current period. These are the entries where period and ddate have the same value.
```
apple_bs_per_period = apple_vip_cols.where("period == ddate").orderBy(["cik","adsh","period","report","line"])
apple_bs_per_period.show(32)
```
Comparing the data above with the BalanceSheet in the appropriate report (https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019319000076/a10-qq320196292019.htm) we see that the data and entries match.
Finally, we pivot the data and we expect two rows in the data.
```
apple_pivoted_df = apple_bs_per_period.select(["cik","adsh","period","ddate",'tag','value']) \
.groupby(["cik","adsh","period","ddate"]) \
.pivot("tag",['Assets','AssetsCurrent','OtherAssetsCurrent']).max('value')
apple_pivoted_df.select(["cik","adsh","period",'ddate', 'Assets','AssetsCurrent','OtherAssetsCurrent']).show()
```
The result looks promising.
## Deciding which tags to pivot
In the analysis step we created a sorted list of the tags that are present in BalanceSheets. As was shown there, it doesn't make sense to pivot all 3400 tags. Instead, only a small subset appears often enough in reports to be useful. <br>
We stored the sorted list in the file "bs_tags.csv". Now, we will load it and use the first 100 tags to define which values should be pivoted.
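The ranking itself can be sketched in pandas with `value_counts` (toy tags here; the real ranking was produced in the analysis step and stored in bs_tags.csv):

```python
import pandas as pd

# Hypothetical tag occurrences; in reality one row per (report, tag).
tags = pd.Series(['Assets', 'Cash', 'Assets', 'Other', 'Assets', 'Cash'])

# Most frequent first; keep the top k as the pivot column list.
relevant = tags.value_counts().head(2).index.tolist()
print(relevant)  # -> ['Assets', 'Cash']
```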
```
bs_tags = pd.read_csv("./bs_tags.csv")['tag']
relevant_tags = bs_tags[:100].tolist()
```
## Pivoting
### Pivot the testset
```
df_test_bs_ready = df_tst.where("stmt = 'BS' and period == ddate").select(['cik','ticker','adsh','period','tag', 'ddate','value']).cache()
df_test_bs_ready.count()
df_test_bs_ready.select('tag').distinct().count()
df_test_bs_pivot = df_test_bs_ready.groupby(["cik","adsh","period","ddate"]).pivot("tag",relevant_tags) \
.max('value').cache()
df_test_bs_pivot.count()
df_test_bs_pivot.write.parquet(tst_bs_folder)
```
### Pivot the whole dataset
```
df_all_bs_ready = df_all.where("stmt = 'BS' and period == ddate").select(['cik','ticker','adsh','form','period','tag','value']).cache()
df_all_bs_ready.count()
df_all_bs_ready.select('tag').distinct().count()
df_all_bs_pivot = df_all_bs_ready.groupby(["cik","ticker","adsh","form","period"]).pivot("tag",relevant_tags) \
.max('value').cache()
df_all_bs_pivot.count()
```
In order to have an easy way to look at the data with a text editor, we convert it to a pandas DataFrame and store it as CSV. The resulting file size is 54 MB.
```
df_all_bs_pivot.toPandas().to_csv("bs_data.csv",index=False,header=True)
```
But for further processing, we also store it as parquet, since this will keep that datatype information of the columns.
```
shutil.rmtree(all_bs_folder, ignore_errors=True)
df_all_bs_pivot.repartition(8,col("cik")).write.parquet(all_bs_folder)
```
## Stop the SparkContext
```
spark.stop()
```
```
import numpy as np
import matplotlib.pyplot as plt
import random
#using the monte carlo method to approximate the value of pi
N_array = np.arange(1,5000)
pi_array = []
x_array_points = []
y_array_points = []
for n in N_array:
num_in = 0
num_out = 0
for i in range(n):
x = random.uniform(0,1)
y = random.uniform(0,1)
if (x**2) + (y**2) <= 1:
num_in +=1
else:
num_out += 1
pi_array.append(4*num_in/n)
x_array_points.append(x)
y_array_points.append(y)
plt.style.use('ggplot')
import pandas as pd
dict_ = {"N":N_array , "PI":pi_array }
df = pd.DataFrame(dict_)
df.dropna()
df["PI"].mean()
plt.figure(figsize=(50,10))
plt.plot(N_array , pi_array)
plt.title("pi vs n")
x = np.arange(0,1,0.01)
y = np.sqrt(1-x**2)
plt.figure(figsize=(30,10))
plt.plot(x,y,'b')
plt.scatter(x_array_points,y_array_points)
plt.title("random points lying inside and outside the quarter circle")
import random
'''
finding area of a curve using monte carlo method
f(x) = x*(x-1)
'''
#plotting the curve
y_array = []
x_array = []
for i in np.arange(0,1,0.01):
y = i*(i-1)
x_array.append(i)
y_array.append(y)
plt.figure(figsize=(30,10))
plt.plot(x_array , y_array,'b')
N = 10000 #number of random points
x_in = []
x_out = []
y_in = []
y_out = []
num_points = 0
Area_rect = -0.25
for i in range(N):
x = random.uniform(0,1)
y = random.uniform(0 , min(y_array))
x_array.append(x)
y_array.append(y)
y_actual = x*(x-1)
    if y > y_actual: # count points lying between the curve and the x-axis
num_points+=1
x_in.append(x)
y_in.append(y)
else:
x_out.append(x)
y_out.append(y)
Area = 0.25*num_points/N # bounding rectangle area is 1 * |min(y)| = 0.25
print(Area)
plt.scatter(x_in , y_in , color = 'g')
plt.scatter(x_out, y_out , color = 'r')
plt.show()
'''
function that finds monte carlo integral of any function
'''
def MonteCarlo(func , limits , N): #function , upper and lower limits , number of random points
a , b = limits[0] , limits[1]
integral = 0
for i in range(N):
x_rand = random.uniform(a , b)
#print(x_rand)
y = func(x_rand)
#print(y)
integral+=y
area = (b-a)/float(N)*integral
print("Area of function between {} {} is {}".format(a , b , area))
#testing monte carlo method function
MonteCarlo(np.sin, [0,np.pi],100000) #integral of sinx between 0 and pi
def func(x):
return x**2
MonteCarlo(func , [0,2] , 1000)
```
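For reference, the point-counting loop for pi above can be vectorized with NumPy; a minimal sketch with a fixed seed:

```python
import numpy as np

# Vectorized Monte Carlo estimate of pi: fraction of uniform points
# in the unit square that fall inside the quarter circle, times 4.
rng = np.random.default_rng(0)
n = 100_000
x, y = rng.random(n), rng.random(n)
pi_est = 4 * np.mean(x**2 + y**2 <= 1.0)
print(pi_est)  # close to 3.14159 for large n
```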
Analysing GPS data from Jaume University
Defining functions
```
import json
import math
import numpy as np
import matplotlib.pyplot as plt
def getmeasurementTimestamp(item):
return int(item['measurementTimestamp'])
def getProcessingTimestamp(item):
return int(item['processingTimestamp'])
def get_x_error(item): #the error in the data is the stdev of the sample, we compute the error of the estimation (the sample mean)
return item['value']['averagecoordinate']['error']['coordinates'][0]/math.sqrt(item['value']['trackeeHistory']['nMeasurements'])
def get_y_error(item):
return item['value']['averagecoordinate']['error']['coordinates'][1]/math.sqrt(item['value']['trackeeHistory']['nMeasurements'])
def get_fitted(item):
return item['value']['trackeeHistory']['fitStatus']
def get_x_sample_error(item):
return item['value']['averagecoordinate']['error']['coordinates'][0]
def get_y_sample_error(item):
return item['value']['averagecoordinate']['error']['coordinates'][1]
def get_probChi2(item):
return item['value']['trackeeHistory']['probChi2']
def get_Chi2PerDof(item):
return item['value']['trackeeHistory']['chi2PerDof']
def plotHistogramOfDictionary(dictionary, xlabel, ylabel, nbins):
dictionaryList = []
for address in dictionary.keys():
dictionaryList.append(dictionary[address])
dictArray = np.array(dictionaryList)
plt.hist(dictArray, bins = nbins)
plt.ylabel(ylabel)
plt.xlabel(xlabel)
axes = plt.gca()
plt.show()
def getX(line):
coordinates = line["value"]["averagecoordinate"]["avg"]["coordinates"]
return coordinates[0]
def getY(line):
coordinates = line["value"]["averagecoordinate"]["avg"]["coordinates"]
return coordinates[1]
```
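As a sanity check on the error helpers above, which divide the stored standard deviation by sqrt(nMeasurements), i.e. the standard error of the mean (whether the stored value is the population or sample standard deviation is an assumption here):

```python
import math
import numpy as np

# Classic textbook sample with population stdev exactly 2.0.
sample = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Standard error of the mean: stdev / sqrt(n).
sem = sample.std() / math.sqrt(len(sample))
print(sem)  # -> 0.7071... (2.0 / sqrt(8))
```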
Reading GPS data
```
for i in [0]:
data = []
with open("F:/ArenaData/Fingerprinting/fingerprints_GPS.json") as f:
data = f.readlines()
json_lines = []
mac_adresses = []
for line in data:
jsline = json.loads(line)
jsline["measurementTimestamp"]/=1000
json_lines.append(jsline)#now json_lines contains all lines of data
mac_adresses.append(jsline["value"]["sourceMac"]) # mac_addresses is a list of address per line
#sorting by time
json_lines.sort(key = getmeasurementTimestamp) # now json_lines is sorted by time
```
Computation of error etc
```
minTime = getmeasurementTimestamp(json_lines[0])
maxTime = getmeasurementTimestamp(json_lines[len(json_lines) - 1])
print("minTime="+ str(minTime))
print("maxTime="+ str(maxTime))
timeMinutes = (maxTime - minTime)/60
print("time in seconds = ", str(maxTime - minTime))
print("time in hours = ", str(timeMinutes/60))
```
Computing FirstTimeSeen and LastTimeSeen for every address
```
concertFinishedTimestamp=maxTime
FirstTimeSeen=dict()
LastTimeSeen=dict()
for jsline in json_lines:
address = jsline["value"]["sourceMac"]
time = getmeasurementTimestamp(jsline)
if address in FirstTimeSeen:
if time < FirstTimeSeen[address]:
FirstTimeSeen[address] = time
else:
FirstTimeSeen[address] = time
if address in LastTimeSeen:
if time > LastTimeSeen[address]:
LastTimeSeen[address] = time
else:
LastTimeSeen[address] = time
```
Computing dwell time, number of persistent addresses, and number of addresses visible in every second
```
DwellTime = dict()
DwellTimeDuringConcert = dict()
numberOfAdressesAtConcert=0
for address in LastTimeSeen.keys():
DwellTime[address] = int((LastTimeSeen[address] - FirstTimeSeen[address]) /60) # in minutes
if LastTimeSeen[address] <= concertFinishedTimestamp:
numberOfAdressesAtConcert += 1
DwellTimeDuringConcert[address] = DwellTime[address]
print('number of addresses detected during all hours:')
print(numberOfAdressesAtConcert)
longTermAddresses=[]
numberOfAddresses = []
AddressesInSec = []
for jsline in json_lines:
sec = int(math.floor((getmeasurementTimestamp(jsline)- minTime)))
#print(str(sec))
address = jsline["value"]["sourceMac"]
if DwellTime[address] > 0:# i.e. >1 minutes
longTermAddresses.append(address)
if len(AddressesInSec) <= sec:
while len(AddressesInSec) <= sec:
AddressesInSec.append([])
AddressesInSec[sec].append(address)
else:
if address not in AddressesInSec[sec]:
AddressesInSec[sec].append(address)
for setje in AddressesInSec:
numberOfAddresses.append(len(setje))
longTermCount = len(set(longTermAddresses))
print("Long term persistent addresses (>=1 min)= " + str(longTermCount))
average = 0
maxN = 0
for addresses in AddressesInSec:
average += len(addresses)
maxN = max(len(addresses), maxN)
average /= len(AddressesInSec)
print("average Number Of Addresses Per Second " + str(average))
print(maxN)
plotHistogramOfDictionary(DwellTimeDuringConcert, "dwell time(minutes)", "number of addresses", 30)
```
Drawing how many addresses per second are visible
```
import matplotlib.pyplot as plt
plt.plot(numberOfAddresses)
plt.ylabel('addresses present')
plt.xlabel('sec')
axes = plt.gca()
axes.set_xlim([1804000,1817000])
axes.set_ylim([0,max(numberOfAddresses)])
plt.show()
```
Just a dictionary that indicates for every address whether it is randomized or not
```
Randomized = dict()
PersistentRandomized = dict()
count0 = 0
count1 = 0
for line in json_lines:
address = line["value"]["sourceMac"]
if line["value"]["trackeeHistory"]["localMac"] == 1 :
count1 +=1
Randomized[address] = 1
if DwellTime[address] > 10:
if LastTimeSeen[address] < concertFinishedTimestamp:
PersistentRandomized[address] = 1
else:
count0 +=1
Randomized[address] = 0
if DwellTime[address] > 10:
if LastTimeSeen[address] < concertFinishedTimestamp:
PersistentRandomized[address] = 0
zeros=0
ones=0
zerosPersistent = 0
onesPersistent = 0
for key in Randomized.keys():
if Randomized[key]==0:
zeros +=1
else:
ones +=1
for key in PersistentRandomized.keys():
if PersistentRandomized[key]==0:
zerosPersistent +=1
else:
onesPersistent +=1
print("total number of lines with localMac == 1: " + str(count1))
print("total number of lines with localMac == 0: " + str(count0))
print("total number of addresses with localMac == 1: " + str(ones))
print("total number of addresses with localMac == 0: " + str(zeros))
print("total number of persistent addresses with localMac == 1: " + str(onesPersistent))
print("total number of persistent addresses with localMac == 0: " + str(zerosPersistent))
DwellTimeConcertRandomized = dict()
DwellTimeConcertNonRandomized = dict()
for key in DwellTimeDuringConcert.keys():
if Randomized[key] == 0:
DwellTimeConcertNonRandomized[key] = DwellTimeDuringConcert[key]
else:
DwellTimeConcertRandomized[key] = DwellTimeDuringConcert[key]
plotHistogramOfDictionary(DwellTimeConcertNonRandomized, "dwell time(minutes) of non-randomized addresses", "number of addresses", 50)
```
Computes visible addresses per time_interval with a specified localMac tag, time_interval in seconds
```
# checking the trajectory of a certain address, error used is sample error (not error of the estimation of the mean)
lines = []#will contain only the lines for that address
for line in json_lines:
address = line["value"]["sourceMac"]
if (math.floor(address) == 11):
lines.append(line)
x_coord = []
y_coord = []
times = []
x_errors = []
y_errors = []
for line in lines:
d=9
if True :
coordinates = line["value"]["averagecoordinate"]["avg"]["coordinates"]
time = math.floor(line["measurementTimestamp"])-minTime
if time not in times:
x_coord.append( coordinates[0])
y_coord.append(coordinates[1])
x_errors.append(0.1)# for a 95% confidence
y_errors.append(0.1)# for a 95% confidence
times.append(time)
print(len(x_errors))
print(len(x_coord))
#drawing the x and y coordinates
import numpy as np
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True)
#ax0.errorbar(times, x_coord, yerr=x_errors)
ax0.errorbar(times, x_coord, yerr=x_errors, fmt='o')
ax0.set_title('x-coordinate ')
plt.xlabel('time(sec)')
#ax1.errorbar(times, y_coord, yerr=y_errors)
ax1.errorbar(times, y_coord, yerr=y_errors, fmt='o')
ax1.set_title('y-coordinate ')
plt.show()
coor = dict()
count =0
for y in y_coord:
key = times[count]
coor[key] = y
count +=1
plotHistogramOfDictionary(coor, 'y_coord', 'freq', 200)
13000/60
216/60
def get_coords(phone):
# checking the trajectory of a certain address, error used is sample error (not error of the estimation of the mean)
lines = []#will contain only the lines for that address
for line in json_lines:
address = line["value"]["sourceMac"]
#if (address == '27e573c8-1640-4ea8-86d8-0733c800e9cd'):#this is the address we are checking for, non-randomized
if (math.floor(address) == phone):
#if (address == '8b8a2356-d11e-4bd5-bb35-d8370bf48b1e'):#randomized address
lines.append(line)
x_coord = []
y_coord = []
times = []
x_errors = []
y_errors = []
for line in lines:
d=9
if True :
coordinates = line["value"]["averagecoordinate"]["avg"]["coordinates"]
time = math.floor(line["measurementTimestamp"])-minTime
if time not in times and time >=1804000 and time <= 1817000:
#if time not in times and time >=1811000 and time <= 1812000:
x_coord.append( coordinates[0])
y_coord.append(coordinates[1])
x_errors.append(0.1)# for a 95% confidence
y_errors.append(0.1)# for a 95% confidence
times.append(time)
#print(len(x_errors))
#print(len(x_coord))
return (x_coord, y_coord)
```
#### Plotting the GPS trajectories of the detected phones
```
for address in range(0,26):
(x, y) = get_coords(address)
#print(y)
if len(x) > 0:
#print(address)
plt.plot(x,y)
#plt.xlim(200, 320)
#plt.ylim(20,120)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
for address in range(0,26):
(x, y) = get_coords(address)
plt.plot(x,y)
print(address)
plt.show()
```
# Face Generation
In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!
The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.
### Get the Data
You'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.
This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.
### Pre-processed Data
Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.
<img src='assets/processed_face_data.png' width=60% />
> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)
This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
```
# can comment out after executing
#!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
```
## Visualize the CelebA Data
The [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)#RGB_Images) each.
### Pre-process and Load the Data
Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.
> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**.
#### Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:
* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.
* Your function should return a DataLoader that shuffles and batches these Tensor images.
#### ImageFolder
To create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
```
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
transform = transforms.Compose([transforms.Resize(image_size), # resize shorter side to image_size
transforms.ToTensor()])
dataset = datasets.ImageFolder(data_dir, transform)
loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)
return loader
```
## Create a DataLoader
#### Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.
Call the above function and create a dataloader to view images.
* You can decide on any reasonable `batch_size` parameter
* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
```
# Define function hyperparameters
batch_size = 128
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
```
Next, you can view some images! You should see square images of somewhat-centered faces.
Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested `imshow` code is below, but it may not be perfect.
```
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter) # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
```
#### Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1
You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
```
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min_val, max_val = feature_range # avoid shadowing the built-in min/max
x = x * (max_val - min_val) + min_val
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful.
#### Exercise: Complete the Discriminator class
* The inputs to the discriminator are 32x32x3 tensor images
* The output should be a single value that will indicate whether a given image is real or fake
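Before filling in the layers, it can help to check the spatial-size arithmetic. For a `Conv2d` layer the output size is `floor((H_in + 2*padding - kernel_size) / stride) + 1`; a small stdlib-only helper (our own, not part of the project template) confirms the 32 → 16 → 8 → 4 progression used below:

```python
import math

def conv_out_size(h_in, kernel_size, stride=2, padding=1):
    """Spatial output size of nn.Conv2d:
    floor((h_in + 2*padding - kernel_size) / stride) + 1."""
    return math.floor((h_in + 2 * padding - kernel_size) / stride) + 1

# Strided 4x4 convolutions halve a 32x32 input: 32 -> 16 -> 8 -> 4
h = 32
for _ in range(3):
    h = conv_out_size(h, kernel_size=4)
print(h)  # 4

# A 1x1 conv with stride 1 and no padding keeps the size unchanged
print(conv_out_size(4, kernel_size=1, stride=1, padding=0))  # 4
```

With these sizes, a discriminator ending with `conv_dim*16` channels on a 4x4 map flattens to `conv_dim*16*4*4` features for the final linear layer.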
```
import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a convolutional layer, with optional batch normalization.
"""
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append conv layer
layers.append(conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
# using Sequential container
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
self.conv_dim = conv_dim
# [3, 32, 32] input
self.conv1 = conv(3, conv_dim, 4, batch_norm=False) # first layer, no batch_norm
# [10, 16, 16] input
self.conv2 = conv(conv_dim, conv_dim*2, 4)
# [20, 8, 8] input
self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
# [40, 4, 4] input
self.conv4 = conv(conv_dim*4, conv_dim*8, 1, padding=0, stride=1)
# [80, 4, 4] output
self.conv5 = conv(conv_dim*8, conv_dim*16, 1, padding=0, stride=1)
# [160, 4, 4] output
self.out_dim = self.conv_dim *16*4*4
# final, fully-connected layer
self.fc = nn.Linear(self.out_dim, 1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
#print(x.shape)
x = F.leaky_relu(self.conv1(x))
#print(x.shape)
x = F.leaky_relu(self.conv2(x))
#print(x.shape)
x = F.leaky_relu(self.conv3(x))
#print(x.shape)
x = F.leaky_relu(self.conv4(x))
#print(x.shape)
x = F.leaky_relu(self.conv5(x))
#print(x.shape)
x = x.view(-1,self.out_dim)
#print(x.shape)
x = self.fc(x)
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
```
## Generator
The generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs.
#### Exercise: Complete the Generator class
* The inputs to the generator are vectors of some length `z_size`
* The output should be an image of shape `32x32x3`
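The generator runs the same size arithmetic in reverse: for `ConvTranspose2d` (without output padding) the output size is `(H_in - 1)*stride - 2*padding + kernel_size`. A quick sketch (the helper name is ours, not part of the template):

```python
def deconv_out_size(h_in, kernel_size, stride=2, padding=1):
    """Spatial output size of nn.ConvTranspose2d (no output_padding):
    (h_in - 1) * stride - 2 * padding + kernel_size."""
    return (h_in - 1) * stride - 2 * padding + kernel_size

# Strided 4x4 transpose convolutions double a 4x4 feature map: 4 -> 8 -> 16 -> 32
h = 4
for _ in range(3):
    h = deconv_out_size(h, kernel_size=4)
print(h)  # 32
```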
```
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a transposed-convolutional layer, with optional batch normalization.
"""
## TODO: Complete this function
## create a sequence of transpose + optional batch norm layers
layers = []
transpose_conv_layer = nn.ConvTranspose2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append transpose convolutional layer
layers.append(transpose_conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.conv_dim = conv_dim
self.out_dim = self.conv_dim *16*4*4
# first, fully-connected layer
self.fc = nn.Linear(z_size, self.out_dim)
# transpose conv layers
#[160, 4, 4] input
self.dconv1 = deconv(conv_dim*16, conv_dim*8, 1, padding=0, stride=1)
#[80, 4, 4] input
self.dconv2 = deconv(conv_dim*8, conv_dim*4, 1, padding=0, stride=1)
#[40, 4, 4] input
self.dconv3 = deconv(conv_dim*4, conv_dim*2, 4)
#[20, 8, 8] input
self.dconv4 = deconv(conv_dim*2, conv_dim, 4)
#[10, 16, 16] input
self.dconv5 = deconv(conv_dim, 3, 4, batch_norm=False)
#[3, 32, 32] output
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
#print(x.shape)
x = self.fc(x)
#print(x.shape)
x = x.view(-1, self.conv_dim*16, 4, 4)
#print(x.shape)
x = F.relu(self.dconv1(x))
#print(x.shape)
x = F.relu(self.dconv2(x))
#print(x.shape)
x = F.relu(self.dconv3(x))
#print(x.shape)
x = F.relu(self.dconv4(x))
#print(x.shape)
x = self.dconv5(x)
#print(x.shape)
x = torch.tanh(x) # F.tanh is deprecated in recent PyTorch
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
```
## Initialize the weights of your networks
To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. The [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) states:
> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.
So, your next task will be to define a weight initialization function that does just this!
You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function.
#### Exercise: Complete the weight initialization function
* This should initialize only **convolutional** and **linear** layers
* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.
* The bias terms, if they exist, may be left alone or set to 0.
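As a stdlib-only illustration of what this initialization produces (not the PyTorch initializer itself), we can draw a batch of values from a zero-centered normal with standard deviation 0.02 and check the empirical statistics:

```python
import math
import random

random.seed(0)  # fixed seed so the numbers are reproducible
weights = [random.gauss(0.0, 0.02) for _ in range(10_000)]

mean = sum(weights) / len(weights)
std = math.sqrt(sum((w - mean) ** 2 for w in weights) / len(weights))
print(round(mean, 4), round(std, 4))  # mean close to 0.0, std close to 0.02
```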
```
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
m.weight.data.normal_(mean=0,std=0.02)
if hasattr(m, 'bias') and m.bias is not None:
m.bias.data.fill_(0)
```
## Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
```
#### Exercise: Define model hyperparameters
```
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
```
### Training on GPU
Check if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that
>* Models,
* Model inputs, and
* Loss function arguments
are moved to GPU, where appropriate.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses for both types of adversarial networks.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
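To make those 1/0 targets concrete, here is a minimal pure-Python sketch of what `nn.BCEWithLogitsLoss` computes for a single logit (the helper names are our own):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_with_logits(logit, label):
    """Binary cross-entropy applied to a raw logit:
    -[label*log(sigmoid(logit)) + (1-label)*log(1-sigmoid(logit))]."""
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# A confidently "real" logit scored against label 1 gives a small loss;
# the same logit scored against label 0 (the fake target) gives a large loss.
print(round(bce_with_logits(3.0, 1), 4))  # 0.0486
print(round(bce_with_logits(3.0, 0), 4))  # 3.0486
```

So `d_loss = d_real_loss + d_fake_loss` is small only when the discriminator pushes real logits high and fake logits low at the same time.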
### Generator Loss
The generator loss looks similar, only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*.
#### Exercise: Complete real and fake loss functions
**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
```
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
batch_size = D_out.size(0)
labels = torch.ones(batch_size)
# move labels to GPU if available
if train_on_gpu:
labels = labels.cuda()
# binary cross entropy with logits loss
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
batch_size = D_out.size(0)
labels = torch.zeros(batch_size) # fake labels = 0
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
```
## Optimizers
#### Exercise: Define optimizers for your Discriminator (D) and Generator (G)
Define optimizers for your models with appropriate hyperparameters.
```
import torch.optim as optim
lr = .0002
beta1=0.5
beta2=0.999
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
```
---
## Training
Training will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.
* You should train the discriminator by alternating on real and fake images
* Then the generator, which tries to trick the discriminator and should have an opposing loss function
#### Saving Samples
You've been given some code to print out some loss statistics and save some generated "fake" samples.
#### Exercise: Complete the training function
Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
```
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed latent vectors for sampling. These are held
# constant throughout training and let us track the generator's progress
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train with real images
d_optimizer.zero_grad()
# Compute the discriminator losses on real images
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
# move z to GPU, if available
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
# add up loss and perform backprop
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
# 1. Train with fake images and flipped labels
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
D_fake = D(fake_images)
g_loss = real_loss(D_fake) # use real loss to flip labels
# perform backprop
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
```
Set your number of training epochs and train your GAN!
```
# set number of epochs
n_epochs = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
```
## Training loss
Plot the training losses for the generator and discriminator, recorded after each epoch.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1) * 255 / 2).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
```
### Question: What do you notice about your generated samples and how might you improve this model?
When you answer this question, consider the following factors:
* The dataset is biased; it is made of "celebrity" faces that are mostly white
* Model size; larger models have the opportunity to learn more features in a data feature space
* Optimization strategy; optimizers and number of epochs affect your final result
**Answer:** (Write your answer in this cell)
### Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "problem_unittests.py" file in your submission.
```
%matplotlib inline
import pymc3 as pm
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
palette = 'muted'
sns.set_palette(palette); sns.set_color_codes(palette)
np.set_printoptions(precision=2)
```
# Simple example
```
clusters = 3
n_cluster = [90, 50, 75]
n_total = sum(n_cluster)
means = [9, 21, 35]
std_devs = [2, 2, 2]
# example of mixture data
mix = np.random.normal(np.repeat(means, n_cluster), np.repeat(std_devs, n_cluster))
sns.kdeplot(np.array(mix))
plt.xlabel('$x$', fontsize=14);
# Author: Thomas Boggs
import matplotlib.tri as tri
from functools import reduce
from matplotlib import ticker, cm
_corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
_triangle = tri.Triangulation(_corners[:, 0], _corners[:, 1])
_midpoints = [(_corners[(i + 1) % 3] + _corners[(i + 2) % 3]) / 2.0 for i in range(3)]
def xy2bc(xy, tol=1.e-3):
'''Converts 2D Cartesian coordinates to barycentric.
Arguments:
xy: A length-2 sequence containing the x and y value.
'''
s = [(_corners[i] - _midpoints[i]).dot(xy - _midpoints[i]) / 0.75 for i in range(3)]
return np.clip(s, tol, 1.0 - tol)
class Dirichlet(object):
def __init__(self, alpha):
'''Creates Dirichlet distribution with parameter `alpha`.'''
from math import gamma
from operator import mul
self._alpha = np.array(alpha)
self._coef = gamma(np.sum(self._alpha)) / reduce(mul, [gamma(a) for a in self._alpha])
def pdf(self, x):
'''Returns pdf value for `x`.'''
from operator import mul
return self._coef * reduce(mul, [xx ** (aa - 1)
for (xx, aa) in zip(x, self._alpha)])
def sample(self, N):
'''Generates a random sample of size `N`.'''
return np.random.dirichlet(self._alpha, N)
def draw_pdf_contours(dist, nlevels=100, subdiv=8, **kwargs):
'''Draws pdf contours over an equilateral triangle (2-simplex).
Arguments:
dist: A distribution instance with a `pdf` method.
nlevels (int): Number of contours to draw.
subdiv (int): Number of recursive mesh subdivisions to create.
kwargs: Keyword args passed on to `plt.tricontourf`.
'''
refiner = tri.UniformTriRefiner(_triangle)
trimesh = refiner.refine_triangulation(subdiv=subdiv)
pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]
plt.tricontourf(trimesh, pvals, nlevels, cmap=cm.Blues, **kwargs)
plt.axis('equal')
plt.xlim(0, 1)
plt.ylim(0, 0.75**0.5)
plt.axis('off')
alphas = [[0.5] * 3, [1] * 3, [10] * 3, [2, 5, 10]]
for (i, alpha) in enumerate(alphas):
plt.subplot(2, 2, i + 1)
dist = Dirichlet(alpha)
draw_pdf_contours(dist)
plt.title(r'$\alpha$ = ({:.1f}, {:.1f}, {:.1f})'.format(*alpha), fontsize=16)
with pm.Model() as model_kg:
# Each observation is assigned to a cluster/component with probability p
p = pm.Dirichlet('p', a=np.ones(clusters))
category = pm.Categorical('category', p=p, shape=n_total)
# Known Gaussian means
means = pm.math.constant([10, 20, 35])
y = pm.Normal('y', mu=means[category], sd=2, observed=mix)
step1 = pm.ElemwiseCategorical(vars=[category], values=range(clusters))
## The CategoricalGibbsMetropolis is a recent addition to PyMC3;
## I have not yet found the time to experiment with it.
#step1 = pm.CategoricalGibbsMetropolis(vars=[category])
step2 = pm.Metropolis(vars=[p])
trace_kg = pm.sample(10000, step=[step1, step2],chains=1,njobs=1)
chain_kg = trace_kg[1000:]
varnames_kg = ['p']
pm.traceplot(chain_kg, varnames_kg);
print(pm.summary(chain_kg, varnames_kg))
n_cluster_array = np.array(n_cluster)
print('\n')
print('Actual values of cluster fractions: {}'.format(n_cluster_array/n_cluster_array.sum()))
with pm.Model() as model_ug:
# Each observation is assigned to a cluster/component with probability p
p = pm.Dirichlet('p', a=np.ones(clusters))
category = pm.Categorical('category', p=p, shape=n_total)
# We estimate the unknown Gaussian means and standard deviation
means = pm.Normal('means', mu=[10, 20, 35], sd=2, shape=clusters)
sd = pm.HalfCauchy('sd', 5)
y = pm.Normal('y', mu=means[category], sd=sd, observed=mix)
step1 = pm.ElemwiseCategorical(vars=[category], values=range(clusters))
step2 = pm.Metropolis(vars=[means, sd, p])
trace_ug = pm.sample(10000, step=[step1, step2],chains=1,njobs=1)
chain_ug = trace_ug[1000:]
varnames_ug = ['means', 'sd', 'p']
pm.traceplot(chain_ug, varnames_ug);
print(pm.summary(chain_ug, varnames_ug))
means_array = np.array([9, 21, 35])
std_devs = [2, 2, 2]
print('\n')
print('Actual values of cluster fractions: {}'.format(n_cluster_array/n_cluster_array.sum()))
print('Actual values of cluster means: {}'.format(means_array))
print('Actual values of cluster sd: {}'.format(std_devs))
ppc = pm.sample_ppc(chain_ug, 50, model_ug)
for i in ppc['y']:
sns.kdeplot(i, alpha=0.1, color='b')
sns.kdeplot(np.array(mix), lw=2, color='k');
plt.xlabel('$x$', fontsize=14);
```
#### Note the higher uncertainty where the data overlap and the reduced uncertainty at the high/low limits
## Marginalized Gaussian Mixture model
### In the previous models we explicitly defined the latent variable $z$ in the model. This is inefficient for sampling. PyMC3 offers the ability to model the outcome conditionally on $z$ as $p(y|z,\theta)$ and marginalize $z$ out to obtain $p(y|\theta)$
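Concretely, summing the assignment out turns the likelihood into a weighted sum of component densities, $p(y|\theta)=\sum_k w_k\,N(y|\mu_k,\sigma)$. A stdlib-only sketch of that marginal density (not the PyMC3 internals):

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sd):
    """Marginal density p(x | theta) = sum_k w_k * Normal(x | mu_k, sd);
    the discrete cluster assignment z has been summed out."""
    return sum(w * normal_pdf(x, mu, sd) for w, mu in zip(weights, mus))

weights = [90 / 215, 50 / 215, 75 / 215]  # cluster fractions from the simulated data
mus = [9, 21, 35]
print(round(mixture_pdf(9.0, weights, mus, sd=2), 4))  # 0.0835
```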
```
with pm.Model() as model_mg:
p = pm.Dirichlet('p', a=np.ones(clusters))
means = pm.Normal('means', mu=[10, 20, 35], sd=2, shape=clusters)
sd = pm.HalfCauchy('sd', 5)
y = pm.NormalMixture('y', w=p, mu=means, sd=sd, observed=mix)
trace_mg = pm.sample(5000, chains=1,njobs=1)
chain_mg = trace_mg[500:]
varnames_mg = ['means', 'sd', 'p']
pm.traceplot(chain_mg, varnames_mg);
```
## Zero inflated Poisson model
```
lam_params = [0.5, 1.5, 3, 8]
k = np.arange(0, max(lam_params) * 3)
for lam in lam_params:
y = stats.poisson(lam).pmf(k)
plt.plot(k, y, 'o-', label="$\\lambda$ = {:3.1f}".format(lam))
plt.legend();
plt.xlabel('$k$', fontsize=14);
plt.ylabel('$pmf(k)$', fontsize=14);
np.random.seed(42)
n = 100
lam_true = 2.5 # Poisson rate
pi = 0.2 # probability of extra-zeros (pi = 1-psi)
# Simulate some data
counts = np.array([(np.random.random() > pi) * np.random.poisson(lam_true) for i in range(n)])
plt.hist(counts, bins=30);
with pm.Model() as ZIP:
psi = pm.Beta('psi', 1, 1)
lam = pm.Gamma('lam', 2, 0.1)
y_pred = pm.ZeroInflatedPoisson('y_pred', theta=lam, psi=psi, observed=counts)
trace_ZIP = pm.sample(5000,chains=1,njobs=1)
chain_ZIP = trace_ZIP[100:]
pm.traceplot(chain_ZIP);
pm.summary(chain_ZIP)
```
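A quick way to see what the ZIP likelihood encodes is to evaluate its pmf directly. In PyMC3's parameterization `psi` is the probability of the Poisson component, so `1 - psi` (the `pi` used when simulating the data above) is the probability of a structural zero. A stdlib-only sketch:

```python
import math

def zip_pmf(k, psi, lam):
    """Zero-inflated Poisson pmf:
    P(0) = (1 - psi) + psi * exp(-lam);
    P(k) = psi * Poisson(k; lam) for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return (1 - psi) + psi * poisson
    return psi * poisson

# With psi = 0.8 (20% extra zeros) and lam = 2.5, zeros are much more likely
# than a plain Poisson(2.5) predicts.
print(round(zip_pmf(0, psi=0.8, lam=2.5), 4))  # 0.2657
print(round(math.exp(-2.5), 4))                # 0.0821 (plain Poisson P(0))
```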
## Zero inflated Poisson regression
```
#Kruschke plot
fish_data = pd.read_csv('fish.csv')
fish_data.head()
plt.hist(fish_data['count'], bins=20, density=True); # `normed` was removed in newer matplotlib
with pm.Model() as ZIP_reg:
psi = pm.Beta('psi', 1, 1)
alpha = pm.Normal('alpha', 0, 10)
beta = pm.Normal('beta', 0, 10, shape=2)
lam = pm.math.exp(alpha + beta[0] * fish_data['child'] + beta[1] * fish_data['camper'])
y = pm.ZeroInflatedPoisson('y', theta=lam, psi=psi, observed=fish_data['count'])
trace_ZIP_reg = pm.sample(2000,chains=1,njobs=1)
chain_ZIP_reg = trace_ZIP_reg[100:]
pm.traceplot(chain_ZIP_reg);
pm.summary(chain_ZIP_reg)
children = [0, 1, 2, 3, 4]
fish_count_pred_0 = []
fish_count_pred_1 = []
thin = 5
# calculate the expected lambda with and without a camper for different numbers of children
# note lambda from the model is exp(a+bX)
for n in children:
without_camper = chain_ZIP_reg['alpha'][::thin] + chain_ZIP_reg['beta'][:,0][::thin] * n
with_camper = without_camper + chain_ZIP_reg['beta'][:,1][::thin]
fish_count_pred_0.append(np.exp(without_camper))
fish_count_pred_1.append(np.exp(with_camper))
plt.plot(children, fish_count_pred_0, 'bo', alpha=0.01)
plt.plot(children, fish_count_pred_1, 'ro', alpha=0.01)
plt.xticks(children);
plt.xlabel('Number of children', fontsize=14)
plt.ylabel('Fish caught', fontsize=14)
plt.plot([], 'bo', label='without camper')
plt.plot([], 'ro', label='with camper')
plt.legend(fontsize=14);
```
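The predictions plotted above come from the log link: the Poisson rate is $\lambda=\exp(\alpha+\beta_0\,\text{child}+\beta_1\,\text{camper})$. A small sketch with made-up coefficients (the notebook estimates the real ones from the trace):

```python
import math

def expected_count(alpha, beta_child, beta_camper, n_children, camper):
    """Poisson rate under the ZIP regression's log link:
    lambda = exp(alpha + beta_child * children + beta_camper * camper)."""
    return math.exp(alpha + beta_child * n_children + beta_camper * camper)

# Illustrative (made-up) values: each extra child lowers the log-rate,
# having a camper raises it.
alpha, b_child, b_camper = 1.5, -1.0, 0.8
for n in range(3):
    print(n, round(expected_count(alpha, b_child, b_camper, n, camper=1), 2))
```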
## Robust logistic Regression
```
iris = sns.load_dataset("iris")
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
# contaminate the dataset with class-1 samples that have unusually small sepal lengths
y_0 = np.concatenate((y_0, np.ones(6)))
x_0 = np.concatenate((x_0, [4.2, 4.5, 4.0, 4.3, 4.2, 4.4]))
x_0_m = x_0 - x_0.mean()
plt.plot(x_0, y_0, 'o', color='k');
with pm.Model() as model_rlg:
alpha_tmp = pm.Normal('alpha_tmp', mu=0, sd=100)
beta = pm.Normal('beta', mu=0, sd=10)
mu = alpha_tmp + beta * x_0_m
theta = pm.Deterministic('theta', 1 / (1 + pm.math.exp(-mu)))
# add the mixture here as a combination of the logistic derived theta
# and a random pi from a Beta distribution
pi = pm.Beta('pi', 1, 1)
p = pi * 0.5 + (1 - pi) * theta
# correct alpha from centering
alpha = pm.Deterministic('alpha', alpha_tmp - beta * x_0.mean())
bd = pm.Deterministic('bd', -alpha/beta)
yl = pm.Bernoulli('yl', p=p, observed=y_0)
trace_rlg = pm.sample(2000, njobs=1,chains=1)
varnames = ['alpha', 'beta', 'bd', 'pi']
pm.traceplot(trace_rlg, varnames);
pm.summary(trace_rlg, varnames)
theta = trace_rlg['theta'].mean(axis=0)
idx = np.argsort(x_0)
plt.plot(x_0[idx], theta[idx], color='b', lw=3);
plt.axvline(trace_rlg['bd'].mean(), ymax=1, color='r')
bd_hpd = pm.hpd(trace_rlg['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='r', alpha=0.5)
plt.plot(x_0, y_0, 'o', color='k');
theta_hpd = pm.hpd(trace_rlg['theta'])[idx]
plt.fill_between(x_0[idx], theta_hpd[:,0], theta_hpd[:,1], color='b', alpha=0.5);
plt.xlabel(x_n, fontsize=16);
plt.ylabel('$\\theta$', rotation=0, fontsize=16);
import sys, IPython, scipy, matplotlib, platform
print("This notebook was created on a %s computer running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nSciPy %s\nMatplotlib %s\nSeaborn %s\nPandas %s" % (platform.machine(), platform.platform(), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, scipy.__version__, matplotlib.__version__, sns.__version__, pd.__version__)) # platform.linux_distribution() was removed in Python 3.8
```
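The key line in the model above is `p = pi * 0.5 + (1 - pi) * theta`: with probability `pi` the response is treated as a coin flip, which keeps predicted probabilities away from 0 and 1 so that a few outliers cannot dominate the likelihood. A stdlib-only sketch of this robust link:

```python
import math

def robust_logistic(x, alpha, beta, pi):
    """Robust link: mixture of a fair coin (prob pi) and a logistic curve.
    Outputs are bounded inside [pi/2, 1 - pi/2]."""
    theta = 1.0 / (1.0 + math.exp(-(alpha + beta * x)))
    return pi * 0.5 + (1 - pi) * theta

# Even for extreme inputs the probability saturates at pi/2 and 1 - pi/2
# (illustrative parameter values, not the fitted ones).
print(robust_logistic(-100.0, alpha=0.0, beta=1.0, pi=0.1))  # ~0.05
print(robust_logistic(100.0, alpha=0.0, beta=1.0, pi=0.1))   # ~0.95
```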
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell
# install NeMo
BRANCH = 'v1.0.0b3'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp]
# If you're not using Colab, you might need to upgrade jupyter notebook to avoid the following error:
# 'ImportError: IProgress not found. Please update jupyter and ipywidgets.'
! pip install ipywidgets
! jupyter nbextension enable --py widgetsnbextension
# Please restart the kernel after running this cell
from nemo.collections import nlp as nemo_nlp
from nemo.utils.exp_manager import exp_manager
import os
import wget
import torch
import pytorch_lightning as pl
from omegaconf import OmegaConf
```
In this tutorial, we are going to describe how to finetune a BERT-like model based on [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) on [GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding](https://openreview.net/pdf?id=rJ4km2R5t7).
# GLUE tasks
GLUE Benchmark includes 9 natural language understanding tasks:
## Single-Sentence Tasks
* CoLA - [The Corpus of Linguistic Acceptability](https://arxiv.org/abs/1805.12471) is a set of English sentences from published linguistics literature. The task is to predict whether a given sentence is grammatically correct or not.
* SST-2 - [The Stanford Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence: positive or negative.
## Similarity and Paraphrase tasks
* MRPC - [The Microsoft Research Paraphrase Corpus](https://www.aclweb.org/anthology/I05-5002.pdf) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
* QQP - [The Quora Question Pairs](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
* STS-B - [The Semantic Textual Similarity Benchmark](https://arxiv.org/abs/1708.00055) is a collection of sentence pairs drawn from news headlines, video, and image captions, and natural language inference data. The task is to determine how similar two sentences are.
## Inference Tasks
* MNLI - [The Multi-Genre Natural Language Inference Corpus](https://cims.nyu.edu/~sbowman/multinli/multinli_0.9.pdf) is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The task has the matched (in-domain) and mismatched (cross-domain) sections.
* QNLI - [The Stanford Question Answering Dataset](https://nlp.stanford.edu/pubs/rajpurkar2016squad.pdf) is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question. The task is to determine whether the context sentence contains the answer to the question.
* RTE The Recognizing Textual Entailment (RTE) datasets come from a series of annual [textual entailment challenges](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment). The task is to determine whether the second sentence is the entailment of the first one or not.
* WNLI - The Winograd Schema Challenge is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices (Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. 2012).
All tasks are classification tasks, except for the STS-B task which is a regression task. All classification tasks are 2-class problems, except for the MNLI task which has 3-classes.
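As a quick reference, the task properties described above can be captured in a small lookup table (a sketch for illustration; the keys follow the `TASK` identifiers used later in this notebook):

```python
# Number of classes per GLUE task, as described above.
# None marks the regression task (STS-B); MNLI is the only 3-class task.
GLUE_NUM_CLASSES = {
    "cola": 2, "sst-2": 2, "mrpc": 2, "qqp": 2,
    "qnli": 2, "rte": 2, "wnli": 2,
    "mnli": 3,
    "sts-b": None,
}

def is_regression(task):
    return GLUE_NUM_CLASSES[task] is None
```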
More details about the GLUE benchmark can be found [here](https://gluebenchmark.com/).
# Datasets
**To proceed further, you need to download the GLUE data.** For example, you can download [this script](https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py) using `wget` and then execute it by running:
`python download_glue_data.py`
Use `--tasks TASK` if you only need the datasets for selected GLUE tasks.
After running the above commands, you will have a folder `glue_data` with data folders for every GLUE task. For example, data for MRPC task would be under glue_data/MRPC.
This tutorial and [examples/nlp/glue_benchmark/glue_benchmark.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/glue_benchmark/glue_benchmark.py) work with all GLUE tasks without any modifications. For this tutorial, we are going to use MRPC task.
```
# supported task names: ["cola", "sst-2", "mrpc", "sts-b", "qqp", "mnli", "qnli", "rte", "wnli"]
TASK = 'mrpc'
DATA_DIR = 'glue_data/MRPC'
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = 'glue_benchmark_config.yaml'
! ls -l $DATA_DIR
```
For each task, there are 3 files: `train.tsv`, `dev.tsv`, and `test.tsv`. Note that MNLI has 2 dev sets, matched and mismatched; evaluation on both dev sets will be done automatically.
```
# let's take a look at the training data
! head -n 5 {DATA_DIR}/train.tsv
```
# Model configuration
Now, let's take a closer look at the model's configuration and learn to train the model.
The GLUE model consists of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Sequence Regression module (for the STS-B task) or a Sequence Classifier module (for the rest of the tasks).
The model is defined in a config file which declares multiple important sections. They are:
- **model**: All arguments that are related to the Model - language model, a classifier, optimizer and schedulers, datasets and any other related information
- **trainer**: Any argument to be passed to PyTorch Lightning
```
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/glue_benchmark/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
```
# Model Training
## Setting up Data within the config
Among other things, the config file contains dictionaries called **dataset**, **train_ds** and **validation_ds**. These are the configurations used to set up the Dataset and DataLoaders for the corresponding splits.
We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step.
So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.
Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that the user is required to specify values for these fields.
Let's now add the data directory path, task name and output directory for saving predictions to the config.
```
config.model.task_name = TASK
config.model.output_dir = WORK_DIR
config.model.dataset.data_dir = DATA_DIR
```
## Building the PyTorch Lightning Trainer
NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.
Let's first instantiate a Trainer object
```
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 128
trainer = pl.Trainer(**config.trainer)
```
## Setting up a NeMo Experiment
NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
```
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
```
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model and use [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) or [AlBERT model](https://arxiv.org/abs/1909.11942):
```
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use, for example, 'megatron-bert-345m-uncased' or 'bert-base-uncased'
PRETRAINED_BERT_MODEL = "albert-base-v1"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
```
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation.
Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
```
model = nemo_nlp.models.GLUEModel(cfg=config.model, trainer=trainer)
```
## Monitoring training progress
Optionally, you can create a Tensorboard visualization to monitor training progress.
```
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
```
Note: it's recommended to finetune the model on each task separately. Also, based on [GLUE Benchmark FAQ#12](https://gluebenchmark.com/faq), there might be some differences in the dev/test distributions for the QQP task and in the train/dev distributions for the WNLI task.
```
# start model training
trainer.fit(model)
```
## Training Script
If you have NeMo installed locally, you can also train the model with [examples/nlp/glue_benchmark/glue_benchmark.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/glue_benchmark/glue_benchmark.py).
To run the training script, use:
`python glue_benchmark.py \
model.dataset.data_dir=PATH_TO_DATA_DIR \
model.task_name=TASK`
Average results after 3 runs:
| Task | Metric | ALBERT-large | ALBERT-xlarge | Megatron-345m | BERT base paper | BERT large paper |
|-------|--------------------------|--------------|---------------|---------------|-----------------|------------------|
| CoLA | Matthew's correlation | 54.94 | 61.72 | 64.56 | 52.1 | 60.5 |
| SST-2 | Accuracy | 92.74 | 91.86 | 95.87 | 93.5 | 94.9 |
| MRPC | F1/Accuracy | 92.05/88.97 | 91.87/88.61 | 92.36/89.46 | 88.9/- | 89.3/- |
| STS-B | Pearson/Spearman corr.   | 90.41/90.21  | 90.07/90.10   | 91.51/91.61   | -/85.8          | -/86.5           |
| QQP | F1/Accuracy | 88.26/91.26 | 88.80/91.65 | 89.18/91.91 | 71.2/- | 72.1/- |
| MNLI | Matched /Mismatched acc. | 86.69/86.81 | 88.66/88.73 | 89.86/89.81 | 84.6/83.4 | 86.7/85.9 |
| QNLI | Accuracy | 92.68 | 93.66 | 94.33 | 90.5 | 92.7 |
| RTE | Accuracy | 80.87 | 82.86 | 83.39 | 66.4 | 70.1 |
WNLI task was excluded from the experiments due to the problematic WNLI set.
The dev sets were used for evaluating the ALBERT and Megatron models, while the test-set results are reported for [the BERT paper](https://arxiv.org/abs/1810.04805).
The hyperparameters used to obtain the results in the table above can be found in the table below. Some tasks could be finetuned further to improve the numbers; these tables are for a baseline reference only.
Each cell in the table represents the following parameters:
Number of GPUs used / Batch Size / Learning Rate / Number of Epochs. For parameters not specified here, please refer to the defaults in the training script.
| Task | ALBERT-large | ALBERT-xlarge | Megatron-345m |
|-------|--------------|---------------|---------------|
| CoLA | 1 / 32 / 1e-5 / 3 | 1 / 32 / 1e-5 / 10 | 4 / 16 / 2e-5 / 12 |
| SST-2 | 4 / 16 / 2e-5 / 5 | 4 / 16 / 2e-5 /12 | 4 / 16 / 2e-5 / 12 |
| MRPC | 1 / 32 / 1e-5 / 5 | 1 / 16 / 2e-5 / 5 | 1 / 16 / 2e-5 / 10 |
| STS-B | 1 / 16 / 2e-5 / 5 | 1 / 16 / 4e-5 / 12 | 4 / 16 / 3e-5 / 12 |
| QQP | 1 / 16 / 2e-5 / 5 | 4 / 16 / 1e-5 / 12 | 4 / 16 / 1e-5 / 12 |
| MNLI | 4 / 64 / 1e-5 / 5 | 4 / 32 / 1e-5 / 5 | 4 / 32 / 1e-5 / 5 |
| QNLI | 4 / 16 / 1e-5 / 5 | 4 / 16 / 1e-5 / 5 | 4 / 16 / 2e-5 / 5 |
| RTE | 1 / 16 / 1e-5 / 5 | 1 / 16 / 1e-5 / 12 | 4 / 16 / 3e-5 / 12 |
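Each cell above follows the `GPUs / batch size / learning rate / epochs` format, which can be split mechanically (a hypothetical helper, for illustration only):

```python
def parse_hparam_cell(cell):
    """Parse a 'GPUs / batch / LR / epochs' cell from the table above."""
    gpus, batch, lr, epochs = (part.strip() for part in cell.split('/'))
    return int(gpus), int(batch), float(lr), int(epochs)
```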
| github_jupyter |
# Creating a tfrecords dataset for faster training
- We assume the competition's base data is under the data/public folder (train.csv, sample_submission.csv, etc.).
- We also assume that train.zip and test.zip have been extracted under data/public.
```
import os
import os.path as pth
import json
import shutil
import pandas as pd
from tqdm import tqdm
data_base_path = pth.join('data', 'public')
os.makedirs(data_base_path, exist_ok=True)
category_csv_name = 'category.csv'
category_json_name = 'category.json'
submission_csv_name = 'sample_submisstion.csv'
train_csv_name = 'train.csv'
train_zip_name = 'train.zip'
test_zip_name = 'test.zip'
```
We first place all jpg files in a single directory to make them easier to work with.
With this many files, trying to move them in one shell command makes the command line too long and raises an error.
So, although slightly more tedious, we fetch the files one by one and place them under a single path.
```
train_data_path = pth.join(data_base_path, 'train')
test_data_path = pth.join(data_base_path, 'test')
if not pth.exists(train_data_path):
os.system('unzip {}/{} -d {}'.format(data_base_path, train_zip_name, train_data_path))
# os.system('mv {}/*/*/* {}'.format(train_data_path, train_data_path))
place_name_list = [name for name in os.listdir(train_data_path) if not name.endswith('.JPG')]
for place_name in place_name_list:
place_fullpath = pth.join(train_data_path, place_name)
landmark_name_list = os.listdir(place_fullpath)
for landmark_name in landmark_name_list:
landmark_fullpath = pth.join(place_fullpath, landmark_name)
image_name_list = os.listdir(landmark_fullpath)
for image_name in image_name_list:
image_fullpath = pth.join(landmark_fullpath, image_name)
if not image_fullpath.endswith('.JPG'):
continue
shutil.move(image_fullpath, train_data_path)
if not pth.exists(test_data_path):
os.system('unzip {}/{} -d {}'.format(data_base_path, test_zip_name, test_data_path))
# os.system('mv {}/*/* {}'.format(test_data_path, test_data_path))
temp_name_list = [name for name in os.listdir(test_data_path) if not name.endswith('.JPG')]
for temp_name in temp_name_list:
temp_fullpath = pth.join(test_data_path, temp_name)
image_name_list = os.listdir(temp_fullpath)
for image_name in image_name_list:
image_fullpath = pth.join(temp_fullpath, image_name)
if not image_fullpath.endswith('.JPG'):
continue
shutil.move(image_fullpath, test_data_path)
train_csv_path = pth.join(data_base_path, train_csv_name)
train_df = pd.read_csv(train_csv_path)
train_dict = {k:v for k, v in train_df.values}
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
# submission_df.head()
train_df.head()
### Check that all files exist
for basename in tqdm(train_df['id']):
if not pth.exists(pth.join(train_data_path, basename+'.JPG')):
print(basename)
for basename in tqdm(submission_df['id']):
if not pth.exists(pth.join(test_data_path, basename+'.JPG')):
print(basename)
category_csv_path = pth.join(data_base_path, category_csv_name)
category_df = pd.read_csv(category_csv_path)
category_dict = {k:v for k, v in category_df.values}
category_df.head()
# category_json_path = pth.join(data_base_path, category_json_name)
# with open(category_json_path) as f:
# category_dict = json.load(f)
# category_dict
```
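The nested loops above can also be written more compactly with `pathlib` (an alternative sketch, not used in this notebook):

```python
import shutil
from pathlib import Path

def flatten_jpgs(root_dir):
    """Move every *.JPG found under root_dir directly into root_dir,
    mirroring the nested-loop flattening done above."""
    root = Path(root_dir)
    for jpg in root.rglob('*.JPG'):
        if jpg.parent != root:
            shutil.move(str(jpg), str(root / jpg.name))
```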
## 2. Creating tfrecords from the extracted csv and the prepared images
To reduce the overhead of reading data during training, we regenerate the training data in tfrecord format.
```
!pip install tensorflow
!pip install opencv-python
import tensorflow as tf
from tensorflow.keras.preprocessing import image
import cv2
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.model_selection import train_test_split, KFold, RepeatedKFold, GroupKFold, RepeatedStratifiedKFold
from sklearn.utils import shuffle
import numpy as np
import pandas as pd
import os
import os.path as pth
import shutil
import time
from tqdm import tqdm
import numpy as np
from PIL import Image
from IPython.display import clear_output
from multiprocessing import Process, Queue
import datetime
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _floatarray_feature(array):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=array))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _validate_text(text):
"""If text is not str or unicode, then try to convert it to str."""
if isinstance(text, str):
return text
elif isinstance(text, bytes):
return text.decode('utf8', 'ignore')
else:
return str(text)
def to_tfrecords(id_list, randmark_id_list, tfrecords_name):
print("Start converting")
options = tf.io.TFRecordOptions(compression_type = 'GZIP')
with tf.io.TFRecordWriter(path=pth.join(tfrecords_name+'.tfrecords'), options=options) as writer:
for id_, randmark_id in tqdm(zip(id_list, randmark_id_list), total=len(id_list), position=0, leave=True):
image_path = pth.join(train_data_path, id_ + '.JPG')
_binary_image = tf.io.read_file(image_path)
string_set = tf.train.Example(features=tf.train.Features(feature={
'image_raw': _bytes_feature(_binary_image),
'randmark_id': _int64_feature(randmark_id),
'id': _bytes_feature(id_.encode()),
}))
writer.write(string_set.SerializeToString())
```
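The `_validate_text` helper above can be exercised on its own; here is a self-contained version mirroring its logic (a sketch, coercing any input to `str`):

```python
def validate_text(text):
    """Self-contained version of _validate_text: coerce input to str."""
    if isinstance(text, str):
        return text
    elif isinstance(text, bytes):
        # decode bytes, dropping undecodable characters
        return text.decode('utf8', 'ignore')
    return str(text)
```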
Split off a validation set to use during training (train: 0.8, validation: 0.2).
```
train_ids, val_ids, train_landmark_ids, val_landmark_ids = train_test_split(train_df['id'], train_df['landmark_id'], test_size=0.2, random_state=7777, shuffle=True)
to_tfrecords(train_ids, train_landmark_ids, pth.join(data_base_path, 'all_train'))
to_tfrecords(val_ids, val_landmark_ids, pth.join(data_base_path, 'all_val'))
```
The test set is also converted to tfrecord format for speed.
```
def to_test_tfrecords(id_list, tfrecords_name):
print("Start converting")
options = tf.io.TFRecordOptions(compression_type = 'GZIP')
with tf.io.TFRecordWriter(path=pth.join(tfrecords_name+'.tfrecords'), options=options) as writer:
for id_ in tqdm(id_list, total=len(id_list), position=0, leave=True):
image_path = pth.join(test_data_path, id_+'.JPG')
_binary_image = tf.io.read_file(image_path)
string_set = tf.train.Example(features=tf.train.Features(feature={
'image_raw': _bytes_feature(_binary_image),
# 'randmark_id': _int64_feature(randmark_id),
'id': _bytes_feature(id_.encode()),
}))
writer.write(string_set.SerializeToString())
test_ids = submission_df['id']
to_test_tfrecords(test_ids, pth.join(data_base_path, 'test'))
```
### Usage
```
train_tfrecord_path = pth.join(data_base_path, 'all_train.tfrecords')
val_tfrecord_path = pth.join(data_base_path, 'all_val.tfrecords')
test_tfrecord_path = pth.join(data_base_path, 'test.tfrecords')
BUFFER_SIZE = 256
BATCH_SIZE = 64
NUM_CLASS = 1049
image_feature_description = {
'image_raw': tf.io.FixedLenFeature([], tf.string),
'randmark_id': tf.io.FixedLenFeature([], tf.int64),
# 'id': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
return tf.io.parse_single_example(example_proto, image_feature_description)
def map_func(target_record):
img = target_record['image_raw']
label = target_record['randmark_id']
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img, label
def prep_func(image, label):
result_image = image / 255
# result_image = tf.image.resize(image, (300, 300))
onehot_label = tf.one_hot(label, depth=NUM_CLASS)
return result_image, onehot_label
dataset = tf.data.TFRecordDataset(train_tfrecord_path, compression_type='GZIP')
dataset = dataset.map(_parse_image_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# dataset = dataset.cache()
dataset = dataset.map(map_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.map(prep_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
for batch_x, batch_y in dataset:
print(batch_x.shape, batch_y.shape)
target_class = np.argmax(batch_y[0].numpy())
print(category_dict[target_class])
plt.figure()
plt.imshow(batch_x[0].numpy())
# plt.title('{}'.format(category_dict[target_class]))
plt.show()
break
```
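The `tf.one_hot` call in `prep_func` corresponds to the following NumPy operation (an illustrative sketch for a scalar label):

```python
import numpy as np

def one_hot(label, depth):
    """NumPy equivalent of tf.one_hot(label, depth) for a scalar label."""
    vec = np.zeros(depth, dtype=np.float32)
    vec[label] = 1.0
    return vec
```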
### TFRecords vs Normal benchmark
To compare pure file I/O speed under identical conditions, we measured without cache, prefetch, multiprocessing, or other features that could affect speed.
- Using TFRecords
```
get_file(pth.join(data_base_path, 'all_train.tfrecords'))
get_file(pth.join(data_base_path, 'all_val.tfrecords'))
get_file(pth.join(data_base_path, 'test.tfrecords'))
dataset = tf.data.TFRecordDataset(train_tfrecord_path, compression_type='GZIP')
dataset = dataset.map(_parse_image_function)
for _ in tqdm(dataset, position=0, leave=True):
pass
```
- Using plain jpg files
```
train_ids, val_ids, train_landmark_ids, val_landmark_ids = train_test_split(train_df['id'], train_df['landmark_id'], test_size=0.2, random_state=7777, shuffle=True)
def load_image(image_path, label):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img, label
train_tfrecord_array = np.array([pth.join(data_base_path, 'train', img_name+'.JPG') for img_name in train_ids.values])
dataset = tf.data.Dataset.from_tensor_slices((train_tfrecord_array, train_landmark_ids))
dataset = dataset.map(load_image)
for _ in tqdm(dataset, position=0, leave=True):
pass
```
- Comparing the results, 5 min 43 s (TFRecords) vs 14 min 40 s (plain jpg): using TFRecords was about 3x faster.
- The plain jpg images likely take longer because of the overhead of decoding the jpg-compressed images into raw images.
- In my case, with a MobileNetV2-based model that takes about 8-9 minutes per epoch on a Colab T4 GPU, file I/O speed has a considerable impact on overall training speed.
- In practice we also use features such as multiprocessing and prefetch, so let's test with those enabled as well.
- Using TFRecords
```
dataset = tf.data.TFRecordDataset(train_tfrecord_path, compression_type='GZIP')
dataset = dataset.map(_parse_image_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
for _ in tqdm(dataset, position=0, leave=True):
pass
```
- Loading plain jpg files
```
dataset = tf.data.Dataset.from_tensor_slices((train_tfrecord_array, train_landmark_ids))
dataset = dataset.map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
for _ in tqdm(dataset, position=0, leave=True):
pass
```
- Surprisingly, the results show that using TFRecords and loading plain images take almost the same time.
- On Colab, the image-decompression overhead mentioned earlier can apparently be fully absorbed by tf.data's extra features.
- Also, reading TFRecords takes about the same time as a plain pass over the files, presumably because the TFRecord reading path has little overhead beyond file I/O itself.
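The elapsed times quoted above were measured by iterating once over each dataset; a minimal timing harness for that (our assumption of how the numbers were taken, using the standard library) would look like:

```python
import time

def time_full_pass(iterable):
    """Time one full pass over an iterable (e.g. a tf.data dataset)."""
    start = time.perf_counter()
    for _ in iterable:
        pass
    return time.perf_counter() - start
```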
| github_jupyter |
# Batch Normalization – Solutions
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is **not** a good network for classifying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the `tf.layers` package. The second is the same network, but uses only lower level functions in the `tf.nn` package.
1. [Batch Normalization with `tf.layers.batch_normalization`](#example_1)
2. [Batch Normalization with `tf.nn.batch_normalization`](#example_2)
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook.
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
```
# Batch Normalization using `tf.layers.batch_normalization`<a id="example_1"></a>
This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization)
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
```
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
```
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
```
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
```
**Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
```
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define the loss function and training operation
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs,
labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually, just to make sure batch normalization really worked
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
```
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
# Add batch normalization
To add batch normalization to the layers created by `fully_connected`, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `dense` layer.
3. Used `tf.layers.batch_normalization` to normalize the layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately.
4. Passed the normalized values into a ReLU activation function.
```
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
```
To add batch normalization to the layers created by `conv_layer`, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `conv2d` layer.
3. Used `tf.layers.batch_normalization` to normalize the convolutional layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately.
4. Passed the normalized values into a ReLU activation function.
If you compare this function to `fully_connected`, you'll see that – when using `tf.layers` – there really isn't any difference between normalizing a fully connected layer and a convolutional layer. However, if you look at the second example in this notebook, where we restrict ourselves to the `tf.nn` package, you'll see a small difference.
```
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
```
Batch normalization is still a new enough idea that researchers are still discovering how best to use it. In general, people seem to agree to remove the layer's bias (because the batch normalization already has terms for scaling and shifting) and add batch normalization _before_ the layer's non-linear activation function. However, for some networks it will work well in other ways, too.
Just to demonstrate this point, the following three versions of `conv_layer` show other ways to implement batch normalization. If you try running with any of these versions of the function, they should all still work fine (although some versions may still work better than others).
**Alternate solution that uses bias in the convolutional layer but still adds batch normalization before the ReLU activation function.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
```
**Alternate solution that uses a bias and ReLU activation function _before_ batch normalization.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
```
**Alternate solution that uses a ReLU activation function _before_ normalization, but no bias.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=False, activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
```
To modify `train`, we did the following:
1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training.
2. Passed `is_training` to the `conv_layer` and `fully_connected` functions.
3. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`.
4. Moved the creation of `train_opt` inside a `with tf.control_dependencies...` statement. This is necessary to get the normalization layers created with `tf.layers.batch_normalization` to update their population statistics, which we need when performing inference.
```
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder to indicate whether or not we're training the model
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Tell TensorFlow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually, just to make sure batch normalization really worked
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
```
With batch normalization, we now get excellent performance. In fact, validation accuracy is almost 94% after only 500 batches. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
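To see why the single-sample check matters, here is a minimal NumPy sketch (values are illustrative, with gamma=1 and beta=0 for simplicity) of what goes wrong when inference uses batch statistics instead of population statistics: with a batch of one, the batch variance is zero and every feature collapses to zero.

```python
import numpy as np

def batch_norm(x, mean, var, eps=1e-3):
    # Normalize x with the supplied statistics (gamma=1, beta=0 for simplicity)
    return (x - mean) / np.sqrt(var + eps)

# Population statistics gathered over many training batches (illustrative values)
pop_mean, pop_var = 0.5, 4.0

# A single test sample with three features
x = np.array([[2.0, -1.0, 3.5]])

# Wrong: use the batch's own statistics with a batch of one
batch_mean = x.mean(axis=0)   # equals x itself
batch_var = x.var(axis=0)     # all zeros for a single sample
wrong = batch_norm(x, batch_mean, batch_var)

# Right: use the population statistics estimated during training
right = batch_norm(x, pop_mean, pop_var)

print(wrong)  # every feature collapses to 0 -> the sample carries no information
print(right)  # features keep their relative values
```

This is exactly the failure mode the `Accuracy on 100 samples` check is designed to catch.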
# Batch Normalization using `tf.nn.batch_normalization`<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization).
This implementation of `fully_connected` is much more involved than the one that uses `tf.layers`. However, if you went through the `Batch_Normalization_Lesson` notebook, things should look pretty familiar. To add batch normalization, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `dense` layer.
3. Added `gamma`, `beta`, `pop_mean`, and `pop_variance` variables.
4. Used `tf.cond` to handle training and inference differently.
5. When training, we use `tf.nn.moments` to calculate the batch mean and variance. Then we update the population statistics and use `tf.nn.batch_normalization` to normalize the layer's output using the batch statistics. Notice the `with tf.control_dependencies...` statement - this is required to force TensorFlow to run the operations that update the population statistics.
6. During inference (i.e. when not training), we use `tf.nn.batch_normalization` to normalize the layer's output using the population statistics we calculated during training.
7. Passed the normalized values into a ReLU activation function.
If any of this code is unclear, it is almost identical to what we showed in the `fully_connected` function in the `Batch_Normalization_Lesson` notebook. Please see that notebook for extensive comments.
```
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0])
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
```
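The moving-average update in `batch_norm_training` can be illustrated outside of TensorFlow. This NumPy sketch (with made-up data) shows the population statistics converging to the true mean and variance of a stream of batch outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
decay = 0.99                 # same decay as in the function above
pop_mean, pop_var = 0.0, 1.0 # initial values, matching the tf.Variable initializers

# Simulate layer outputs whose true statistics are mean=3, variance=4
for _ in range(2000):
    batch = rng.normal(3.0, 2.0, size=64)
    # pop_stat = pop_stat * decay + batch_stat * (1 - decay)
    pop_mean = pop_mean * decay + batch.mean() * (1 - decay)
    pop_var = pop_var * decay + batch.var() * (1 - decay)

print(pop_mean, pop_var)  # should approach 3 and 4 as the average accumulates
```

Because each update only mixes in 1% of the current batch, the running estimates are stable against batch-to-batch noise but still forget the (arbitrary) initial values after a few hundred batches.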
The changes we made to `conv_layer` are _almost_ exactly the same as the ones we made to `fully_connected`. However, there is an important difference. Convolutional layers have multiple feature maps, and each feature map uses shared weights. So we need to make sure we calculate our batch and population statistics **per feature map** instead of per node in the layer.
To accomplish this, we do **the same things** that we did in `fully_connected`, with two exceptions:
1. The sizes of `gamma`, `beta`, `pop_mean` and `pop_variance` are set to the number of feature maps (output channels) instead of the number of output nodes.
2. We change the parameters we pass to `tf.nn.moments` to make sure it calculates the mean and variance for the correct dimensions.
```
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
# Important to use the correct dimensions here to ensure the mean and variance are calculated
# per feature map instead of for the entire layer
batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
```
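The per-feature-map point above can be checked with a quick NumPy sketch (shapes are illustrative, using the same NHWC layout as the TensorFlow tensors):

```python
import numpy as np

# A batch of 8 images, 6x6 pixels, 4 feature maps (NHWC)
rng = np.random.default_rng(1)
layer = rng.normal(size=(8, 6, 6, 4))

# Per-node statistics (what a fully connected layer uses): one value per unit
fc_mean = layer.reshape(8, -1).mean(axis=0)
print(fc_mean.shape)   # (144,) -> 6*6*4 separate statistics

# Per-feature-map statistics (what the conv layer needs): average over
# batch, height and width (axes 0, 1, 2), leaving one value per channel
conv_mean = layer.mean(axis=(0, 1, 2))
conv_var = layer.var(axis=(0, 1, 2))
print(conv_mean.shape, conv_var.shape)   # (4,) (4,)
```

This is why `tf.nn.moments(layer, [0,1,2])` is called with three axes in `conv_layer` but only `[0]` in `fully_connected`.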
To modify `train`, we did the following:
1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training.
2. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`.
3. We did **not** need to add the `with tf.control_dependencies...` statement that we added in the network that used `tf.layers.batch_normalization` because we handled updating the population statistics ourselves in `conv_layer` and `fully_connected`.
```
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder to indicate whether or not we're training the model
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually, just to make sure batch normalization really worked
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
```
Once again, the model with batch normalization quickly reaches a high accuracy. But in our run, notice that it doesn't seem to learn anything for the first 250 batches before the accuracy starts to climb. That just goes to show: even with batch normalization, it's important to give your network a bit of time to learn before you decide it isn't working.
## In situ data and trajectories incl. Bepi Colombo, PSP, Solar Orbiter
https://github.com/cmoestl/heliocats
Author: C. Moestl, IWF Graz, Austria
twitter @chrisoutofspace, https://github.com/cmoestl
last update: 2021 August 24
needs python 3.7 with the conda helio environment (see README.md)
uses heliopy for generating spacecraft positions; for data source files see README.md
---
MIT LICENSE
Copyright 2020-2021, Christian Moestl
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be included in all copies
or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
#change path for ffmpeg for animation production if needed
ffmpeg_path=''
import os
import datetime
from datetime import datetime, timedelta
from sunpy.time import parse_time
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.cm as cmap
from scipy.signal import medfilt
import numpy as np
import pdb
import pickle
import seaborn as sns
import sys
import heliopy.data.spice as spicedata
import heliopy.spice as spice
import astropy
import importlib
import time
import numba
from numba import jit
import multiprocessing
import urllib
import copy
from astropy import constants as const
import warnings
warnings.filterwarnings('ignore')
from heliocats import data as hd
importlib.reload(hd) #reload again while debugging
from heliocats import plot as hp
importlib.reload(hp) #reload again while debugging
#where the in situ data files are located is read
#from config.py
import config
importlib.reload(config)
from config import data_path
```
## load HIGeoCAT
```
#load HIGeoCAT
from heliocats import cats as hc
importlib.reload(hc) #reload again while debugging
#https://www.helcats-fp7.eu/
#LOAD HELCATS HIGeoCAT
url_higeocat='https://www.helcats-fp7.eu/catalogues/data/HCME_WP3_V06.vot'
try: urllib.request.urlretrieve(url_higeocat,'data/HCME_WP3_V06.vot')
except urllib.error.URLError as e:
print('higeocat not loaded')
higeocat=hc.load_higeocat_vot('data/HCME_WP3_V06.vot')
higeocat_time=parse_time(higeocat['Date']).datetime
higeocat_t0=parse_time(higeocat['SSE Launch']).datetime #backprojected launch time
sse_speed=higeocat['SSE Speed']
sse_lon=higeocat['SSE HEEQ Long']
sse_lat=higeocat['SSE HEEQ Lat']
higeocat_name=np.array(higeocat['SC'].astype(str))
print('done')
```
## generate HIGeoCAT kinematics
```
print('generate kinematics for each SSEF30 CME')
generate_hi_kin=False
if generate_hi_kin:
t0=higeocat_t0
kindays=60
#lists for all times, r, longitude, latitude
all_time=[]
all_r=[]
all_lat=[]
all_lon=[]
all_name=[]
#go through all HI CMEs
for i in np.arange(len(higeocat)):
#for i in np.arange(100):
#times for each event kinematic
time1=[]
tstart1=copy.deepcopy(t0[i])
tend1=tstart1+timedelta(days=kindays)
#make 30 min datetimes
while tstart1 < tend1:
time1.append(tstart1)
tstart1 += timedelta(minutes=30)
#make kinematics
timestep=np.zeros(kindays*24*2)
cme_r=np.zeros(kindays*24*2)
cme_lon=np.zeros(kindays*24*2)
cme_lat=np.zeros(kindays*24*2)
cme_name=np.chararray(kindays*24*2)
for j in np.arange(0,len(cme_r)-1,1):
cme_r[j]=sse_speed[i]*timestep[j]/(const.au.value*1e-3) #km to AU
cme_lon[j]=sse_lon[i]
cme_lat[j]=sse_lat[i]
timestep[j+1]=timestep[j]+30*60 #seconds
cme_name[j]=higeocat_name[i]
#### linear interpolate to 30 min resolution
#find next full hour after t0
format_str = '%Y-%m-%d %H'
t0r = datetime.strptime(datetime.strftime(t0[i], format_str), format_str) +timedelta(hours=1)
time2=[]
tstart2=copy.deepcopy(t0r)
tend2=tstart2+timedelta(days=kindays)
#make 30 min datetimes
while tstart2 < tend2:
time2.append(tstart2)
tstart2 += timedelta(minutes=30)
time2_num=parse_time(time2).plot_date
time1_num=parse_time(time1).plot_date
#linear interpolation to time_mat times
cme_r = np.interp(time2_num, time1_num,cme_r )
cme_lat = np.interp(time2_num, time1_num,cme_lat )
cme_lon = np.interp(time2_num, time1_num,cme_lon )
#cut at 5 AU
cutoff=np.where(cme_r<5)[0]
#write to all
#print(cutoff[0],cutoff[-1])
all_time.extend(time2[cutoff[0]:cutoff[-2]])
all_r.extend(cme_r[cutoff[0]:cutoff[-2]])
all_lat.extend(cme_lat[cutoff[0]:cutoff[-2]])
all_lon.extend(cme_lon[cutoff[0]:cutoff[-2]])
all_name.extend(cme_name[cutoff[0]:cutoff[-2]])
plt.figure(1)
plt.plot(all_time,all_r)
plt.figure(2)
plt.plot(all_time,all_lat,'ok')
plt.figure(3)
plt.plot(all_time,all_lon,'ok')
################### sort all kinematics by time
all_time_num=mdates.date2num(all_time)
all_r=np.array(all_r)
all_lat=np.array(all_lat)
all_lon=np.array(all_lon)
all_name=np.array(all_name)
#get indices for sorting for time
sortind=np.argsort(all_time_num,axis=0)
#cme_time_sort=mdates.num2date(all_time_num[sortind])
cme_time_sort_num=all_time_num[sortind]
cme_r_sort=all_r[sortind]
cme_lat_sort=all_lat[sortind]
cme_lon_sort=all_lon[sortind]
cme_name_sort=all_name[sortind].astype(str)
#plt.plot(cme_time_sort,cme_r_sort)
#plt.plot(cme_time_sort,cme_r_sort)
plt.figure(4)
plt.plot(all_time,all_lon,'.k')
plt.plot(cme_time_sort_num,cme_lon_sort,'.b')
pickle.dump([cme_time_sort_num,cme_r_sort,cme_lat_sort,cme_lon_sort,cme_name_sort], open('data/higeocat_kinematics.p', "wb"))
print('load HIGEOCAT kinematics')
[hc_time_num,hc_r,hc_lat,hc_lon,hc_name]=pickle.load(open('data/higeocat_kinematics.p', "rb"))
print('done')
```
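The core of the kinematics loop above is a constant-speed extrapolation of each CME's SSE fit speed, converted from km to AU and cut at 5 AU. A minimal NumPy sketch of that calculation (the speed is an illustrative value, and `AU_KM` stands in for `const.au.value*1e-3`):

```python
import numpy as np

AU_KM = 1.495978707e8   # kilometers per astronomical unit
sse_speed = 500.0       # km/s, an illustrative SSE fit speed
kindays = 60

# 30-minute time steps in seconds, as in the loop above
timestep = np.arange(kindays * 24 * 2) * 30 * 60.0

# Constant-speed radial distance in AU, cut at 5 AU
cme_r = sse_speed * timestep / AU_KM
cme_r = cme_r[cme_r < 5]

print(cme_r[0], cme_r[-1])  # starts at the Sun, ends just inside 5 AU
```

At 500 km/s a CME crosses 5 AU well within the 60-day window, so the cutoff, not `kindays`, determines the length of the kinematic track.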
### define functions
```
def make_positions():
############### PSP
starttime =datetime(2018, 8,13)
endtime = datetime(2025, 8, 31)
psp_time = []
while starttime < endtime:
psp_time.append(starttime)
starttime += timedelta(days=res_in_days)
psp_time_num=mdates.date2num(psp_time)
spice.furnish(spicedata.get_kernel('psp_pred'))
psp=spice.Trajectory('SPP')
psp.generate_positions(psp_time,'Sun',frame)
print('PSP pos')
psp.change_units(astropy.units.AU)
[psp_r, psp_lat, psp_lon]=hd.cart2sphere(psp.x,psp.y,psp.z)
print('PSP conv')
############### BepiColombo
starttime =datetime(2018, 10, 21)
endtime = datetime(2025, 11, 2)
bepi_time = []
while starttime < endtime:
bepi_time.append(starttime)
starttime += timedelta(days=res_in_days)
bepi_time_num=mdates.date2num(bepi_time)
spice.furnish(spicedata.get_kernel('bepi_pred'))
bepi=spice.Trajectory('BEPICOLOMBO MPO') # or BEPICOLOMBO MMO
bepi.generate_positions(bepi_time,'Sun',frame)
bepi.change_units(astropy.units.AU)
[bepi_r, bepi_lat, bepi_lon]=hd.cart2sphere(bepi.x,bepi.y,bepi.z)
print('Bepi')
############### Solar Orbiter
starttime = datetime(2020, 3, 1)
endtime = datetime(2029, 12, 31)
solo_time = []
while starttime < endtime:
solo_time.append(starttime)
starttime += timedelta(days=res_in_days)
solo_time_num=mdates.date2num(solo_time)
spice.furnish(spicedata.get_kernel('solo_2020'))
solo=spice.Trajectory('Solar Orbiter')
solo.generate_positions(solo_time, 'Sun',frame)
solo.change_units(astropy.units.AU)
[solo_r, solo_lat, solo_lon]=hd.cart2sphere(solo.x,solo.y,solo.z)
print('Solo')
########### plots
plt.figure(1, figsize=(12,9))
plt.plot_date(psp_time,psp_r,'-', label='R')
plt.plot_date(psp_time,psp_lat,'-',label='lat')
plt.plot_date(psp_time,psp_lon,'-',label='lon')
plt.ylabel('AU / RAD')
plt.legend()
plt.figure(2, figsize=(12,9))
plt.plot_date(bepi_time,bepi_r,'-', label='R')
plt.plot_date(bepi_time,bepi_lat,'-',label='lat')
plt.plot_date(bepi_time,bepi_lon,'-',label='lon')
plt.title('Bepi Colombo position '+frame)
plt.ylabel('AU / RAD')
plt.legend()
plt.figure(3, figsize=(12,9))
plt.plot_date(solo_time,solo_r,'-', label='R')
plt.plot_date(solo_time,solo_lat,'-',label='lat')
plt.plot_date(solo_time,solo_lon,'-',label='lon')
plt.title('Solar Orbiter position '+frame)
plt.ylabel('AU / RAD')
plt.legend()
######## R with all three
plt.figure(4, figsize=(16,10))
plt.plot_date(psp_time,psp.r,'-',label='PSP')
plt.plot_date(bepi_time,bepi.r,'-',label='Bepi Colombo')
plt.plot_date(solo_time,solo.r,'-',label='Solar Orbiter')
plt.legend()
plt.title('Heliocentric distance of heliospheric observatories')
plt.ylabel('AU')
plt.savefig(positions_plot_directory+'/bepi_psp_solo_R.png')
##### Longitude all three
plt.figure(5, figsize=(16,10))
plt.plot_date(psp_time,psp_lon*180/np.pi,'-',label='PSP')
plt.plot_date(bepi_time,bepi_lon*180/np.pi,'-',label='Bepi Colombo')
plt.plot_date(solo_time,solo_lon*180/np.pi,'-',label='Solar Orbiter')
plt.legend()
plt.title(frame+' longitude')
plt.ylabel('DEG')
plt.savefig(positions_plot_directory+'/bepi_psp_solo_longitude_'+frame+'.png')
############# Earth, Mercury, Venus, STA
#see https://docs.heliopy.org/en/stable/data/spice.html
planet_kernel=spicedata.get_kernel('planet_trajectories')
starttime =datetime(2018, 1, 1)
endtime = datetime(2029, 12, 31)
earth_time = []
while starttime < endtime:
earth_time.append(starttime)
starttime += timedelta(days=res_in_days)
earth_time_num=mdates.date2num(earth_time)
earth=spice.Trajectory('399') #399 for Earth, not barycenter (because of moon)
earth.generate_positions(earth_time,'Sun',frame)
earth.change_units(astropy.units.AU)
[earth_r, earth_lat, earth_lon]=hd.cart2sphere(earth.x,earth.y,earth.z)
print('Earth')
################ mercury
mercury_time_num=earth_time_num
mercury=spice.Trajectory('1') #barycenter
mercury.generate_positions(earth_time,'Sun',frame)
mercury.change_units(astropy.units.AU)
[mercury_r, mercury_lat, mercury_lon]=hd.cart2sphere(mercury.x,mercury.y,mercury.z)
print('mercury')
################# venus
venus_time_num=earth_time_num
venus=spice.Trajectory('2')
venus.generate_positions(earth_time,'Sun',frame)
venus.change_units(astropy.units.AU)
[venus_r, venus_lat, venus_lon]=hd.cart2sphere(venus.x,venus.y,venus.z)
print('venus')
############### Mars
mars_time_num=earth_time_num
mars=spice.Trajectory('4')
mars.generate_positions(earth_time,'Sun',frame)
mars.change_units(astropy.units.AU)
[mars_r, mars_lat, mars_lon]=hd.cart2sphere(mars.x,mars.y,mars.z)
print('mars')
#############stereo-A
sta_time_num=earth_time_num
spice.furnish(spicedata.get_kernel('stereo_a_pred'))
sta=spice.Trajectory('-234')
sta.generate_positions(earth_time,'Sun',frame)
sta.change_units(astropy.units.AU)
[sta_r, sta_lat, sta_lon]=hd.cart2sphere(sta.x,sta.y,sta.z)
print('STEREO-A')
#save positions
if high_res_mode:
pickle.dump([psp_time,psp_time_num,psp_r,psp_lon,psp_lat,bepi_time,bepi_time_num,bepi_r,bepi_lon,bepi_lat,solo_time,solo_time_num,solo_r,solo_lon,solo_lat], open( 'positions_plots/psp_solo_bepi_'+frame+'_1min.p', "wb" ) )
else:
psp=np.rec.array([psp_time_num,psp_r,psp_lon,psp_lat, psp.x, psp.y,psp.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
bepi=np.rec.array([bepi_time_num,bepi_r,bepi_lon,bepi_lat,bepi.x, bepi.y,bepi.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
solo=np.rec.array([solo_time_num,solo_r,solo_lon,solo_lat,solo.x, solo.y,solo.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
sta=np.rec.array([sta_time_num,sta_r,sta_lon,sta_lat,sta.x, sta.y,sta.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
earth=np.rec.array([earth_time_num,earth_r,earth_lon,earth_lat, earth.x, earth.y,earth.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
venus=np.rec.array([venus_time_num,venus_r,venus_lon,venus_lat, venus.x, venus.y,venus.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
mars=np.rec.array([mars_time_num,mars_r,mars_lon,mars_lat, mars.x, mars.y,mars.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
mercury=np.rec.array([mercury_time_num,mercury_r,mercury_lon,mercury_lat,mercury.x, mercury.y,mercury.z],dtype=[('time','f8'),('r','f8'),('lon','f8'),('lat','f8'),('x','f8'),('y','f8'),('z','f8')])
pickle.dump([psp, bepi, solo, sta, earth, venus, mars, mercury,frame], open( 'data/positions_psp_solo_bepi_sta_planets_'+frame+'_1hour.p', "wb" ) )
#load with [psp, bepi, solo, sta, earth, venus, mars, mercury,frame]=pickle.load( open( 'positions_psp_solo_bepi_sta_planets_HCI_6hours_2018_2025.p', "rb" ) )
end=time.time()
print( 'generate position took time in seconds:', round((end-start),1) )
def make_frame(k):
'''
loop each frame in multiprocessing
'''
fig=plt.figure(1, figsize=(19.2,10.8), dpi=100) #full hd
#fig=plt.figure(1, figsize=(19.2*2,10.8*2), dpi=100) #4k
ax = plt.subplot2grid((7,2), (0, 0), rowspan=7, projection='polar')
backcolor='black'
psp_color='black'
bepi_color='blue'
solo_color='coral'
frame_time_str=str(mdates.num2date(frame_time_num+k*res_in_days))
#print( 'current frame_time_num', frame_time_str, ' ',k)
#these have their own times
dct=frame_time_num+k*res_in_days-psp.time
psp_timeind=np.argmin(abs(dct))
dct=frame_time_num+k*res_in_days-bepi.time
bepi_timeind=np.argmin(abs(dct))
dct=frame_time_num+k*res_in_days-solo.time
solo_timeind=np.argmin(abs(dct))
#all same times
dct=frame_time_num+k*res_in_days-earth.time
earth_timeind=np.argmin(abs(dct))
#plot all positions including text R lon lat for some
#white background
ax.scatter(venus.lon[earth_timeind], venus.r[earth_timeind]*np.cos(venus.lat[earth_timeind]), s=symsize_planet, c='orange', alpha=1,lw=0,zorder=3)
ax.scatter(mercury.lon[earth_timeind], mercury.r[earth_timeind]*np.cos(mercury.lat[earth_timeind]), s=symsize_planet, c='dimgrey', alpha=1,lw=0,zorder=3)
ax.scatter(earth.lon[earth_timeind], earth.r[earth_timeind]*np.cos(earth.lat[earth_timeind]), s=symsize_planet, c='mediumseagreen', alpha=1,lw=0,zorder=3)
ax.scatter(sta.lon[earth_timeind], sta.r[earth_timeind]*np.cos(sta.lat[earth_timeind]), s=symsize_spacecraft, c='red', marker='s', alpha=1,lw=0,zorder=3)
ax.scatter(mars.lon[earth_timeind], mars.r[earth_timeind]*np.cos(mars.lat[earth_timeind]), s=symsize_planet, c='orangered', alpha=1,lw=0,zorder=3)
#plot stereoa fov hi1/2
hp.plot_stereo_hi_fov(sta,frame_time_num, earth_timeind, ax,'A')
#positions text
f10=plt.figtext(0.01,0.93,' R lon lat', fontsize=fsize+2, ha='left',color=backcolor)
if frame=='HEEQ': earth_text='Earth: '+str(f'{earth.r[earth_timeind]:6.2f}')+str(f'{0.0:8.1f}')+str(f'{np.rad2deg(earth.lat[earth_timeind]):8.1f}')
else: earth_text='Earth: '+str(f'{earth.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(earth.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(earth.lat[earth_timeind]):8.1f}')
mars_text='Mars: '+str(f'{mars.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(mars.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(mars.lat[earth_timeind]):8.1f}')
sta_text='STA: '+str(f'{sta.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(sta.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(sta.lat[earth_timeind]):8.1f}')
#position and text
if psp_timeind > 0:
#plot trajectory
ax.scatter(psp.lon[psp_timeind], psp.r[psp_timeind]*np.cos(psp.lat[psp_timeind]), s=symsize_spacecraft, c=psp_color, marker='s', alpha=1,lw=0,zorder=3)
#plot position as text
psp_text='PSP: '+str(f'{psp.r[psp_timeind]:6.2f}')+str(f'{np.rad2deg(psp.lon[psp_timeind]):8.1f}')+str(f'{np.rad2deg(psp.lat[psp_timeind]):8.1f}')
f5=plt.figtext(0.01,0.78,psp_text, fontsize=fsize, ha='left',color=psp_color)
if plot_orbit:
fadestart=psp_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(psp.lon[fadestart:psp_timeind+fadeind], psp.r[fadestart:psp_timeind+fadeind]*np.cos(psp.lat[fadestart:psp_timeind+fadeind]), c=psp_color, alpha=0.6,lw=1,zorder=3)
if bepi_timeind > 0:
ax.scatter(bepi.lon[bepi_timeind], bepi.r[bepi_timeind]*np.cos(bepi.lat[bepi_timeind]), s=symsize_spacecraft, c=bepi_color, marker='s', alpha=1,lw=0,zorder=3)
bepi_text='Bepi: '+str(f'{bepi.r[bepi_timeind]:6.2f}')+str(f'{np.rad2deg(bepi.lon[bepi_timeind]):8.1f}')+str(f'{np.rad2deg(bepi.lat[bepi_timeind]):8.1f}')
f6=plt.figtext(0.01,0.74,bepi_text, fontsize=fsize, ha='left',color=bepi_color)
if plot_orbit:
fadestart=bepi_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(bepi.lon[fadestart:bepi_timeind+fadeind], bepi.r[fadestart:bepi_timeind+fadeind]*np.cos(bepi.lat[fadestart:bepi_timeind+fadeind]), c=bepi_color, alpha=0.6,lw=1,zorder=3)
if solo_timeind > 0:
ax.scatter(solo.lon[solo_timeind], solo.r[solo_timeind]*np.cos(solo.lat[solo_timeind]), s=symsize_spacecraft, c=solo_color, marker='s', alpha=1,lw=0,zorder=3)
solo_text='SolO: '+str(f'{solo.r[solo_timeind]:6.2f}')+str(f'{np.rad2deg(solo.lon[solo_timeind]):8.1f}')+str(f'{np.rad2deg(solo.lat[solo_timeind]):8.1f}')
f7=plt.figtext(0.01,0.7,solo_text, fontsize=fsize, ha='left',color=solo_color)
if plot_orbit:
fadestart=solo_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(solo.lon[fadestart:solo_timeind+fadeind], solo.r[fadestart:solo_timeind+fadeind]*np.cos(solo.lat[fadestart:solo_timeind+fadeind]), c=solo_color, alpha=0.6,lw=1,zorder=3)
f10=plt.figtext(0.01,0.9,earth_text, fontsize=fsize, ha='left',color='mediumseagreen')
f9=plt.figtext(0.01,0.86,mars_text, fontsize=fsize, ha='left',color='orangered')
f8=plt.figtext(0.01,0.82,sta_text, fontsize=fsize, ha='left',color='red')
######################## 1 plot all active CME circles
plot_hi_geo=True
if plot_hi_geo:
lamda=30 #SSEF circle half width in degrees ('lamda' spelling avoids the Python keyword)
#check for active CME indices from HIGeoCAT (with the lists produced above in this notebook)
#check where time is identical to frame time
cmeind=np.where(hc_time_num == frame_time_num+k*res_in_days)
#print(cmeind)
#plot all active CME circles
#if np.size(cmeind) >0:
for p in range(0,np.size(cmeind)):
#print p, h.all_apex_long[cmeind[0][p]], h.all_apex_r[cmeind[0][p]]
#central direction vector: unit vector toward the CME apex, scaled by apex distance
cme_dir=np.array([np.cos(hc_lon[cmeind[0][p]]*np.pi/180),np.sin(hc_lon[cmeind[0][p]]*np.pi/180)])*hc_r[cmeind[0][p]]
#points on circle, correct for longitude
circ_ang = ((np.arange(111)*2-20)*np.pi/180)-(hc_lon[cmeind[0][p]]*np.pi/180)
#these equations are from Moestl and Davies (2013); 'cme_dir' avoids shadowing the builtin dir()
xc = cme_dir[0]/(1+np.sin(lamda*np.pi/180)) + (hc_r[cmeind[0][p]]*np.sin(lamda*np.pi/180)/(1+np.sin(lamda*np.pi/180)))*np.sin(circ_ang)
yc = cme_dir[1]/(1+np.sin(lamda*np.pi/180)) + (hc_r[cmeind[0][p]]*np.sin(lamda*np.pi/180)/(1+np.sin(lamda*np.pi/180)))*np.cos(circ_ang)
#now convert to polar coordinates
rcirc=np.sqrt(xc**2+yc**2)
longcirc=np.arctan2(yc,xc)
#plot in correct color
if hc_name[cmeind[0][p]] == 'A':
#make alpha dependent on distance to solar equatorial plane - maximum latitude is -40/+40 -
#so to make also the -/+40 latitude CME visible, divide by 50 so alpha > 0 for these events
ax.plot(longcirc,rcirc, c='red', alpha=1-abs(hc_lat[cmeind[0][p]]/50), lw=1.5)
if hc_name[cmeind[0][p]] == 'B':
ax.plot(longcirc,rcirc, c='royalblue', alpha=1-abs(hc_lat[cmeind[0][p]]/50), lw=1.5)
#parker spiral
if plot_parker:
for q in np.arange(0,12):
omega=2*np.pi/(sun_rot*60*60*24) #solar rotation rate in rad/s (sun_rot in days)
v=400/AUkm #solar wind speed of 400 km/s, converted to AU/s
r0=695000/AUkm #solar radius in AU
r=v/omega*theta+r0*7 #Archimedean spiral, starting 7 solar radii from Sun center
if not black:
ax.plot(-theta+np.deg2rad(0+(360/24.47)*res_in_days*k+360/12*q), r, alpha=0.4, lw=0.5,color='grey',zorder=2)
if black:
ax.plot(-theta+np.deg2rad(0+(360/24.47)*res_in_days*k+360/12*q), r, alpha=0.7, lw=0.7,color='grey',zorder=2)
#set axes and grid
ax.set_theta_zero_location('E')
#plt.thetagrids(range(0,360,45),(u'0\u00b0 '+frame+' longitude',u'45\u00b0',u'90\u00b0',u'135\u00b0',u'+/- 180\u00b0',u'- 135\u00b0',u'- 90\u00b0',u'- 45\u00b0'), ha='right', fmt='%d',fontsize=fsize-1,color=backcolor, alpha=0.9)
plt.thetagrids(range(0,360,45),(u'0\u00b0',u'45\u00b0',u'90\u00b0',u'135\u00b0',u'+/- 180\u00b0',u'- 135\u00b0',u'- 90\u00b0',u'- 45\u00b0'), ha='center', fmt='%d',fontsize=fsize-1,color=backcolor, alpha=0.9,zorder=4)
#plt.rgrids((0.10,0.39,0.72,1.00,1.52),('0.10','0.39','0.72','1.0','1.52 AU'),angle=125, fontsize=fsize,alpha=0.9, color=backcolor)
plt.rgrids((0.1,0.3,0.5,0.7,1.0),('0.10','0.3','0.5','0.7','1.0 AU'),angle=125, fontsize=fsize-3,alpha=0.5, color=backcolor)
#ax.set_ylim(0, 1.75) #with Mars
ax.set_ylim(0, 1.2)
#Sun
ax.scatter(0,0,s=100,c='yellow',alpha=1, edgecolors='black', linewidth=0.3)
#------------------------------------------------ IN SITU DATA ------------------------------------------------------
time_now=frame_time_num+k*res_in_days
#cut data for plot window so faster
windex1=np.where(w_time_num > time_now-days_window)[0][0]
windex2=np.where(w_time_num > time_now+days_window)[0][0]
w=w1[windex1:windex2]
sindex1=np.where(s_time_num > time_now-days_window)[0][0]
sindex2=np.where(s_time_num > time_now+days_window)[0][0]
s=s1[sindex1:sindex2]
#is data available from new missions?
if p_time_num[-1] > time_now+days_window:
pindex1=np.where(p_time_num > time_now-days_window)[0][0]
pindex2=np.where(p_time_num > time_now+days_window)[0][0]
#pindex2=np.size(p1)-1
p=p1[pindex1:pindex2]
elif (p_time_num[-1] < time_now+days_window) and (p_time_num[-1] > time_now-days_window):
pindex1=np.where(p_time_num > time_now-days_window)[0][0]
pindex2=np.size(p1)-1
p=p1[pindex1:pindex2]
else: p=[]
if o_time_num[-1] > time_now+days_window:
oindex1=np.where(o_time_num > time_now-days_window)[0][0]
oindex2=np.where(o_time_num > time_now+days_window)[0][0]
#use last index oindex2=np.size(o1)-1
o=o1[oindex1:oindex2]
elif (o_time_num[-1] < time_now+days_window) and (o_time_num[-1] > time_now-days_window):
oindex1=np.where(o_time_num > time_now-days_window)[0][0]
oindex2=np.size(o1)-1
o=o1[oindex1:oindex2]
else: o=[]
if b_time_num[-1] > time_now+days_window:
bindex1=np.where(b_time_num > time_now-days_window)[0][0]
bindex2=np.where(b_time_num > time_now+days_window)[0][0]
#bindex2=np.size(b1)-1
b=b1[bindex1:bindex2]
else: b=[]
#---------------- Wind mag
ax4 = plt.subplot2grid((7,2), (0, 1))
#plt.plot_date(w_tm,wbx,'-r',label='BR',linewidth=0.5)
#plt.plot_date(w_tm,wby,'-g',label='BT',linewidth=0.5)
#plt.plot_date(w_tm,wbz,'-b',label='BN',linewidth=0.5)
#plt.plot_date(w_tm,wbt,'-k',label='Btotal',lw=0.5)
plt.plot_date(w.time,w.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(w.time,w.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(w.time,w.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(w.time,w.bt,'-k',label='Btotal',lw=0.5)
ax4.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax4.set_ylabel('B [nT] HEEQ',fontsize=fsize-1)
ax4.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax4.set_xlim(time_now-days_window,time_now+days_window)
ax4.set_ylim(np.nanmin(-w.bt)-5, np.nanmax(w.bt)+5)
#plt.ylim((-18, 18))
plt.yticks(fontsize=fsize-1)
ax4.set_xticklabels([])
#---------------- STEREO-A mag
ax6 = plt.subplot2grid((7,2), (1, 1))
#plt.plot_date(s_tm,sbx,'-r',label='BR',linewidth=0.5)
#plt.plot_date(s_tm,sby,'-g',label='BT',linewidth=0.5)
#plt.plot_date(s_tm,sbz,'-b',label='BN',linewidth=0.5)
#plt.plot_date(s_tm,sbt,'-k',label='Btotal')
plt.plot_date(s.time,s.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(s.time,s.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(s.time,s.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(s.time,s.bt,'-k',label='Btotal',linewidth=0.5)
ax6.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax6.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
#ax6.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax6.set_xlim(time_now-days_window,time_now+days_window)
ax6.set_xticklabels([])
ax6.set_ylim(np.nanmin(-s.bt)-5, np.nanmax(s.bt)+5)
plt.yticks(fontsize=fsize-1)
plt.tick_params(axis='x', labelbottom=False)
#plt.ylim((-18, 18))
#---------------- STEREO, Wind speed
ax5 = plt.subplot2grid((7,2), (2, 1))
plt.plot_date(w.time,w.vt,'-g',label='Wind',linewidth=0.7)
plt.plot_date(s.time,s.vt,'-r',label='STEREO-A',linewidth=0.7)
#ax5.legend(loc=1, fontsize=10)
ax5.plot_date([time_now,time_now], [0,900],'-k', lw=0.5, alpha=0.8)
ax5.set_xlim(time_now-days_window,time_now+days_window)
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.ylim((240, 750))
plt.yticks(fontsize=fsize-1)
ax5.set_xticklabels([])
#ax7 = plt.subplot2grid((6,2), (5, 1))
#plt.plot_date(s.time,s.vt,'-k',label='V',linewidth=0.7)
#ax7.plot_date([time_now,time_now], [0,800],'-k', lw=0.5, alpha=0.8)
#ax7.set_xlim(time_now-days_window,time_now+days_window)
#ax7.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
#plt.ylabel('V [km/s]',fontsize=fsize-1)
#plt.tick_params(axis='x', labelbottom='off')
#plt.ylim((240, 810))
#plt.yticks(fontsize=fsize-1)
#plt.xticks(fontsize=fsize)
#---------------------- PSP speed
ax3 = plt.subplot2grid((7,2), (3, 1))
ax3.plot_date([time_now,time_now], [0,1000],'-k', lw=0.5, alpha=0.8)
ax3.set_xticklabels([])
ax3.set_xlim(time_now-days_window,time_now+days_window)
ax3.set_ylim((240, 810))
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
ax3.set_xticklabels([])
if np.size(p)>0:
#plt.plot_date(p_tp,pv,'-k',label='V',linewidth=0.5)
plt.plot_date(p.time,p.vt,'-k',label='V',linewidth=0.7)
ax3.set_xlim(time_now-days_window,time_now+days_window)
ax3.plot_date([time_now,time_now], [0,800],'-k', lw=0.5, alpha=0.8)
ax3.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.ylim((240, 750))
plt.yticks(fontsize=fsize-1)
ax3.set_xticklabels([])
#---------------------- PSP mag
ax2 = plt.subplot2grid((7,2), (4, 1))
ax2.plot_date([time_now,time_now], [-1000,1000],'-k', lw=0.5, alpha=0.8)
ax2.set_xticklabels([])
ax2.set_xlim(time_now-days_window,time_now+days_window)
ax2.set_ylim((-18, 18))
ax2.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
#when there is data, plot:
if np.size(p)>0:
plt.plot_date(p.time,p.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(p.time,p.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(p.time,p.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(p.time,p.bt,'-k',label='Btotal',lw=0.5)
ax2.plot_date([time_now,time_now], [-1000,1000],'-k', lw=0.5, alpha=0.8)
ax2.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax2.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax2.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmin(-p.bt)): ax2.set_ylim(np.nanmin(-p.bt)-5, np.nanmax(p.bt)+5)
ax2.set_xticklabels([])
plt.yticks(fontsize=fsize-1)
#---------------------- SolO mag
ax7 = plt.subplot2grid((7,2), (5, 1))
ax7.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax7.set_xticklabels([])
ax7.set_xlim(time_now-days_window,time_now+days_window)
ax7.set_ylim((-18, 18))
ax7.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
ax7.set_xticklabels([])
#when there is data, plot:
if np.size(o)>0:
plt.plot_date(o.time,o.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(o.time,o.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(o.time,o.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(o.time,o.bt,'-k',label='Btotal',lw=0.5)
ax7.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax7.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax7.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmax(o.bt)):
ax7.set_ylim((np.nanmin(-o.bt)-5, np.nanmax(o.bt)+5))
else:
ax7.set_ylim((-15, 15))
ax7.set_xticklabels([])
plt.yticks(fontsize=fsize-1)
#---------------------- Bepi mag
ax8 = plt.subplot2grid((7,2), (6, 1))
ax8.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax8.set_xlim(time_now-days_window,time_now+days_window)
ax8.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax8.set_ylim((-18, 18))
ax8.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
if np.size(b)>0:
plt.plot_date(b.time,b.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(b.time,b.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(b.time,b.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(b.time,b.bt,'-k',label='Btotal',lw=0.5)
ax8.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax8.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax8.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax8.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmax(b.bt)):
ax8.set_ylim((np.nanmin(-b.bt)-5, np.nanmax(b.bt)+5))
else:
ax8.set_ylim((-15, 15))
#ax8.set_ylim((np.nanmin(-b.bt)-5, np.nanmax(b.bt)+5))
plt.yticks(fontsize=fsize-1)
plt.figtext(0.95,0.82,'Wind', color='mediumseagreen', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.71,'STEREO-A', color='red', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.63,'Wind', color='mediumseagreen', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.58,'STEREO-A', color='red', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.49,'PSP ', color='black', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.38,'PSP ', color='black', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.28,'Solar Orbiter', color='coral', ha='center',fontsize=fsize+5)
plt.figtext(0.95,0.16,'BepiColombo', color='blue', ha='center',fontsize=fsize+5)
############################
#plot text for date extra so it does not move
#year
f1=plt.figtext(0.45,0.93,frame_time_str[0:4], ha='center',color=backcolor,fontsize=fsize+6)
#month
f2=plt.figtext(0.45+0.04,0.93,frame_time_str[5:7], ha='center',color=backcolor,fontsize=fsize+6)
#day
f3=plt.figtext(0.45+0.08,0.93,frame_time_str[8:10], ha='center',color=backcolor,fontsize=fsize+6)
#hours
f4=plt.figtext(0.45+0.12,0.93,frame_time_str[11:13], ha='center',color=backcolor,fontsize=fsize+6)
plt.figtext(0.02, 0.02,'Spacecraft trajectories '+frame+' 2D projection', fontsize=fsize-1, ha='left',color=backcolor)
plt.figtext(0.32,0.02,'――― trajectory from - 60 days to + 60 days', color='black', ha='center',fontsize=fsize-1)
#signature
#BC MPO-MAG (IGEP/IWF/ISAS/IC)
#also for Solar Orbiter (MAG, IC), Parker (FIELDS, UCB), STA (IMPACT/PLASTIC, UNH, UCLA), Wind (MFI, SWE, NASA??) STA-HI (RAL)
plt.figtext(0.85,0.02,'Data sources: BepiColombo: MPO-MAG (IGEP/IWF/ISAS/IC), PSP (FIELDS, UCB), Solar Orbiter (MAG, IC)', fontsize=fsize-2, ha='right',color=backcolor)
#signature
plt.figtext(0.99,0.01/2,'Möstl, Weiss, Bailey, Reiss / Helio4Cast', fontsize=fsize-4, ha='right',color=backcolor)
#save figure
framestr = '%05i' % (k)
filename=outputdirectory+'/pos_anim_'+framestr+'.jpg'
if k==0: print(filename)
plt.savefig(filename,dpi=200,facecolor=fig.get_facecolor(), edgecolor='none')
#plt.clf()
#if close==True: plt.close('all')
plt.close('all')
########################################### loop end
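#The SSEF circle construction inside make_frame (the xc/yc equations following the
#Moestl and Davies (2013) comment) can be isolated as a standalone helper for testing;
#the function name ssef_circle and its degree-based arguments are illustrative
#conveniences, not names from the original code.

```python
import numpy as np

def ssef_circle(apex_r, apex_lon_deg, half_width_deg=30.0):
    """Sketch of the SSEF CME front geometry used in make_frame.
    Returns polar coordinates (r [AU], lon [rad]) of the front circle for
    an apex at heliocentric distance apex_r [AU] and longitude apex_lon_deg
    [deg], with half width lambda (Moestl & Davies 2013)."""
    lam = np.deg2rad(half_width_deg)
    lon = np.deg2rad(apex_lon_deg)
    # direction to the apex, scaled by the apex distance
    d = np.array([np.cos(lon), np.sin(lon)]) * apex_r
    # sample points on the circle, shifted by the apex longitude
    circ_ang = np.deg2rad(np.arange(111) * 2 - 20) - lon
    radius = apex_r * np.sin(lam) / (1 + np.sin(lam))  # circle radius
    xc = d[0] / (1 + np.sin(lam)) + radius * np.sin(circ_ang)
    yc = d[1] / (1 + np.sin(lam)) + radius * np.cos(circ_ang)
    # convert back to polar coordinates for ax.plot on the polar axis
    return np.sqrt(xc**2 + yc**2), np.arctan2(yc, xc)
```

#By construction the apex itself lies on the circle: for an apex at 1 AU and
#0 degrees longitude, the farthest circle point from the Sun is at r = 1 AU, lon = 0.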
def make_frame2(k):
'''
render a single animation frame k (called in a multiprocessing loop)
'''
fig=plt.figure(1, figsize=(19.2,10.8), dpi=100) #full hd
#fig=plt.figure(1, figsize=(19.2*2,10.8*2), dpi=100) #4k
ax = plt.subplot2grid((7,2), (0, 0), rowspan=7, projection='polar')
backcolor='black'
psp_color='black'
bepi_color='blue'
solo_color='coral'
frame_time_str=str(mdates.num2date(frame_time_num+k*res_in_days))
print( 'current frame_time_num', frame_time_str, ' ',k)
#these have their own times
dct=frame_time_num+k*res_in_days-psp.time
psp_timeind=np.argmin(abs(dct))
dct=frame_time_num+k*res_in_days-bepi.time
bepi_timeind=np.argmin(abs(dct))
dct=frame_time_num+k*res_in_days-solo.time
solo_timeind=np.argmin(abs(dct))
#all same times
dct=frame_time_num+k*res_in_days-earth.time
earth_timeind=np.argmin(abs(dct))
#plot all positions including text R lon lat for some
#white background
ax.scatter(venus.lon[earth_timeind], venus.r[earth_timeind]*np.cos(venus.lat[earth_timeind]), s=symsize_planet, c='orange', alpha=1,lw=0,zorder=3)
ax.scatter(mercury.lon[earth_timeind], mercury.r[earth_timeind]*np.cos(mercury.lat[earth_timeind]), s=symsize_planet, c='dimgrey', alpha=1,lw=0,zorder=3)
ax.scatter(earth.lon[earth_timeind], earth.r[earth_timeind]*np.cos(earth.lat[earth_timeind]), s=symsize_planet, c='mediumseagreen', alpha=1,lw=0,zorder=3)
ax.scatter(sta.lon[earth_timeind], sta.r[earth_timeind]*np.cos(sta.lat[earth_timeind]), s=symsize_spacecraft, c='red', marker='s', alpha=1,lw=0,zorder=3)
ax.scatter(mars.lon[earth_timeind], mars.r[earth_timeind]*np.cos(mars.lat[earth_timeind]), s=symsize_planet, c='orangered', alpha=1,lw=0,zorder=3)
#plot stereoa fov hi1/2
hp.plot_stereo_hi_fov(sta,frame_time_num, earth_timeind, ax,'A')
#positions text
f10=plt.figtext(0.01,0.93,' R lon lat', fontsize=fsize+2, ha='left',color=backcolor)
if frame=='HEEQ': earth_text='Earth: '+str(f'{earth.r[earth_timeind]:6.2f}')+str(f'{0.0:8.1f}')+str(f'{np.rad2deg(earth.lat[earth_timeind]):8.1f}')
else: earth_text='Earth: '+str(f'{earth.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(earth.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(earth.lat[earth_timeind]):8.1f}')
mars_text='Mars: '+str(f'{mars.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(mars.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(mars.lat[earth_timeind]):8.1f}')
sta_text='STA: '+str(f'{sta.r[earth_timeind]:6.2f}')+str(f'{np.rad2deg(sta.lon[earth_timeind]):8.1f}')+str(f'{np.rad2deg(sta.lat[earth_timeind]):8.1f}')
#position and text
if psp_timeind > 0:
#plot trajectory
ax.scatter(psp.lon[psp_timeind], psp.r[psp_timeind]*np.cos(psp.lat[psp_timeind]), s=symsize_spacecraft, c=psp_color, marker='s', alpha=1,lw=0,zorder=3)
#plot position as text
psp_text='PSP: '+str(f'{psp.r[psp_timeind]:6.2f}')+str(f'{np.rad2deg(psp.lon[psp_timeind]):8.1f}')+str(f'{np.rad2deg(psp.lat[psp_timeind]):8.1f}')
f5=plt.figtext(0.01,0.78,psp_text, fontsize=fsize, ha='left',color=psp_color)
if plot_orbit:
fadestart=psp_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(psp.lon[fadestart:psp_timeind+fadeind], psp.r[fadestart:psp_timeind+fadeind]*np.cos(psp.lat[fadestart:psp_timeind+fadeind]), c=psp_color, alpha=0.6,lw=1,zorder=3)
if bepi_timeind > 0:
ax.scatter(bepi.lon[bepi_timeind], bepi.r[bepi_timeind]*np.cos(bepi.lat[bepi_timeind]), s=symsize_spacecraft, c=bepi_color, marker='s', alpha=1,lw=0,zorder=3)
bepi_text='Bepi: '+str(f'{bepi.r[bepi_timeind]:6.2f}')+str(f'{np.rad2deg(bepi.lon[bepi_timeind]):8.1f}')+str(f'{np.rad2deg(bepi.lat[bepi_timeind]):8.1f}')
f6=plt.figtext(0.01,0.74,bepi_text, fontsize=fsize, ha='left',color=bepi_color)
if plot_orbit:
fadestart=bepi_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(bepi.lon[fadestart:bepi_timeind+fadeind], bepi.r[fadestart:bepi_timeind+fadeind]*np.cos(bepi.lat[fadestart:bepi_timeind+fadeind]), c=bepi_color, alpha=0.6,lw=1,zorder=3)
if solo_timeind > 0:
ax.scatter(solo.lon[solo_timeind], solo.r[solo_timeind]*np.cos(solo.lat[solo_timeind]), s=symsize_spacecraft, c=solo_color, marker='s', alpha=1,lw=0,zorder=3)
solo_text='SolO: '+str(f'{solo.r[solo_timeind]:6.2f}')+str(f'{np.rad2deg(solo.lon[solo_timeind]):8.1f}')+str(f'{np.rad2deg(solo.lat[solo_timeind]):8.1f}')
f7=plt.figtext(0.01,0.7,solo_text, fontsize=fsize, ha='left',color=solo_color)
if plot_orbit:
fadestart=solo_timeind-fadeind
if fadestart < 0: fadestart=0
ax.plot(solo.lon[fadestart:solo_timeind+fadeind], solo.r[fadestart:solo_timeind+fadeind]*np.cos(solo.lat[fadestart:solo_timeind+fadeind]), c=solo_color, alpha=0.6,lw=1,zorder=3)
f10=plt.figtext(0.01,0.9,earth_text, fontsize=fsize, ha='left',color='mediumseagreen')
f9=plt.figtext(0.01,0.86,mars_text, fontsize=fsize, ha='left',color='orangered')
f8=plt.figtext(0.01,0.82,sta_text, fontsize=fsize, ha='left',color='red')
######################## 1 plot all active CME circles
plot_hi_geo=True
if plot_hi_geo:
lamda=30 #SSEF circle half width in degrees ('lamda' spelling avoids the Python keyword)
#check for active CME indices from HIGeoCAT (with the lists produced above in this notebook)
#check where time is identical to frame time
cmeind=np.where(hc_time_num == frame_time_num+k*res_in_days)
#print(cmeind)
#plot all active CME circles
#if np.size(cmeind) >0:
for p in range(0,np.size(cmeind)):
#print p, h.all_apex_long[cmeind[0][p]], h.all_apex_r[cmeind[0][p]]
#central direction vector: unit vector toward the CME apex, scaled by apex distance
cme_dir=np.array([np.cos(hc_lon[cmeind[0][p]]*np.pi/180),np.sin(hc_lon[cmeind[0][p]]*np.pi/180)])*hc_r[cmeind[0][p]]
#points on circle, correct for longitude
circ_ang = ((np.arange(111)*2-20)*np.pi/180)-(hc_lon[cmeind[0][p]]*np.pi/180)
#these equations are from Moestl and Davies (2013); 'cme_dir' avoids shadowing the builtin dir()
xc = cme_dir[0]/(1+np.sin(lamda*np.pi/180)) + (hc_r[cmeind[0][p]]*np.sin(lamda*np.pi/180)/(1+np.sin(lamda*np.pi/180)))*np.sin(circ_ang)
yc = cme_dir[1]/(1+np.sin(lamda*np.pi/180)) + (hc_r[cmeind[0][p]]*np.sin(lamda*np.pi/180)/(1+np.sin(lamda*np.pi/180)))*np.cos(circ_ang)
#now convert to polar coordinates
rcirc=np.sqrt(xc**2+yc**2)
longcirc=np.arctan2(yc,xc)
#plot in correct color
if hc_name[cmeind[0][p]] == 'A':
#make alpha dependent on distance to solar equatorial plane - maximum latitude is -40/+40 -
#so to make also the -/+40 latitude CME visible, divide by 50 so alpha > 0 for these events
ax.plot(longcirc,rcirc, c='red', alpha=1-abs(hc_lat[cmeind[0][p]]/50), lw=1.5)
if hc_name[cmeind[0][p]] == 'B':
ax.plot(longcirc,rcirc, c='royalblue', alpha=1-abs(hc_lat[cmeind[0][p]]/50), lw=1.5)
#set axes and grid
ax.set_theta_zero_location('E')
#plt.thetagrids(range(0,360,45),(u'0\u00b0 '+frame+' longitude',u'45\u00b0',u'90\u00b0',u'135\u00b0',u'+/- 180\u00b0',u'- 135\u00b0',u'- 90\u00b0',u'- 45\u00b0'), ha='right', fmt='%d',fontsize=fsize-1,color=backcolor, alpha=0.9)
plt.thetagrids(range(0,360,45),(u'0\u00b0',u'45\u00b0',u'90\u00b0',u'135\u00b0',u'+/- 180\u00b0',u'- 135\u00b0',u'- 90\u00b0',u'- 45\u00b0'), ha='center', fmt='%d',fontsize=fsize-1,color=backcolor, alpha=0.9,zorder=4)
#plt.rgrids((0.10,0.39,0.72,1.00,1.52),('0.10','0.39','0.72','1.0','1.52 AU'),angle=125, fontsize=fsize,alpha=0.9, color=backcolor)
plt.rgrids((0.1,0.3,0.5,0.7,1.0),('0.10','0.3','0.5','0.7','1.0 AU'),angle=125, fontsize=fsize-3,alpha=0.5, color=backcolor)
#ax.set_ylim(0, 1.75) #with Mars
ax.set_ylim(0, 1.2)
#Sun
ax.scatter(0,0,s=100,c='yellow',alpha=1, edgecolors='black', linewidth=0.3)
#------------------------------------------------ IN SITU DATA ------------------------------------------------------
time_now=frame_time_num+k*res_in_days
#cut data for plot window so faster
windex1=np.where(w_time_num > time_now-days_window)[0][0]
windex2=np.where(w_time_num > time_now+days_window)[0][0]
w=w1[windex1:windex2]
sindex1=np.where(s_time_num > time_now-days_window)[0][0]
sindex2=np.where(s_time_num > time_now+days_window)[0][0]
s=s1[sindex1:sindex2]
#is data available from new missions?
if p_time_num[-1] > time_now+days_window:
pindex1=np.where(p_time_num > time_now-days_window)[0][0]
pindex2=np.where(p_time_num > time_now+days_window)[0][0]
#pindex2=np.size(p1)-1
p=p1[pindex1:pindex2]
elif (p_time_num[-1] < time_now+days_window) and (p_time_num[-1] > time_now-days_window):
pindex1=np.where(p_time_num > time_now-days_window)[0][0]
pindex2=np.size(p1)-1
p=p1[pindex1:pindex2]
else: p=[]
if o_time_num[-1] > time_now+days_window:
oindex1=np.where(o_time_num > time_now-days_window)[0][0]
oindex2=np.where(o_time_num > time_now+days_window)[0][0]
#use last index oindex2=np.size(o1)-1
o=o1[oindex1:oindex2]
elif (o_time_num[-1] < time_now+days_window) and (o_time_num[-1] > time_now-days_window):
oindex1=np.where(o_time_num > time_now-days_window)[0][0]
oindex2=np.size(o1)-1
o=o1[oindex1:oindex2]
else: o=[]
if b_time_num[-1] > time_now+days_window:
bindex1=np.where(b_time_num > time_now-days_window)[0][0]
bindex2=np.where(b_time_num > time_now+days_window)[0][0]
#bindex2=np.size(b1)-1
b=b1[bindex1:bindex2]
else: b=[]
#---------------- Wind mag
ax4 = plt.subplot2grid((7,2), (0, 1))
#plt.plot_date(w_tm,wbx,'-r',label='BR',linewidth=0.5)
#plt.plot_date(w_tm,wby,'-g',label='BT',linewidth=0.5)
#plt.plot_date(w_tm,wbz,'-b',label='BN',linewidth=0.5)
#plt.plot_date(w_tm,wbt,'-k',label='Btotal',lw=0.5)
plt.plot_date(w.time,w.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(w.time,w.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(w.time,w.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(w.time,w.bt,'-k',label='Btotal',lw=0.5)
ax4.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax4.set_ylabel('B [nT] HEEQ',fontsize=fsize-1)
ax4.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax4.set_xlim(time_now-days_window,time_now+days_window)
ax4.set_ylim(np.nanmin(-w.bt)-5, np.nanmax(w.bt)+5)
#plt.ylim((-18, 18))
plt.yticks(fontsize=fsize-1)
ax4.set_xticklabels([])
#---------------- STEREO-A mag
ax6 = plt.subplot2grid((7,2), (1, 1))
#plt.plot_date(s_tm,sbx,'-r',label='BR',linewidth=0.5)
#plt.plot_date(s_tm,sby,'-g',label='BT',linewidth=0.5)
#plt.plot_date(s_tm,sbz,'-b',label='BN',linewidth=0.5)
#plt.plot_date(s_tm,sbt,'-k',label='Btotal')
plt.plot_date(s.time,s.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(s.time,s.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(s.time,s.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(s.time,s.bt,'-k',label='Btotal',linewidth=0.5)
ax6.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax6.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
#ax6.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax6.set_xlim(time_now-days_window,time_now+days_window)
ax6.set_xticklabels([])
ax6.set_ylim(np.nanmin(-s.bt)-5, np.nanmax(s.bt)+5)
plt.yticks(fontsize=fsize-1)
plt.tick_params(axis='x', labelbottom=False)
#plt.ylim((-18, 18))
#---------------- STEREO, Wind speed
ax5 = plt.subplot2grid((7,2), (2, 1))
plt.plot_date(w.time,w.vt,'-g',label='Wind',linewidth=0.7)
plt.plot_date(s.time,s.vt,'-r',label='STEREO-A',linewidth=0.7)
#ax5.legend(loc=1, fontsize=10)
ax5.plot_date([time_now,time_now], [0,900],'-k', lw=0.5, alpha=0.8)
ax5.set_xlim(time_now-days_window,time_now+days_window)
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.ylim((240, 750))
plt.yticks(fontsize=fsize-1)
ax5.set_xticklabels([])
#ax7 = plt.subplot2grid((6,2), (5, 1))
#plt.plot_date(s.time,s.vt,'-k',label='V',linewidth=0.7)
#ax7.plot_date([time_now,time_now], [0,800],'-k', lw=0.5, alpha=0.8)
#ax7.set_xlim(time_now-days_window,time_now+days_window)
#ax7.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
#plt.ylabel('V [km/s]',fontsize=fsize-1)
#plt.tick_params(axis='x', labelbottom='off')
#plt.ylim((240, 810))
#plt.yticks(fontsize=fsize-1)
#plt.xticks(fontsize=fsize)
#---------------------- PSP speed
ax3 = plt.subplot2grid((7,2), (3, 1))
ax3.plot_date([time_now,time_now], [0,1000],'-k', lw=0.5, alpha=0.8)
ax3.set_xticklabels([])
ax3.set_xlim(time_now-days_window,time_now+days_window)
ax3.set_ylim((240, 810))
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
ax3.set_xticklabels([])
if np.size(p)>0:
#plt.plot_date(p_tp,pv,'-k',label='V',linewidth=0.5)
plt.plot_date(p.time,p.vt,'-k',label='V',linewidth=0.7)
ax3.set_xlim(time_now-days_window,time_now+days_window)
ax3.plot_date([time_now,time_now], [0,800],'-k', lw=0.5, alpha=0.8)
ax3.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
plt.ylabel('V [km/s]',fontsize=fsize-1)
plt.ylim((240, 750))
plt.yticks(fontsize=fsize-1)
ax3.set_xticklabels([])
#---------------------- PSP mag
ax2 = plt.subplot2grid((7,2), (4, 1))
ax2.plot_date([time_now,time_now], [-1000,1000],'-k', lw=0.5, alpha=0.8)
ax2.set_xticklabels([])
ax2.set_xlim(time_now-days_window,time_now+days_window)
ax2.set_ylim((-18, 18))
ax2.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
#when there is data, plot:
if np.size(p)>0:
plt.plot_date(p.time,p.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(p.time,p.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(p.time,p.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(p.time,p.bt,'-k',label='Btotal',lw=0.5)
ax2.plot_date([time_now,time_now], [-1000,1000],'-k', lw=0.5, alpha=0.8)
ax2.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax2.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax2.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmin(-p.bt)): ax2.set_ylim(np.nanmin(-p.bt)-5, np.nanmax(p.bt)+5)
ax2.set_xticklabels([])
plt.yticks(fontsize=fsize-1)
#---------------------- SolO mag
ax7 = plt.subplot2grid((7,2), (5, 1))
ax7.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax7.set_xticklabels([])
ax7.set_xlim(time_now-days_window,time_now+days_window)
ax7.set_ylim((-18, 18))
ax7.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
ax7.set_xticklabels([])
#when there is data, plot:
if np.size(o)>0:
plt.plot_date(o.time,o.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(o.time,o.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(o.time,o.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(o.time,o.bt,'-k',label='Btotal',lw=0.5)
ax7.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax7.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax7.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmax(o.bt)):
ax7.set_ylim((np.nanmin(-o.bt)-5, np.nanmax(o.bt)+5))
else:
ax7.set_ylim((-15, 15))
ax7.set_xticklabels([])
plt.yticks(fontsize=fsize-1)
#---------------------- Bepi mag
ax8 = plt.subplot2grid((7,2), (6, 1))
ax8.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax8.set_xlim(time_now-days_window,time_now+days_window)
ax8.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax8.set_ylim((-18, 18))
ax8.set_ylabel('B [nT] RTN',fontsize=fsize-1)
plt.yticks(fontsize=fsize-1)
if np.size(b)>0:
plt.plot_date(b.time,b.bx,'-r',label='BR',linewidth=0.5)
plt.plot_date(b.time,b.by,'-g',label='BT',linewidth=0.5)
plt.plot_date(b.time,b.bz,'-b',label='BN',linewidth=0.5)
plt.plot_date(b.time,b.bt,'-k',label='Btotal',lw=0.5)
ax8.plot_date([time_now,time_now], [-100,100],'-k', lw=0.5, alpha=0.8)
ax8.set_ylabel('B [nT] RTN',fontsize=fsize-1)
ax8.xaxis.set_major_formatter( matplotlib.dates.DateFormatter('%b-%d') )
ax8.set_xlim(time_now-days_window,time_now+days_window)
if np.isfinite(np.nanmax(b.bt)):
ax8.set_ylim((np.nanmin(-b.bt)-5, np.nanmax(b.bt)+5))
else:
ax8.set_ylim((-15, 15))
#ax8.set_ylim((np.nanmin(-b.bt)-5, np.nanmax(b.bt)+5))
plt.yticks(fontsize=fsize-1)
plt.figtext(0.95,0.82,'Wind', color='mediumseagreen', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.71,'STEREO-A', color='red', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.63,'Wind', color='mediumseagreen', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.58,'STEREO-A', color='red', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.49,'PSP ', color='black', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.38,'PSP ', color='black', ha='center',fontsize=fsize+3)
plt.figtext(0.95,0.28,'Solar Orbiter', color='coral', ha='center',fontsize=fsize+5)
plt.figtext(0.95,0.16,'BepiColombo', color='blue', ha='center',fontsize=fsize+5)
############################
#plot text for date extra so it does not move
#year
f1=plt.figtext(0.45,0.93,frame_time_str[0:4], ha='center',color=backcolor,fontsize=fsize+6)
#month
f2=plt.figtext(0.45+0.04,0.93,frame_time_str[5:7], ha='center',color=backcolor,fontsize=fsize+6)
#day
f3=plt.figtext(0.45+0.08,0.93,frame_time_str[8:10], ha='center',color=backcolor,fontsize=fsize+6)
#hours
f4=plt.figtext(0.45+0.12,0.93,frame_time_str[11:13], ha='center',color=backcolor,fontsize=fsize+6)
plt.figtext(0.02, 0.02,'Spacecraft trajectories in '+frame+' coordinates', fontsize=fsize-1, ha='left',color=backcolor)
plt.figtext(0.32,0.02,'――― trajectory from - 60 days to + 60 days', color='black', ha='center',fontsize=fsize-1)
#signature
#BC MPO-MAG (IGEP/IWF/ISAS/IC)
#auch für Solar Orbiter (MAG, IC), Parker (FIELDS, UCB), STA (IMPACT/PLASTIC, UNH, UCLA), Wind (MFI, SWE, NASA??) STA-HI (RAL)
plt.figtext(0.85,0.02,'Data sources: BepiColombo: MPO-MAG (IGEP/IWF/ISAS/IC), PSP (FIELDS, UCB), Solar Orbiter (MAG, IC)', fontsize=fsize-2, ha='right',color=backcolor)
#signature
plt.figtext(0.99,0.01/2,'Möstl, Weiss, Bailey, Reiss / Helio4Cast', fontsize=fsize-4, ha='right',color=backcolor)
categories = np.array([0, 2, 1, 1, 1, 2, 0, 0])
colormap = np.array(['r', 'g', 'b'])
steps=60
#parker spiral
if plot_parker:
for q in np.arange(0,steps):
omega=2*np.pi/(sun_rot*60*60*24) #solar rotation in seconds
v=400/AUkm #km/s
r0=695000/AUkm
r=v/omega*theta+r0*7
windcolor=cmap.hot(w.vt[2315]/5)
#print(windcolor)
#print(w.vt[2315+q*10])
ax.plot(-theta+np.deg2rad(0+(360/24.47)*res_in_days*k+360/steps*q), r, alpha=0.1, lw=5.0,color=windcolor, zorder=2)
#print(theta)
#save figure
framestr = '%05i' % (k)
filename=outputdirectory+'/pos_anim_'+framestr+'.jpg'
if k==0: print(filename)
plt.savefig(filename,dpi=200,facecolor=fig.get_facecolor(), edgecolor='none')
#plt.clf()
#if close==True: plt.close('all')
filename='lineups/pos_anim_'+framestr+'.png'
plt.savefig(filename,dpi=200,facecolor=fig.get_facecolor(), edgecolor='none')
filename='lineups/pos_anim_'+framestr+'.jpg'
plt.savefig(filename,dpi=100,facecolor=fig.get_facecolor(), edgecolor='none')
#plt.close('all')
########################################### loop end
#for multipoint lineup paper
#june event
#make_frame2(3810)
#nov event
make_frame2(10910)
```
### get data
```
get_data=1
if get_data > 0:
file=data_path+'wind_2018_now_heeq.p'
[w,wh]=pickle.load(open(file, "rb" ) )
#function for spike removal, see list with times in that function
w=hd.remove_wind_spikes_gaps(w)
#cut with 2018 Oct 1
wcut=np.where(w.time> parse_time('2018-10-01').datetime)[0][0]
w=w[wcut:-1]
#file=data_path+'stereoa_2007_2019_sceq.p'
#[s,sh]=pickle.load(open(file, "rb" ) )
#file=data_path+'stereoa_2019_now_sceq.p'
########### STA
print('load and merge STEREO-A data SCEQ') #yearly magplasma files from stereo science center, conversion to SCEQ
filesta1='stereoa_2007_2020_rtn.p'
sta1=pickle.load(open(data_path+filesta1, "rb" ) )
#beacon data
#filesta2="stereoa_2019_2020_sceq_beacon.p"
#filesta2='stereoa_2019_2020_sept_sceq_beacon.p'
#filesta2='stereoa_2019_now_sceq_beacon.p'
#filesta2="stereoa_2020_august_november_rtn_beacon.p"
filesta2='stereoa_2020_now_sceq_beacon.p'
[sta2,hsta2]=pickle.load(open(data_path+filesta2, "rb" ) )
#sta2=sta2[np.where(sta2.time >= parse_time('2020-Aug-01 00:00').datetime)[0]]
#make array
sta=np.zeros(np.size(sta1.time)+np.size(sta2.time),dtype=[('time',object),('bx', float),('by', float),\
('bz', float),('bt', float),('vt', float),('np', float),('tp', float),\
('x', float),('y', float),('z', float),\
('r', float),('lat', float),('lon', float)])
#convert to recarray
sta = sta.view(np.recarray)
sta.time=np.hstack((sta1.time,sta2.time))
sta.bx=np.hstack((sta1.bx,sta2.bx))
sta.by=np.hstack((sta1.by,sta2.by))
sta.bz=np.hstack((sta1.bz,sta2.bz))
sta.bt=np.hstack((sta1.bt,sta2.bt))
sta.vt=np.hstack((sta1.vt,sta2.vt))
sta.np=np.hstack((sta1.np,sta2.np))
sta.tp=np.hstack((sta1.tp,sta2.tp))
sta.x=np.hstack((sta1.x,sta2.x))
sta.y=np.hstack((sta1.y,sta2.y))
sta.z=np.hstack((sta1.z,sta2.z))
sta.r=np.hstack((sta1.r,sta2.r))
sta.lon=np.hstack((sta1.lon,sta2.lon))
sta.lat=np.hstack((sta1.lat,sta2.lat))
print('STA Merging done')
#cut with 2018 Oct 1
scut=np.where(sta.time> parse_time('2018-10-01').datetime)[0][0]
s=sta[scut:-1]
######### Bepi
file=data_path+'bepi_2019_2021_rtn.p'
b1=pickle.load(open(file, "rb" ) )
file=data_path+'bepi_2021_ib_rtn.p'
b2=pickle.load(open(file, "rb" ) )
#make array
b=np.zeros(np.size(b1.time)+np.size(b2.time),dtype=[('time',object),('bx', float),('by', float),\
('bz', float),('bt', float),\
('x', float),('y', float),('z', float),\
('r', float),('lat', float),('lon', float)])
#convert to recarray
b = b.view(np.recarray)
b.time=np.hstack((b1.time,b2.time))
b.bx=np.hstack((b1.bx,b2.bx))
b.by=np.hstack((b1.by,b2.by))
b.bz=np.hstack((b1.bz,b2.bz))
b.bt=np.hstack((b1.bt,b2.bt))
b.x=np.hstack((b1.x,b2.x))
b.y=np.hstack((b1.y,b2.y))
b.z=np.hstack((b1.z,b2.z))
b.r=np.hstack((b1.r,b2.r))
b.lon=np.hstack((b1.lon,b2.lon))
b.lat=np.hstack((b1.lat,b2.lat))
print('Bepi Merging done')
#################################### PSP, SolO
file=data_path+'psp_2018_2021_rtn.p'
[p,ph]=pickle.load(open(file, "rb" ) )
file=data_path+'solo_2020_april_2021_july_rtn.p'
o=pickle.load(open(file, "rb" ) )
#save data for faster use
file='data/movie_data_aug21.p'
pickle.dump([p,w,s,o,b], open(file, 'wb'))
print('load data from data/movie_data_aug21.p')
[p1,w1,s1,o1,b1]=pickle.load(open('data/movie_data_aug21.p', "rb" ) )
p_time_num=parse_time(p1.time).plot_date
w_time_num=parse_time(w1.time).plot_date
s_time_num=parse_time(s1.time).plot_date
o_time_num=parse_time(o1.time).plot_date
b_time_num=parse_time(b1.time).plot_date
#median filter psp speed because of spikes
p1.vt=medfilt(p1.vt,31)
print('done')
```
# Make movie
### Settings
```
plt.close('all')
#Coordinate System
#frame='HCI'
frame='HEEQ'
print(frame)
#sidereal solar rotation rate
if frame=='HCI': sun_rot=24.47
#synodic
if frame=='HEEQ': sun_rot=26.24
AUkm=149597870.7
#black background on or off
#black=True
black=False
#animation settings
plot_orbit=True
#plot_orbit=False
plot_parker=True
#plot_parker=False
high_res_mode=False
#orbit 1
#outputdirectory='results/anim_plots_sc_insitu_final_orbit1'
#animdirectory='results/anim_movie_sc_insitu_final_orbit1'
#t_start ='2018-Oct-15'
#t_end ='2018-Dec-06'
#t_start ='2018-Dec-03'
#t_end ='2018-Dec-06'
#orbit all
#from Parker start
#outputdirectory='results/overview_movie_nov_2020_frames_2'
#animdirectory='results/overview_movie_nov_2020_2'
#t_start ='2018-Oct-25'
#t_end ='2020-Apr-15'
#res_in_days=1/24. #1hour =1/24
#make time range
#time_array = [ parse_time(t_start).datetime + timedelta(hours=1*n) \
# for n in range(int ((parse_time(t_end).datetime - parse_time(t_start).datetime).days*24))]
######## from Solar Orbiter Start
outputdirectory='results/overview_movie_apr21_sep21_frames'
animdirectory='results/overview_movie_apr21_sep21'
t_start ='2021-Apr-1'
t_end ='2021-Sep-30'
#t_end ='2021-Jun-20'
res_in_days=1/48. #1hour =1/24
#make time range to see how many frames are needed
starttime = parse_time(t_start).datetime
endtime = parse_time(t_end).datetime
alltimes = []
while starttime < endtime:
alltimes.append(starttime)
starttime += timedelta(days=res_in_days)
k_all=np.size(alltimes)
days_window=3 #size of in situ timerange
if os.path.isdir(outputdirectory) == False: os.mkdir(outputdirectory)
if os.path.isdir(animdirectory) == False: os.mkdir(animdirectory)
positions_plot_directory='results/plots_positions/'
if os.path.isdir(positions_plot_directory) == False: os.mkdir(positions_plot_directory)
print(k_all)
########## MAKE TRAJECTORIES
#make_positions()
print('load positions')
#load positions
[psp, bepi, solo, sta, stb, messenger, ulysses, earth, venus, mars, mercury,jupiter, saturn, uranus, neptune,frame]=pickle.load( open( 'results/positions_HEEQ_1hr.p', "rb" ) )
print('load HIGEOCAT kinematics')
[hc_time,hc_r,hc_lat,hc_lon,hc_name]=pickle.load(open('data/higeocat_kinematics.p', "rb"))
print('done')
```
## test animation frames
```
#for server
#matplotlib.use('Qt5Agg')
%matplotlib inline
start_time=time.time()
print()
print('make animation')
#animation start time in matplotlib format
frame_time_num=parse_time(t_start).plot_date
sns.set_context('talk')
if not black: sns.set_style('darkgrid'),#{'grid.linestyle': ':', 'grid.color': '.35'})
if black: sns.set_style('white',{'grid.linestyle': ':', 'grid.color': '.35'})
# animation settings
fsize=13
fadeind=int(60/res_in_days)
symsize_planet=110
symsize_spacecraft=80
#for parker spiral
theta=np.arange(0,np.deg2rad(180),0.01)
######################## make frames
#for debugging
#don't close plot in make_frame when testing
make_frame2(5500)
#for i in np.arange(6454,6576,1):
# make_frame(i)
print('done')
```
## Make full movie
```
matplotlib.use('Agg')
print(k_all,' frames in total')
print()
#number of processes depends on your machine's memory; check with the command line tool "top"
#how much memory is used by all your processes
nr_of_processes_used=100
print('Using multiprocessing, nr of cores',multiprocessing.cpu_count(), \
'with nr of processes used: ',nr_of_processes_used)
#run multiprocessing pool to make all movie frames, depending only on frame number
pool = multiprocessing.Pool(processes=nr_of_processes_used)
input=[i for i in range(k_all)]
#input=[i for i in np.arange(6721,6851,1)]
pool.map(make_frame, input)
pool.close()
# pool.join()
print('time in min: ',np.round((time.time()-start_time)/60))
print('plots done, frames saved in ',outputdirectory)
#os.system(ffmpeg_path+'ffmpeg -r 30 -i '+str(outputdirectory)+'/pos_anim_%05d.jpg -b 5000k \
# -r 30 '+str(animdirectory)+'/overview_27nov_2020_from2018.mp4 -y -loglevel quiet')
#os.system(ffmpeg_path+'ffmpeg -r 30 -i '+str(outputdirectory)+'/pos_anim_%05d.jpg -b 5000k \
# -r 30 '+str(animdirectory)+'/overview_apr2020_jul2021.mp4 -y -loglevel quiet')
os.system(ffmpeg_path+'ffmpeg -r 30 -i '+str(outputdirectory)+'/pos_anim_%05d.jpg -b 5000k \
-r 30 '+str(animdirectory)+'/overview_apr2021_sep2021.mp4 -y -loglevel quiet')
print('movie done, saved in ',animdirectory)
```
## Lineup event images
```
#load lineup catalog
url='lineups/HELIO4CAST_multipoint_v10.csv'
lineups=pd.read_csv(url)
#alltimes are the movie frame times
#time of event 1
etime1=parse_time(lineups['event_start_time'][1]).datetime
eframe1=np.where(np.array(alltimes)> etime1)[0][0]
make_frame2(eframe1)
plt.close('all')
etime2=parse_time(lineups['event_start_time'][6]).datetime
eframe2=np.where(np.array(alltimes)> etime2)[0][0]
make_frame2(eframe2)
plt.close('all')
etime4=parse_time(lineups['event_start_time'][12]).datetime
eframe4=np.where(np.array(alltimes)> etime4)[0][0]
make_frame2(eframe4)
plt.close('all')
etime4_2=parse_time(lineups['event_start_time'][11]).datetime
eframe4_2=np.where(np.array(alltimes)> etime4_2)[0][0]
make_frame2(eframe4_2)
plt.close('all')
etime12=parse_time(lineups['event_start_time'][29]).datetime
eframe12=np.where(np.array(alltimes)> etime12)[0][0]
make_frame2(eframe12)
plt.close('all')
```
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import torch
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error as mse
from scipy.stats import entropy
import warnings
import logging
from causalml.inference.meta import BaseXRegressor, BaseRRegressor, BaseSRegressor, BaseTRegressor
from causalml.inference.nn import CEVAE
from causalml.propensity import ElasticNetPropensityModel
from causalml.metrics import *
from causalml.dataset import simulate_hidden_confounder
%matplotlib inline
warnings.filterwarnings('ignore')
logger = logging.getLogger('causalml')
logger.setLevel(logging.DEBUG)
plt.style.use('fivethirtyeight')
sns.set_palette('Paired')
plt.rcParams['figure.figsize'] = (12,8)
```
# IHDP semi-synthetic dataset
Hill introduced a semi-synthetic dataset constructed from the Infant Health
and Development Program (IHDP). This dataset is based on a randomized experiment
investigating the effect of home visits by specialists on future cognitive scores. The IHDP simulation is considered the de-facto standard benchmark for neural network treatment effect
estimation methods.
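The per-unit treatment effect in this benchmark is recovered from the factual and counterfactual outcomes. A minimal sketch of that convention (the toy values here are made up; the column names match the loading code below):

```python
import pandas as pd

toy = pd.DataFrame({
    "treatment": [1, 0],
    "y_factual": [3.0, 1.0],
    "y_cfactual": [1.5, 2.5],
})

# tau = y(1) - y(0): factual minus counterfactual for treated rows,
# counterfactual minus factual for control rows
toy_tau = toy.apply(
    lambda d: d["y_factual"] - d["y_cfactual"]
    if d["treatment"] == 1
    else d["y_cfactual"] - d["y_factual"],
    axis=1,
)
print(toy_tau.tolist())  # [1.5, 1.5]
```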
```
# load all IHDP data
df = pd.DataFrame()
for i in range(1, 10):
data = pd.read_csv('./data/ihdp_npci_' + str(i) + '.csv', header=None)
df = pd.concat([data, df])
cols = ["treatment", "y_factual", "y_cfactual", "mu0", "mu1"] + [i for i in range(25)]
df.columns = cols
print(df.shape)
# replicate the data 100 times
replications = 100
df = pd.concat([df]*replications, ignore_index=True)
print(df.shape)
# set which features are binary
binfeats = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
# set which features are continuous
contfeats = [i for i in range(25) if i not in binfeats]
# reorder features with binary first and continuous after
perm = binfeats + contfeats
df = df.reset_index(drop=True)
df.head()
X = df[perm].values
treatment = df['treatment'].values
y = df['y_factual'].values
y_cf = df['y_cfactual'].values
tau = df.apply(lambda d: d['y_factual'] - d['y_cfactual'] if d['treatment']==1
else d['y_cfactual'] - d['y_factual'],
axis=1)
mu_0 = df['mu0'].values
mu_1 = df['mu1'].values
# separate into train and test
itr, ite = train_test_split(np.arange(X.shape[0]), test_size=0.2, random_state=1)
X_train, treatment_train, y_train, y_cf_train, tau_train, mu_0_train, mu_1_train = X[itr], treatment[itr], y[itr], y_cf[itr], tau[itr], mu_0[itr], mu_1[itr]
X_val, treatment_val, y_val, y_cf_val, tau_val, mu_0_val, mu_1_val = X[ite], treatment[ite], y[ite], y_cf[ite], tau[ite], mu_0[ite], mu_1[ite]
```
## CEVAE Model
```
# cevae model settings
outcome_dist = "normal"
latent_dim = 20
hidden_dim = 200
num_epochs = 5
batch_size = 1000
learning_rate = 0.001
learning_rate_decay = 0.01
num_layers = 2
cevae = CEVAE(outcome_dist=outcome_dist,
latent_dim=latent_dim,
hidden_dim=hidden_dim,
num_epochs=num_epochs,
batch_size=batch_size,
learning_rate=learning_rate,
learning_rate_decay=learning_rate_decay,
num_layers=num_layers)
# fit
losses = cevae.fit(X=torch.tensor(X_train, dtype=torch.float),
treatment=torch.tensor(treatment_train, dtype=torch.float),
y=torch.tensor(y_train, dtype=torch.float))
# predict
ite_train = cevae.predict(X_train)
ite_val = cevae.predict(X_val)
ate_train = ite_train.mean()
ate_val = ite_val.mean()
print(ate_train, ate_val)
```
## Meta Learners
```
# fit propensity model
p_model = ElasticNetPropensityModel()
p_train = p_model.fit_predict(X_train, treatment_train)
p_val = p_model.fit_predict(X_val, treatment_val)
s_learner = BaseSRegressor(LGBMRegressor())
s_ate = s_learner.estimate_ate(X_train, treatment_train, y_train)[0]
s_ite_train = s_learner.fit_predict(X_train, treatment_train, y_train)
s_ite_val = s_learner.predict(X_val)
t_learner = BaseTRegressor(LGBMRegressor())
t_ate = t_learner.estimate_ate(X_train, treatment_train, y_train)[0][0]
t_ite_train = t_learner.fit_predict(X_train, treatment_train, y_train)
t_ite_val = t_learner.predict(X_val, treatment_val, y_val)
x_learner = BaseXRegressor(LGBMRegressor())
x_ate = x_learner.estimate_ate(X_train, treatment_train, y_train, p_train)[0][0]
x_ite_train = x_learner.fit_predict(X_train, treatment_train, y_train, p_train)
x_ite_val = x_learner.predict(X_val, treatment_val, y_val, p_val)
r_learner = BaseRRegressor(LGBMRegressor())
r_ate = r_learner.estimate_ate(X_train, treatment_train, y_train, p_train)[0][0]
r_ite_train = r_learner.fit_predict(X_train, treatment_train, y_train, p_train)
r_ite_val = r_learner.predict(X_val)
```
## Model Results Comparison
### Training
```
df_preds_train = pd.DataFrame([s_ite_train.ravel(),
t_ite_train.ravel(),
x_ite_train.ravel(),
r_ite_train.ravel(),
ite_train.ravel(),
tau_train.ravel(),
treatment_train.ravel(),
y_train.ravel()],
index=['S','T','X','R','CEVAE','tau','w','y']).T
df_cumgain_train = get_cumgain(df_preds_train)
df_result_train = pd.DataFrame([s_ate, t_ate, x_ate, r_ate, ate_train, tau_train.mean()],
index=['S','T','X','R','CEVAE','actual'], columns=['ATE'])
df_result_train['MAE'] = [mean_absolute_error(t,p) for t,p in zip([s_ite_train, t_ite_train, x_ite_train, r_ite_train, ite_train],
[tau_train.values.reshape(-1,1)]*5 )
] + [None]
df_result_train['AUUC'] = auuc_score(df_preds_train)
df_result_train
plot_gain(df_preds_train)
```
### Validation
```
df_preds_val = pd.DataFrame([s_ite_val.ravel(),
t_ite_val.ravel(),
x_ite_val.ravel(),
r_ite_val.ravel(),
ite_val.ravel(),
tau_val.ravel(),
treatment_val.ravel(),
y_val.ravel()],
index=['S','T','X','R','CEVAE','tau','w','y']).T
df_cumgain_val = get_cumgain(df_preds_val)
df_result_val = pd.DataFrame([s_ite_val.mean(), t_ite_val.mean(), x_ite_val.mean(), r_ite_val.mean(), ate_val, tau_val.mean()],
index=['S','T','X','R','CEVAE','actual'], columns=['ATE'])
df_result_val['MAE'] = [mean_absolute_error(t,p) for t,p in zip([s_ite_val, t_ite_val, x_ite_val, r_ite_val, ite_val],
[tau_val.values.reshape(-1,1)]*5 )
] + [None]
df_result_val['AUUC'] = auuc_score(df_preds_val)
df_result_val
plot_gain(df_preds_val)
```
# Synthetic Data
```
y, X, w, tau, b, e = simulate_hidden_confounder(n=100000, p=5, sigma=1.0, adj=0.)
X_train, X_val, y_train, y_val, w_train, w_val, tau_train, tau_val, b_train, b_val, e_train, e_val = \
train_test_split(X, y, w, tau, b, e, test_size=0.2, random_state=123, shuffle=True)
preds_dict_train = {}
preds_dict_valid = {}
preds_dict_train['Actuals'] = tau_train
preds_dict_valid['Actuals'] = tau_val
preds_dict_train['generated_data'] = {
'y': y_train,
'X': X_train,
'w': w_train,
'tau': tau_train,
'b': b_train,
'e': e_train}
preds_dict_valid['generated_data'] = {
'y': y_val,
'X': X_val,
'w': w_val,
'tau': tau_val,
'b': b_val,
'e': e_val}
# Predict p_hat because e would not be directly observed in real-life
p_model = ElasticNetPropensityModel()
p_hat_train = p_model.fit_predict(X_train, w_train)
p_hat_val = p_model.fit_predict(X_val, w_val)
for base_learner, label_l in zip([BaseSRegressor, BaseTRegressor, BaseXRegressor, BaseRRegressor],
['S', 'T', 'X', 'R']):
for model, label_m in zip([LinearRegression, XGBRegressor], ['LR', 'XGB']):
# RLearner will need to fit on the p_hat
if label_l != 'R':
learner = base_learner(model())
# fit the model on training data only
learner.fit(X=X_train, treatment=w_train, y=y_train)
try:
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train, p=p_hat_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val, p=p_hat_val).flatten()
except TypeError:
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train, treatment=w_train, y=y_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val, treatment=w_val, y=y_val).flatten()
else:
learner = base_learner(model())
learner.fit(X=X_train, p=p_hat_train, treatment=w_train, y=y_train)
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val).flatten()
# cevae model settings
outcome_dist = "normal"
latent_dim = 20
hidden_dim = 200
num_epochs = 5
batch_size = 1000
learning_rate = 1e-3
learning_rate_decay = 0.1
num_layers = 3
num_samples = 10
cevae = CEVAE(outcome_dist=outcome_dist,
latent_dim=latent_dim,
hidden_dim=hidden_dim,
num_epochs=num_epochs,
batch_size=batch_size,
learning_rate=learning_rate,
learning_rate_decay=learning_rate_decay,
num_layers=num_layers,
num_samples=num_samples)
# fit
losses = cevae.fit(X=torch.tensor(X_train, dtype=torch.float),
treatment=torch.tensor(w_train, dtype=torch.float),
y=torch.tensor(y_train, dtype=torch.float))
preds_dict_train['CEVAE'] = cevae.predict(X_train).flatten()
preds_dict_valid['CEVAE'] = cevae.predict(X_val).flatten()
actuals_train = preds_dict_train['Actuals']
actuals_validation = preds_dict_valid['Actuals']
synthetic_summary_train = pd.DataFrame({label: [preds.mean(), mse(preds, actuals_train)] for label, preds
in preds_dict_train.items() if 'generated' not in label.lower()},
index=['ATE', 'MSE']).T
synthetic_summary_train['Abs % Error of ATE'] = np.abs(
(synthetic_summary_train['ATE']/synthetic_summary_train.loc['Actuals', 'ATE']) - 1)
synthetic_summary_validation = pd.DataFrame({label: [preds.mean(), mse(preds, actuals_validation)]
for label, preds in preds_dict_valid.items()
if 'generated' not in label.lower()},
index=['ATE', 'MSE']).T
synthetic_summary_validation['Abs % Error of ATE'] = np.abs(
(synthetic_summary_validation['ATE']/synthetic_summary_validation.loc['Actuals', 'ATE']) - 1)
# calculate kl divergence for training
for label in synthetic_summary_train.index:
stacked_values = np.hstack((preds_dict_train[label], actuals_train))
stacked_low = np.percentile(stacked_values, 0.1)
stacked_high = np.percentile(stacked_values, 99.9)
bins = np.linspace(stacked_low, stacked_high, 100)
distr = np.histogram(preds_dict_train[label], bins=bins)[0]
distr = np.clip(distr/distr.sum(), 0.001, 0.999)
true_distr = np.histogram(actuals_train, bins=bins)[0]
true_distr = np.clip(true_distr/true_distr.sum(), 0.001, 0.999)
kl = entropy(distr, true_distr)
synthetic_summary_train.loc[label, 'KL Divergence'] = kl
# calculate kl divergence for validation
for label in synthetic_summary_validation.index:
stacked_values = np.hstack((preds_dict_valid[label], actuals_validation))
stacked_low = np.percentile(stacked_values, 0.1)
stacked_high = np.percentile(stacked_values, 99.9)
bins = np.linspace(stacked_low, stacked_high, 100)
distr = np.histogram(preds_dict_valid[label], bins=bins)[0]
distr = np.clip(distr/distr.sum(), 0.001, 0.999)
true_distr = np.histogram(actuals_validation, bins=bins)[0]
true_distr = np.clip(true_distr/true_distr.sum(), 0.001, 0.999)
kl = entropy(distr, true_distr)
synthetic_summary_validation.loc[label, 'KL Divergence'] = kl
df_preds_train = pd.DataFrame([preds_dict_train['S Learner (LR)'].ravel(),
preds_dict_train['S Learner (XGB)'].ravel(),
preds_dict_train['T Learner (LR)'].ravel(),
preds_dict_train['T Learner (XGB)'].ravel(),
preds_dict_train['X Learner (LR)'].ravel(),
preds_dict_train['X Learner (XGB)'].ravel(),
preds_dict_train['R Learner (LR)'].ravel(),
preds_dict_train['R Learner (XGB)'].ravel(),
preds_dict_train['CEVAE'].ravel(),
preds_dict_train['generated_data']['tau'].ravel(),
preds_dict_train['generated_data']['w'].ravel(),
preds_dict_train['generated_data']['y'].ravel()],
index=['S Learner (LR)','S Learner (XGB)',
'T Learner (LR)','T Learner (XGB)',
'X Learner (LR)','X Learner (XGB)',
'R Learner (LR)','R Learner (XGB)',
'CEVAE','tau','w','y']).T
synthetic_summary_train['AUUC'] = auuc_score(df_preds_train).iloc[:-1]
df_preds_validation = pd.DataFrame([preds_dict_valid['S Learner (LR)'].ravel(),
preds_dict_valid['S Learner (XGB)'].ravel(),
preds_dict_valid['T Learner (LR)'].ravel(),
preds_dict_valid['T Learner (XGB)'].ravel(),
preds_dict_valid['X Learner (LR)'].ravel(),
preds_dict_valid['X Learner (XGB)'].ravel(),
preds_dict_valid['R Learner (LR)'].ravel(),
preds_dict_valid['R Learner (XGB)'].ravel(),
preds_dict_valid['CEVAE'].ravel(),
preds_dict_valid['generated_data']['tau'].ravel(),
preds_dict_valid['generated_data']['w'].ravel(),
preds_dict_valid['generated_data']['y'].ravel()],
index=['S Learner (LR)','S Learner (XGB)',
'T Learner (LR)','T Learner (XGB)',
'X Learner (LR)','X Learner (XGB)',
'R Learner (LR)','R Learner (XGB)',
'CEVAE','tau','w','y']).T
synthetic_summary_validation['AUUC'] = auuc_score(df_preds_validation).iloc[:-1]
synthetic_summary_train
synthetic_summary_validation
plot_gain(df_preds_train)
plot_gain(df_preds_validation)
```
This notebook shows the performance of the first attempt at learning a reward function.
First, load the trained reward network and set up the helper methods.
```
from baselines.common.vec_env import VecFrameStack
from LearningModel.AgentClasses import *
from baselines.common.cmd_util import make_vec_env
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from os import listdir
from os.path import isfile, join
import re
#load the reward network
trainedNetwork = RewardNetwork("")
trainedNetwork.load_state_dict(torch.load("/home/patrick/models/breakout-reward3/fullTest.params", map_location=torch.device('cpu')))
#setup the env
model_path = "/home/patrick/models/BreakoutNoFrameskip-v4-demonstrator3"
env_id = 'BreakoutNoFrameskip-v4'
env_type = 'atari'
env = make_vec_env(env_id, env_type, 1, 0,
wrapper_kwargs={
'clip_rewards': False,
'episode_life': False,
})
env = VecFrameStack(env, 4)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
agent = PPO2Agent(env, 'atari', True)
trainedNetwork.to(device)
#run the agent in the env once and save the ground truth reward and observations
def GetDemoFromAgent(agent, network, env):
trueReward = 0
learnedReward = 0
currentReward = 0
currentObservation = env.reset()
timeSteps = 0
done = False
#run the demo
while True:
trueReward += currentReward
shapedObservation = torch.from_numpy(currentObservation).float().to(device)
reward, abs_reward = network.predict_reward(shapedObservation)
learnedReward += reward.tolist()
action = agent.act(currentObservation, currentReward, done)
currentObservation, currentReward, done, info = env.step(action)
shapedObservations = currentObservation
timeSteps += 1
if done:
trueReward += currentReward
reward, abs_reward = network.predict_reward(shapedObservation)
learnedReward += reward.tolist()
break
print("{}, {}".format(trueReward, learnedReward))
return trueReward, learnedReward
#a method to find all the models in a given dir that are just numbers
def Find_all_Models(model_dir):
checkpoints = []
filesandDirs = listdir(model_dir)
allFiles = []
for i in filesandDirs:
if isfile(join(model_dir, i)):
allFiles.append(i)
for file in allFiles:
if re.match('^[0-9]+$',file.title()):
checkpoints.append(file.title())
return checkpoints
```
Now load all the models, and run each one to generate demonstrations to evaluate the reward network on.
```
trueRewards = []
learnedRewards = []
models = Find_all_Models(model_path)
for model in models:
agent.load(model_path + "/" + model)
trueReward, learnedReward = GetDemoFromAgent(agent, trainedNetwork, env)
tf.keras.backend.clear_session()
trueRewards.append(trueReward[0])
learnedRewards.append(learnedReward)
maxTrue = max(trueRewards)
minLearned = min(learnedRewards)
normalisedRewards = [x-minLearned for x in learnedRewards]
copyLearned = []
copyTrue = []
for i in range(len(normalisedRewards)):
if normalisedRewards[i] > 1000:
pass
else:
copyLearned.append(normalisedRewards[i])
copyTrue.append(trueRewards[i])
maxLearned = max(copyLearned)
copyLearned = np.array(copyLearned) / (maxLearned / maxTrue)  # lists don't support division, so convert first
from matplotlib.pyplot import figure
print("{},{}".format(maxTrue, max(normalisedRewards)))
figure(num=None, figsize=(10, 7), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(np.array(copyTrue), np.array(copyLearned), c='b')
plt.plot(np.arange(500), np.arange(500))
plt.ylabel("Learned reward")
plt.xlabel("Ground truth reward")
plt.title("Learned reward against ground truth reward")
plt.show()
minReward = [min(trueRewards)]
maxReward = [max(trueRewards)]
average = [sum(trueRewards) / len(trueRewards)]
from LearningModel.getAverageReward import *
agent.load("~/models/breakout-reward-RL3/breakout_50M_ppo2")
meanR, minR,maxR, std = getAvgReward(agent, env, 200)
minReward.append(minR)
maxReward.append(maxR)
average.append(meanR)
minReward[0] = min(trueRewards)
maxReward[0] = max(trueRewards)
average[0] = sum(trueRewards) /len(trueRewards)
print("mins: {}, maxs: {}, means: {}".format(minReward, maxReward, average))
# create plot
figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')
#fig, ax = plt.subplots()
index = np.arange(2)
bar_width = 0.3
opacity = 0.8
rects1 = plt.bar(index, minReward, bar_width,
alpha=opacity,
color='b',
label='Minimum Reward')
rects2 = plt.bar(index + bar_width, average, bar_width,
alpha=opacity,
color='g',
label='Average Reward')
rects3 = plt.bar(index + bar_width +bar_width, maxReward, bar_width,
alpha=opacity,
color='r',
label='Max Reward')
plt.xlabel('Agent')
plt.ylabel('Reward')
plt.title('The min, max and mean reward of the demonstrator and trained agent')
plt.xticks(index + bar_width, ('Demonstrations', 'Trained agent'))
plt.legend()
plt.tight_layout()
plt.show()
```
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch behaves in many ways like the arrays you know from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep learning is built on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts that approximate neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together, then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
For vectors, this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
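As a concrete sketch of the formula above (plain Python with made-up inputs, weights, and bias, before we introduce PyTorch tensors):

```python
import math

def sigmoid(z):
    """Sigmoid activation function for a scalar."""
    return 1 / (1 + math.exp(-z))

x = [0.5, -1.0]  # inputs
w = [0.7, 0.3]   # weights
b = 0.1          # bias

# weighted sum of the inputs plus the bias
z = sum(wi * xi for wi, xi in zip(w, x))  # 0.35 - 0.3 = 0.05
z = z + b                                 # 0.15
y = sigmoid(z)                            # unit output, between 0 and 1
```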
## Tensors
It turns out neural network computations are really just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). The fundamental data structure of neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated some data we can use to get the output of our simple network. This is all just random for now; going forward we'll use real data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, containing values randomly distributed according to the normal distribution with mean zero and standard deviation one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, and subtracted just like Numpy arrays; in general the behavior is very similar. They come with some nice benefits though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single-layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
```
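One possible solution to the exercise, using element-wise multiplication and `torch.sum` (shown as a standalone sketch, so it redefines the tensors from the set-up cell above):

```python
import torch

def activation(x):
    """Sigmoid activation function."""
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# weighted sum of the inputs plus the bias, passed through the sigmoid
y = activation(torch.sum(features * weights) + bias)
```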
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated by modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error:
```
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplication, the number of columns in the first tensor must equal the number of rows in the second. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. You'll be using it a lot.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will sometimes return a new tensor with the same data as `weights` and size `(a, b)`, and sometimes a clone, copying the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, the new elements will be uninitialized in memory. Note the underscore at the end of the method: it denotes an **in-place** operation. To learn more about in-place operations in PyTorch, see [this forum thread](https://discuss.pytorch.org/t/what-is-in-place-operation/16244).
* `weights.view(a, b)` will return a new tensor with the same data as `weights` and size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` with `weights.view(5, 1)` to have five rows and one column.

> **Exercise**: Calculate the output of the network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers, and layers into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here is the input, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated as:
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as:
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` and `W2`, and the biases `B1` and `B2`.
```
## Your solution here
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a network, the more layers a network has, the better it can learn patterns from the data and make accurate predictions.
## Numpy to Torch and back
Bonus! PyTorch can convert between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and the Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
```
```
# Numpy array matches new values from Tensor
a
```
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional, grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.
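As an illustrative sketch (these helpers are not part of the project template), the interlacing means pose `i` occupies rows `2*i` and `2*i+1` of `mu`, while landmark `j` occupies rows `2*N + 2*j` and `2*N + 2*j + 1`, where `N` is the number of poses:

```python
N = 2              # number of poses in the example above
num_landmarks = 2  # number of landmarks

def pose_rows(i):
    """Rows of mu holding (x, y) for pose i."""
    return 2*i, 2*i + 1

def landmark_rows(j, N):
    """Rows of mu holding (x, y) for landmark j (landmarks come after all poses)."""
    return 2*N + 2*j, 2*N + 2*j + 1

print(pose_rows(1))         # (2, 3) -> Px1, Py1
print(landmark_rows(0, N))  # (4, 5) -> Lx0, Ly0
```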
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations. In this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensor measurements and motions of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class that may look familiar from the first notebook.
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track the robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far, and with a certain amount of accuracy in measuring the distance between its location and the locations of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse the data in your implementation of `slam`.
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
    ''' This function takes in a number of time steps N, number of landmarks, and a world_size,
        and returns initialized constraint matrices, omega and xi.'''

    ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
    size = 2*N + 2*num_landmarks
    center = world_size/2.0

    ## Define the constraint matrix, Omega, with two initial "strength" values
    ## for the initial x, y location of our robot
    omega = np.zeros((size, size))
    omega[0][0] = 1
    omega[1][1] = 1

    ## Define the constraint *vector*, xi
    ## you can assume that the robot starts out in the middle of the world with 100% confidence
    xi = np.zeros((size, 1))
    xi[0] = center
    xi[1] = center

    return omega, xi
```
### Test as you go
It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
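To make this update pattern concrete, here is a minimal 1-D sketch (illustrative only, not part of the project code): one initial pose constraint `x0 = 50`, and one motion `dx = 5` with `motion_noise = 1`, added into a 2x2 `omega` and 2x1 `xi` exactly as described above, then solved for `mu`:

```python
import numpy as np

motion_noise = 1.0
dx = 5.0

# Initial pose constraint: x0 = 50 with full confidence
omega = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
xi = np.array([[50.0],
               [0.0]])

# Motion constraint x1 - x0 = dx, weighted by 1/motion_noise
omega[0][0] += 1.0/motion_noise
omega[1][1] += 1.0/motion_noise
omega[0][1] -= 1.0/motion_noise
omega[1][0] -= 1.0/motion_noise
xi[0] -= dx/motion_noise
xi[1] += dx/motion_noise

# Solve for the best estimate: mu = omega^-1 * xi
mu = np.linalg.inv(omega) @ xi
print(mu)  # [[50.], [55.]]
```

Measurement constraints follow the same add/subtract pattern, just between a pose row and a landmark row, weighted by `1/measurement_noise`.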
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
```
## Complete code implementing SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by the robot (all x,y poses) *and* all landmark locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):

    ## Use the initialization above to create constraint matrices, omega and xi
    omega, xi = initialize_constraints(N, num_landmarks, world_size)
    print("N: " + str(N))
    print("num_landmarks: " + str(num_landmarks))

    ## Iterate through each time step in the data
    ## and collect the motion and measurement data as you go
    measurements = []
    motions = []
    for i in range(len(data)):
        measurements.append(data[i][0])
        motions.append(data[i][1])

    ## Update the constraint matrix/vector to account for all *measurements*;
    ## this is a series of additions that take into account the measurement noise
    # note: iterating over range(N) would overshoot the length of measurements
    for t in range(len(measurements)):
        # number of measurements at this time step
        k_t = len(measurements[t])
        for i in range(k_t):
            index = measurements[t][i][0]
            # add value to Pxt, Pyt in the omega matrix
            omega[2*t][2*t]     += 1/measurement_noise
            omega[2*t+1][2*t+1] += 1/measurement_noise
            # add value to Lx_index, Ly_index in the omega matrix
            omega[2*t  ][2*N + 2*index  ] -= 1/measurement_noise
            omega[2*t+1][2*N + 2*index+1] -= 1/measurement_noise
            omega[2*N + 2*index  ][2*t  ] -= 1/measurement_noise
            omega[2*N + 2*index+1][2*t+1] -= 1/measurement_noise
            omega[2*N + 2*index  ][2*N + 2*index  ] += 1/measurement_noise
            omega[2*N + 2*index+1][2*N + 2*index+1] += 1/measurement_noise
            # add value to Pxt, Pyt in the xi vector
            xi[2*t  ] -= measurements[t][i][1]/measurement_noise
            xi[2*t+1] -= measurements[t][i][2]/measurement_noise
            # add value to Lx_index, Ly_index in the xi vector
            xi[2*N + 2*index  ] += measurements[t][i][1]/measurement_noise
            xi[2*N + 2*index+1] += measurements[t][i][2]/measurement_noise

    ## Update the constraint matrix/vector to account for all *motion* and motion noise
    for t in range(len(motions)):
        # add value to Pxt, Pyt and Pxt+1, Pyt+1 in the omega matrix
        omega[2*t  ][2*t  ] += 1/motion_noise
        omega[2*t+1][2*t+1] += 1/motion_noise
        omega[2*t+2][2*t+2] += 1/motion_noise
        omega[2*t+3][2*t+3] += 1/motion_noise
        omega[2*t  ][2*t+2] -= 1/motion_noise
        omega[2*t+1][2*t+3] -= 1/motion_noise
        omega[2*t+2][2*t  ] -= 1/motion_noise
        omega[2*t+3][2*t+1] -= 1/motion_noise
        # add value to Pxt, Pyt and Pxt+1, Pyt+1 in the xi vector
        # (motions are weighted by 1/motion_noise, matching the note above)
        xi[2*t  ] -= motions[t][0]/motion_noise
        xi[2*t+1] -= motions[t][1]/motion_noise
        xi[2*t+2] += motions[t][0]/motion_noise
        xi[2*t+3] += motions[t][1]/motion_noise

    ## After iterating through all the data,
    ## compute the best estimate of poses and landmark positions
    ## using the formula mu = omega_inverse * xi
    mu = np.dot(np.linalg.inv(omega), xi)

    return mu
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that display the estimated poses and landmark locations that your function has produced. First, given a result `mu` and the number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists.
Then, we define a function that nicely prints out these lists; both of these will be called in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
    # create a list of poses
    poses = []
    for i in range(N):
        poses.append((mu[2*i].item(), mu[2*i+1].item()))

    # create a list of landmarks
    # note: uses the global num_landmarks defined earlier in the notebook
    landmarks = []
    for i in range(num_landmarks):
        landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))

    # return completed lists
    return poses, landmarks


def print_all(poses, landmarks):
    print('\n')
    print('Estimated Poses:')
    for i in range(len(poses)):
        print('['+', '.join('%.3f'%p for p in poses[i])+']')
    print('\n')
    print('Estimated Landmarks:')
    for i in range(len(landmarks)):
        print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate positions for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, 1, 1)
#mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if mu is not None:
    # get the lists of poses and landmarks
    # and print them out
    poses, landmarks = get_poses_landmarks(mu, N)
    print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
    # print out the last pose
    print('Last pose: ', poses[-1])
    # display the last position of the robot *and* the landmark positions
    display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells, where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower or higher noise parameters?
**Answer**:
The true values are the following:

Robot: [x=79.66487 y=77.17812]
Landmarks: [[58, 42], [52, 28], [63, 30], [20, 14], [97, 53]]

The values calculated from `mu` are the following:

[74.730, 76.280]
[59.521, 46.661], [54.547, 29.230], [65.149, 31.642], [18.768, 12.173], [105.645, 56.867]

I think this is a good solution. When I set measurement_noise = motion_noise = 1, the results became the following:

[78.238, 77.175]
[58.782, 42.321], [53.187, 27.714], [63.788, 30.125], [18.768, 12.173], [98.351, 52.495]

I found that the results get closer to the true values if I lower the noise parameters.
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies, it could be a matter of floating point accuracy or of the calculation of the inverse matrix.
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, 
-18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 1.0, 1.0)
#mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 
10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 1.0, 1.0)
#mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from bokeh.plotting import figure, output_file, show
from sklearn.cluster import SpectralCoclustering  # in older scikit-learn: sklearn.cluster.bicluster
from bokeh.models import HoverTool, ColumnDataSource
from itertools import product
whisky = pd.read_csv('whiskies.txt')
whisky["Region"] = pd.read_csv('regions.txt')
whisky.head()
whisky.tail()
flavors = whisky.iloc[:, 2:14]
flavors
corr_flavors = pd.DataFrame.corr(flavors)
print(corr_flavors)
plt.figure(figsize=(10,10))
plt.pcolor(corr_flavors)
plt.colorbar()
plt.savefig("corr_flavors.pdf")
corr_whisky = pd.DataFrame.corr(flavors.transpose())
plt.figure(figsize=(10,10))
plt.pcolor(corr_whisky)
plt.colorbar()
plt.savefig("corr_whisky.pdf")
model = SpectralCoclustering(n_clusters=6, random_state=0)
model.fit(corr_whisky)
model.rows_
np.sum(model.rows_, axis=0)
model.row_labels_
whisky['Group'] = pd.Series(model.row_labels_, index=whisky.index)
whisky = whisky.iloc[np.argsort(model.row_labels_)]
whisky = whisky.reset_index(drop=True)
correlations = pd.DataFrame.corr(whisky.iloc[:,2:14].transpose())
correlations = np.array(correlations)
plt.figure(figsize = (14,7))
plt.subplot(121)
plt.pcolor(corr_whisky)
plt.title('Original')
plt.axis('tight')
plt.subplot(122)
plt.pcolor(correlations)
plt.title('Rearranged')
plt.axis('tight')
plt.savefig('correlations.pdf')
# Let's plot a simple 5x5 grid of squares, alternating in color as red and blue.
plot_values = [1, 2, 3, 4, 5]
plot_colors = ["red", "blue"]
# How do we tell Bokeh to plot each point in a grid? Let's use a function that
# finds each combination of values from 1-5.
from itertools import product
grid = list(product(plot_values, plot_values))
print(grid)
# The first value is the x coordinate, and the second value is the y coordinate.
# Let's store these in separate lists.
xs, ys = zip(*grid)
print(xs)
print(ys)
# Now we will make a list of colors, alternating between red and blue.
colors = [plot_colors[i % 2] for i in range(len(grid))]
print(colors)
# Finally, let's determine the strength of transparency (alpha) for each point,
# where 0 is completely transparent.
alphas = np.linspace(0, 1, len(grid))
# Bokeh likes each of these to be stored in a special dataframe, called
# ColumnDataSource. Let's store our coordinates, colors, and alpha values.
source = ColumnDataSource(
data={
"x": xs,
"y": ys,
"colors": colors,
"alphas": alphas,
}
)
# We are ready to make our interactive Bokeh plot!
output_file("Basic_Example.html", title="Basic Example")
fig = figure(tools="hover, save")
fig.rect("x", "y", 0.9, 0.9, source=source, color="colors", alpha="alphas")
hover = fig.select(dict(type=HoverTool))
hover.tooltips = {
"Value": "@x, @y",
}
show(fig)
cluster_colors = ["red", "orange", "green", "blue", "purple", "gray"]
regions = ["Speyside", "Highlands", "Lowlands", "Islands", "Campbelltown", "Islay"]
region_colors = dict(zip(regions, cluster_colors))
region_colors
distilleries = list(whisky.Distillery)
correlation_colors = []
for i in range(len(distilleries)):
for j in range(len(distilleries)):
if correlations[i, j] < 0.70: # if low correlation,
correlation_colors.append('white') # just use white.
else: # otherwise,
if whisky.Group[i] == whisky.Group[j]: # if the groups match,
correlation_colors.append(cluster_colors[whisky.Group[i]]) # color them by their mutual group.
else: # otherwise
correlation_colors.append('lightgray') # color them lightgray.
source = ColumnDataSource(
data = {
"x": np.repeat(distilleries,len(distilleries)),
"y": list(distilleries)*len(distilleries),
"colors": correlation_colors,
"alphas": correlations.flatten(),
"correlations": correlations.flatten(),
}
)
output_file("Whisky Correlations.html", title="Whisky Correlations")
fig = figure(title="Whisky Correlations",
x_axis_location="above", tools="hover,save",
x_range=list(reversed(distilleries)), y_range=distilleries)
fig.grid.grid_line_color = None
fig.axis.axis_line_color = None
fig.axis.major_tick_line_color = None
fig.axis.major_label_text_font_size = "5pt"
fig.xaxis.major_label_orientation = np.pi / 3
fig.rect('x', 'y', .9, .9, source=source,
color='colors', alpha='correlations')
hover = fig.select(dict(type=HoverTool))
hover.tooltips = {
"Whiskies": "@x, @y",
"Correlation": "@correlations",
}
show(fig)
points = [(0,0), (1,2), (3,1)]
xs, ys = zip(*points)
colors = ["red", "blue", "green"]
output_file("Spatial_Example.html", title="Regional Example")
location_source = ColumnDataSource(
data={
"x": xs,
"y": ys,
"colors": colors,
}
)
fig = figure(title = "Title",
x_axis_location = "above", tools="hover, save")
fig.plot_width = 300
fig.plot_height = 380
fig.circle("x", "y",size=10, source=location_source, color='colors', line_color=None)
hover = fig.select(dict(type = HoverTool))
hover.tooltips = {
"Location": "(@x, @y)"
}
show(fig)
def location_plot(title, colors):
output_file(title + ".html")
location_source = ColumnDataSource(
data={
"x": whisky[" Latitude"],
"y": whisky[" Longitude"],
"colors": colors,
"regions": whisky.Region,
"distilleries": whisky.Distillery
}
)
fig = figure(title=title,
x_axis_location="above", tools="hover, save")
fig.plot_width = 400
fig.plot_height = 500
fig.circle("x", "y", size=9, source=location_source, color='colors', line_color=None)
fig.xaxis.major_label_orientation = np.pi / 3
hover = fig.select(dict(type=HoverTool))
hover.tooltips = {
"Distillery": "@distilleries",
"Location": "(@x, @y)"
}
show(fig)
region_cols = [region_colors[i] for i in list(whisky["Region"])]
location_plot("Whisky Locations and Regions", region_cols)
region_cols = [region_colors[i] for i in list(whisky.Region)]
classification_cols = [cluster_colors[i] for i in list(whisky.Group)]
location_plot("Whisky Locations and Regions", region_cols)
location_plot("Whisky Locations and Groups", classification_cols)
```
```
import datetime as dt
import panel as pn
pn.extension()
```
The ``DateRangeSlider`` widget allows selecting a date range using a slider with two handles.
For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Links.ipynb).
#### Parameters:
For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
##### Core
* **``start``** (datetime): The range's lower bound
* **``end``** (datetime): The range's upper bound
* **``value``** (tuple): Tuple of the lower and upper bounds of the selected range expressed as datetime types
* **``value_throttled``** (tuple): Tuple of the lower and upper bounds of the selected range expressed as datetime types, throttled until mouseup
##### Display
* **``bar_color``** (color): Color of the slider bar as a hexadecimal RGB value
* **``callback_policy``** (str, **DEPRECATED**): Policy to determine when slider events are triggered (one of 'continuous', 'throttle', 'mouseup')
* **``callback_throttle``** (int): Number of milliseconds to pause between callback calls as the slider is moved
* **``direction``** (str): Whether the slider should go from left to right ('ltr') or right to left ('rtl')
* **``disabled``** (boolean): Whether the widget is editable
* **``name``** (str): The title of the widget
* **``orientation``** (str): Whether the slider should be displayed in a 'horizontal' or 'vertical' orientation.
* **``tooltips``** (boolean): Whether to display tooltips on the slider handle
___
The slider start and end can be adjusted by dragging the handles and whole range can be shifted by dragging the selected range.
```
date_range_slider = pn.widgets.DateRangeSlider(
name='Date Range Slider',
start=dt.datetime(2017, 1, 1), end=dt.datetime(2019, 1, 1),
value=(dt.datetime(2017, 1, 1), dt.datetime(2018, 1, 10))
)
date_range_slider
```
``DateRangeSlider.value`` returns a tuple of datetime values that can be read out and set like other widgets:
```
date_range_slider.value
```
### Controls
The `DateRangeSlider` widget exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
```
pn.Row(date_range_slider.controls(jslink=True), date_range_slider)
```
### Imports
```
import pandas as pd
import os
import numpy as np
from category_encoders import TargetEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
```
### Load Data
```
root_dir = os.path.dirname(os.getcwd())
data_dir = os.path.join(root_dir, 'data')
full_raw_initial_dataset_path = os.path.join(data_dir, 'gx_merged_lags_months.csv')
volume_path = os.path.join(data_dir, 'gx_volume.csv')
train_path = os.path.join(data_dir, 'train_split.csv')
features_path = os.path.join(data_dir, 'features')
volume = pd.read_csv(volume_path, index_col=0)
train_ids = pd.read_csv(train_path)
full_raw_initial_dataset = pd.read_csv(full_raw_initial_dataset_path)
full_initial_dataset = full_raw_initial_dataset.loc[
full_raw_initial_dataset.test == 0,:].drop(columns = 'test').drop_duplicates()
full_initial_dataset
```
### Functions
```
def find_closest_volume(country, brand, month_num, length_serie, func):
ind = (volume.country == country) & (volume.brand == brand) & (volume.month_num <month_num)
volume_filter = volume.loc[ind, :]
volume_sorted = volume_filter.sort_values(by=['month_num'], ascending=False)
volume_sorted.reset_index(inplace=True, drop=True)
total_obs = len(volume_sorted)
total_to_select = length_serie if length_serie<=total_obs else total_obs
volumes_selected = volume_sorted.volume[:total_to_select].values
return func(volumes_selected)
```
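As a quick sanity check, the helper can be exercised against a small hypothetical stand-in for the `volume` table (the values below are made up, not from `gx_volume.csv`); the function is restated so the sketch is self-contained:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the notebook's `volume` table.
volume = pd.DataFrame({
    'country': ['A', 'A', 'A', 'A'],
    'brand': ['x', 'x', 'x', 'x'],
    'month_num': [-3, -2, -1, 0],
    'volume': [10.0, 20.0, 30.0, 40.0],
})

def find_closest_volume(country, brand, month_num, length_serie, func):
    # Same logic as above: take up to `length_serie` of the most recent
    # volumes strictly before `month_num` and aggregate them with `func`.
    ind = (volume.country == country) & (volume.brand == brand) & (volume.month_num < month_num)
    volume_sorted = volume.loc[ind, :].sort_values(by=['month_num'], ascending=False)
    volume_sorted.reset_index(inplace=True, drop=True)
    total_to_select = min(length_serie, len(volume_sorted))
    return func(volume_sorted.volume[:total_to_select].values)

# Mean of the two most recent volumes before month 0: (30 + 20) / 2
print(find_closest_volume('A', 'x', 0, 2, np.mean))  # -> 25.0
```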
### Create initial datasets
```
train = train_ids.merge(
full_initial_dataset,
'inner',
on=['country', 'brand']
)
# sanity checks
assert len(train.loc[:,['country', 'brand', 'month_num']].drop_duplicates()) == \
len(train), 'duplicated'
```
### Features
#### Add feature
$$vol_{-1}$$
* Name: volume_1
```
volume_at_1 = volume.loc[volume.month_num == -1, ['country', 'brand', 'volume']].\
drop_duplicates().\
rename(columns={'volume':'volume_1'})
full_with_volume_1 = full_initial_dataset.merge(
volume_at_1,
'left',
on=['country', 'brand']
)
assert len(full_initial_dataset) == len(full_with_volume_1), 'There are duplicates'
```
#### Add feature
$$\log\Big(\frac{vol_{t} + 1}{vol_{-1}}\Big)$$
* Name: log_relative_volume
```
train_with_relative_volume = train.merge(
volume_at_1,
'left',
on=['country', 'brand']
)
train_with_relative_volume['log_relative_volume'] = np.log(
(train_with_relative_volume.volume+1)/(train_with_relative_volume.volume_1)
)
train_with_relative_volume.sort_values(by=['country', 'brand', 'month_num'], inplace=True)
train_with_relative_volume['lag_log_relative_volume'] = train_with_relative_volume.groupby(
['country', 'brand'])['log_relative_volume'].shift(1)
train_with_relative_volume['lag_log_relative_volume'] = np.where(
train_with_relative_volume.month_num == 0,
np.log((1+train_with_relative_volume.volume_1)/train_with_relative_volume.volume_1),
train_with_relative_volume.lag_log_relative_volume
)
features = train_with_relative_volume.drop(columns=['volume', 'log_relative_volume'])
target = train_with_relative_volume['log_relative_volume']
categorical_cols = ['country', 'brand', 'therapeutic_area', 'presentation', 'month_name']
te = TargetEncoder(cols=categorical_cols)
pipe = Pipeline([
("te", te),
("imp", SimpleImputer(strategy="mean")),
("sc", StandardScaler()),
("model", Lasso(alpha=0.001, max_iter=2000))
])
pipe.fit(features, target)
pipe[-1].coef_
def get_log_relative_volume(model, features):
features_copy = features.copy()
features_copy.sort_values(by=['country', 'brand', 'month_num'], inplace=True)
features_copy['log_relative_volume'] = float('-inf')
i=0
for index, row in features_copy.iterrows():
if(i%5000 == 0):
print('Iteration:', i)
country = row.country
brand = row.brand
month = row.month_num
if month==0:
row.at['lag_log_relative_volume'] = 0
else:
ind = (features_copy.brand == brand) &\
(features_copy.country == country) &\
(features_copy.month_num == month-1)
lag_log_relative_volume = features_copy.loc[ind, 'log_relative_volume']
row.at['lag_log_relative_volume'] = lag_log_relative_volume
df = row.to_frame().T.drop(columns=['log_relative_volume'])
pred_val = model.predict(df)
ind = (features_copy.brand == brand) & (features_copy.country == country) & (features_copy.month_num == month)
features_copy.loc[ind, 'log_relative_volume'] = pred_val[0]
i+=1
return features_copy
preds = get_log_relative_volume(
pipe,
full_with_volume_1.loc[:, features.columns[:-1]]
)
assert len(preds) == len(full_with_volume_1), 'Duplicated'
assert sum(preds['log_relative_volume'].isna()) == 0, 'Missing'
assert sum(preds['log_relative_volume'].isnull()) == 0, 'Missing'
features_df = preds.loc[
:,
['country', 'brand', 'month_num', 'volume_1', 'log_relative_volume']].drop_duplicates()
assert len(features_df) == len(features_df.loc[:, ['country', 'brand', 'month_num']].drop_duplicates()), 'Duplicates'
features_df.to_csv(os.path.join(features_path, 'feat_01.csv'), index=False)
```
#### Add feature
$$\log\Big(\frac{vol_{t} + 1}{vol_{t-1} + 1}\Big)$$
* Name: log_relative_volume_previous
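As a small numeric illustration of this formula (the volumes below are hypothetical, not from the dataset):

```python
import numpy as np

# Hypothetical consecutive monthly volumes for one (country, brand) series.
vol_t, vol_prev = 120.0, 100.0

# log((vol_t + 1) / (vol_{t-1} + 1)); the +1 guards against zero volumes.
log_relative_volume_previous = np.log((vol_t + 1) / (vol_prev + 1))
print(log_relative_volume_previous)  # ~0.18
```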
```
train_with_predicted_log_relative_volume = train.merge(
preds.loc[:, ['country', 'brand', 'month_num', 'volume_1', 'log_relative_volume']],
'inner',
on=['country', 'brand', 'month_num']
)
assert len(train_with_predicted_log_relative_volume) == len(train), 'Duplicated values'
volume_previous_month = train_with_predicted_log_relative_volume.copy()
volume_previous_month['previous_month'] = volume_previous_month.month_num - 1
volume_previous_month = volume_previous_month.merge(
volume.loc[: , ['country', 'brand', 'volume', 'month_num']].rename(
columns={'volume':'volume_lag_1', 'month_num':'previous_month'}
),
'left',
on=['country', 'brand', 'previous_month']
).merge(
volume.loc[volume.month_num == -2, ['country', 'brand', 'volume']].rename(
columns={'volume':'volume_2'}
),
'left',
on=['country', 'brand']
)
assert len(volume_previous_month) == len(train_with_predicted_log_relative_volume), 'Duplicated values'
assert sum(volume_previous_month.volume_lag_1.isna()) == 0, 'NA values'
assert sum(volume_previous_month.volume_lag_1.isnull()) == 0, 'NA values'
assert sum(volume_previous_month.volume_2.isna()) == 0, 'NA values'
assert sum(volume_previous_month.volume_2.isnull()) == 0, 'NA values'
volume_previous_month['log_relative_volume_previous'] = np.log(
(volume_previous_month.volume + 1)/(volume_previous_month.volume_lag_1 + 1)
)
volume_previous_month['log_relative_volume_1'] = np.log(
(volume_previous_month.volume_1 + 1)/(volume_previous_month.volume_2 + 1)
)
assert sum(volume_previous_month.log_relative_volume_previous.isna()) == 0, 'log_relative_volume_previous contains NA values'
assert sum(volume_previous_month.log_relative_volume_previous.isnull()) == 0, 'log_relative_volume_previous contains null values'
assert sum(volume_previous_month.log_relative_volume_previous == np.inf) == 0, 'log_relative_volume_previous contains inf values'
assert sum(volume_previous_month.log_relative_volume_previous == -np.inf) == 0, 'log_relative_volume_previous contains -inf values'
assert sum(volume_previous_month.log_relative_volume_1.isna()) == 0, 'relative_volume_1 contains NA values'
assert sum(volume_previous_month.log_relative_volume_1.isnull()) == 0, 'relative_volume_1 contains null values'
assert sum(volume_previous_month.log_relative_volume_1 == np.inf) == 0, 'relative_volume_1 contains inf values'
assert sum(volume_previous_month.log_relative_volume_1 == -np.inf) == 0, 'relative_volume_1 contains -inf values'
volume_previous_month['lag_log_relative_volume_previous'] = volume_previous_month.groupby(
['country', 'brand'])['log_relative_volume_previous'].shift(1)
volume_previous_month['lag_log_relative_volume_previous'] = np.where(
volume_previous_month.month_num == 0,
volume_previous_month.log_relative_volume_1,
volume_previous_month.lag_log_relative_volume_previous
)
volume_previous_month
cols = list(preds.columns) + ['lag_log_relative_volume_previous']
features = volume_previous_month.loc[:, cols]
target = volume_previous_month.log_relative_volume_previous
categorical_cols = ['country', 'brand', 'therapeutic_area', 'presentation', 'month_name']
te = TargetEncoder(cols=categorical_cols)
pipe2 = Pipeline([
("te", te),
("imp", SimpleImputer(strategy="mean")),
("sc", StandardScaler()),
("model", Lasso(alpha=0.001, max_iter=2000))
])
pipe2.fit(features, target)
def get_log_relative_volume_previous(model, features):
features_copy = features.copy()
features_copy.sort_values(by=['country', 'brand', 'month_num'], inplace=True)
features_copy['log_relative_volume_previous'] = float('-inf')
i=0
for index, row in features_copy.iterrows():
if(i%5000 == 0):
print('Iteration:', i)
country = row.country
brand = row.brand
month = row.month_num
if month == 0:
volume_1 = find_closest_volume(country, brand, 0, 1, np.mean)
volume_2 = find_closest_volume(country, brand, -1, 1, np.mean)
lag_log_relative_volume_previous = np.log((volume_1 + 1)/(volume_2+1))
else:
ind = (features_copy.country == country) &\
(features_copy.brand == brand) &\
(features_copy.month_num == month -1)
lag_log_relative_volume_previous = features_copy.loc[ind, 'log_relative_volume_previous']
row.at['lag_log_relative_volume_previous'] = lag_log_relative_volume_previous
df = row.to_frame().T.drop(columns=['log_relative_volume_previous'])
pred_val = model.predict(df)
ind = (features_copy.brand == brand) & (features_copy.country == country) & (features_copy.month_num == month)
features_copy.loc[ind, 'log_relative_volume_previous'] = pred_val[0]
i+=1
return features_copy
preds2 = get_log_relative_volume_previous(pipe2, preds)
preds2
features_df = preds2.loc[
:,
['country', 'brand', 'month_num', 'volume_1', 'log_relative_volume', 'log_relative_volume_previous']].drop_duplicates()
features_df.to_csv(os.path.join(features_path, 'feat_02.csv'), index=False)
```
**Important**: Click on "*Kernel*" > "*Restart Kernel and Clear All Outputs*" *before* reading this chapter in [JupyterLab <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyterlab.readthedocs.io/en/stable/)
# An Introduction to Python and Programming
This course is a *thorough* introduction to programming in [Python <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://www.python.org/).
It teaches the concepts behind and the syntax of the core Python language as defined by the [Python Software Foundation <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://www.python.org/psf/) in the official [language reference <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/reference/index.html). Furthermore, it introduces commonly used functionalities from the [standard library <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/index.html) and popular third-party libraries like [numpy <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://www.numpy.org/), [pandas <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://pandas.pydata.org/), [matplotlib <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://matplotlib.org/), and others.
<img src="static/logo.png" width="15%" align="left">
## Prerequisites
This course is suitable for *total beginners*, and there are *no* formal prerequisites. The student only needs to have:
- a *solid* understanding of the **English language**,
- knowledge of **basic mathematics** from high school,
- the ability to **think conceptually** and **reason logically**, and
- the willingness to **invest time and effort in this course**.
## Objective
The **main goal** of this introduction is to **prepare** the student **for further studies** in the "field" of **data science**.
These include but are not limited to topics such as:
- linear algebra
- statistics & econometrics
- data cleaning & wrangling
- data visualization
- data engineering (incl. SQL databases)
- data mining (incl. web scraping)
- feature generation, machine learning, & deep learning
- optimization & (meta-)heuristics
- algorithms & data structures
- quantitative finance (e.g., option valuation)
- quantitative marketing (e.g., customer segmentation)
- quantitative supply chain management (e.g., forecasting)
- management science & decision models
- backend/API/web development (to serve data products to clients)
### Why data science?
The term **[data science <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Data_science)** is rather vague and does *not* refer to an academic discipline. Instead, the term was popularized by the tech industry, which also coined non-meaningful job titles such as "[rockstar](https://www.quora.com/Why-are-engineers-called-rockstars-and-ninjas)" or "[ninja developers](https://www.quora.com/Why-are-engineers-called-rockstars-and-ninjas)." Most *serious* definitions describe the field as being **multi-disciplinary** *integrating* scientific methods, algorithms, and systems thinking to extract knowledge from structured and unstructured data, *and* also emphasize the importance of **[domain knowledge <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Domain_knowledge)**.
Recently, this integration aspect feeds back into the academic world. The [MIT](https://www.mit.edu/), for example, created the new [Stephen A. Schwarzman College of Computing](http://computing.mit.edu) for [artificial intelligence <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Artificial_intelligence) with a 1 billion dollar initial investment where students undergo a "bilingual" curriculum with half the classes in quantitative and method-centric fields - like the ones mentioned above - and the other half in domains such as biology, business, chemistry, politics, (art) history, or linguistics (cf., the [official Q&As](http://computing.mit.edu/faq/) or this [NYT article](https://www.nytimes.com/2018/10/15/technology/mit-college-artificial-intelligence.html)). Their strategists see a future where programming skills are just as naturally embedded into students' curricula as are nowadays subjects like calculus, statistics, or academic writing. Then, programming literacy is not just another "nice to have" skill but a prerequisite, or an enabler, to understanding more advanced topics in the actual domains studied. Top-notch researchers who use programming in their day-to-day lives could then teach students more efficiently in their "language."
## Installation
To "read" this book in the most meaningful way, a working installation of **Python 3.7** or higher is expected.
A popular and beginner-friendly way is to install the [Anaconda Distribution](https://www.anaconda.com/distribution/) that not only ships Python and the standard library but comes pre-packaged with a lot of third-party libraries from the so-called "scientific stack." Just go to the [download](https://www.anaconda.com/download/) page and install the latest version (i.e., *2019-10* with Python 3.7 at the time of this writing) for your operating system.
Then, among others, you will find an entry "Anaconda Navigator" in your start menu like below. Click on it.
<img src="static/anaconda_start_menu.png" width="30%">
A window opens showing you several applications that come with the Anaconda Distribution. Now, click on "JupyterLab."
<img src="static/anaconda_navigator.png" width="50%">
A new tab in your web browser opens with the website being "localhost" and some number (e.g., 8888). This is the [JupyterLab <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyterlab.readthedocs.io/en/stable/) application that is used to display and run [Jupyter notebooks <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html) as described below. On the left, you see the files and folders in your local user folder. This file browser works like any other. In the center, you have several options to launch (i.e., "create") new files.
<img src="static/jupyter_lab.png" width="50%">
## Jupyter Notebooks
The document you are viewing is a so-called [Jupyter notebook <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html), a file format introduced by the [Jupyter Project <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyter.org/).
"Jupyter" is an [acronym <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Acronym) derived from the names of the three major programming languages **[Julia](https://julialang.org/)**, **[Python <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://www.python.org)**, and **[R](https://www.r-project.org/)**, all of which play significant roles in the world of data science. The Jupyter Project's idea is to serve as an integrating platform such that different programming languages and software packages can be used together within the same project easily.
Furthermore, Jupyter notebooks have become a de-facto standard for communicating and exchanging results in the data science community - both in academia and business - and provide an alternative to terminal-based ways of running Python (e.g., the default [Python interpreter <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/tutorial/interpreter.html) as shown below or a more advanced interactive version like [IPython <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://ipython.org/)) or a full-fledged [Integrated Development Environment <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Integrated_development_environment) (e.g., the commercial [PyCharm](https://www.jetbrains.com/pycharm/) or the free [Spyder <img height="12" style="display: inline-block" src="static/link_to_gh.png">](https://github.com/spyder-ide/spyder) that comes with the Anaconda Distribution).
<img src="static/terminal.png" width="50%">
Jupyter notebooks allow mixing formatted English with Python code in the same document. Text is formatted with the [Markdown <img height="12" style="display: inline-block" src="static/link_to_gh.png">](https://guides.github.com/features/mastering-markdown/) language and mathematical formulas are typeset with [LaTeX](https://www.overleaf.com/learn/latex/Free_online_introduction_to_LaTeX_%28part_1%29). Moreover, we may include pictures, plots, and even videos. Because of these features, the notebooks developed for this book come in a self-contained "tutorial" style enabling students to learn and review the material on their own.
### Markdown vs. Code Cells
A Jupyter notebook consists of cells that have a type associated with them. So far, only cells of type "Markdown" have been used, which is the default way to present formatted text.
The cell below is an example of a "Code" cell containing a line of actual Python code: It merely outputs the text "Hello world" when executed. To edit an existing code cell, enter into it with a mouse click. You know that you are "in" a code cell when its frame is highlighted in blue.
Besides this **edit mode**, there is also a so-called **command mode** that you reach by hitting the "Escape" key *after* entering a code cell, which un-highlights the frame. Using the "Enter" and "Escape" keys, you can now switch between the two modes.
To *execute*, or "*run*," a code cell, hold the "Control" key and press "Enter." Note that you do *not* go to the subsequent cell. Alternatively, you can hold the "Shift" key and press "Enter," which executes the cell *and* places your focus on the subsequent cell.
Similarly, a Markdown cell is also in either edit or command mode. For example, double-click on the text you are reading: This puts you into edit mode. Now, you could change the formatting (e.g., make a word printed in *italics* or **bold** with single or double asterisks) and "execute" the cell to render the text as specified.
To change a cell's type, choose either "Code" or "Markdown" in the navigation bar at the top. Alternatively, you can hit either the "Y" or "M" key on your keyboard when in command mode to make the focused cell a code or markdown cell.
```
print("Hello world")
```
Sometimes, a code cell starts with an exclamation mark `!`. Then, the Jupyter notebook behaves as if the following command were typed directly into a terminal. The cell below asks `python` to show its version number and is *not* Python code but a command in the [Shell <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Shell_%28computing%29) language. The `!` is useful to execute short commands without leaving a Jupyter notebook.
```
!python --version
```
## Why Python?
### What is Python?
Here is a brief history of and some background on Python (cf., also this [TechRepublic article](https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/) for a more elaborate story):
- [Guido van Rossum <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Guido_van_Rossum) (Python’s **[Benevolent Dictator for Life <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)**) was bored during a week around Christmas 1989 and started Python as a hobby project "that would keep \[him\] occupied" for some days
- the idea was to create a **general-purpose** scripting **language** that would allow fast *prototyping* and would *run on every operating system*
- Python grew through the 90s as van Rossum promoted it via his "Computer Programming for Everybody" initiative that had the *goal to encourage a basic level of coding literacy* as an equal knowledge alongside English literacy and math skills
- to become more independent from its creator, the next major version **Python 2** - released in 2000 and still in heavy use as of today - was **open-source** from the get-go which attracted a *large and global community of programmers* that *contributed* their expertise and best practices in their free time to make Python even better
- **Python 3** resulted from a significant overhaul of the language in 2008 taking into account the *learnings from almost two decades*, streamlining the language, and getting ready for the age of **big data**
- the language is named after the sketch comedy group [Monty Python <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Monty_Python)
#### Summary
Python is a **general-purpose** programming **language** that allows for *fast development*, is *easy to read*, **open-source**, long-established, unifies the knowledge of *hundreds of thousands of experts* around the world, runs on basically every machine, and can handle the complexities of applications involving **big data**.
### Isn't C a lot faster?
While it is true that a language like C is a lot faster than Python when it comes to *pure* **computation time**, this does not matter in many cases as the *significantly shorter* **development cycles** are the more significant cost factor in a rapidly changing world.
### Who uses it?
<img src="static/logos.png" width="70%">
While ad-hominem arguments are usually not the best kind of reasoning, we briefly look at some examples of who uses Python and leave it up to the reader to decide if this is convincing or not:
- **[Massachusetts Institute of Technology](https://www.mit.edu/)**
- teaches Python in its [introductory course](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0001-introduction-to-computer-science-and-programming-in-python-fall-2016/) to computer science independent of the student's major
- replaced the infamous course on the [Scheme](https://groups.csail.mit.edu/mac/projects/scheme/) language (cf., [source <img height="12" style="display: inline-block" src="static/link_to_hn.png">](https://news.ycombinator.com/item?id=602307))
- **[Google](https://www.google.com/)**
- used the strategy "Python where we can, C++ where we must" from its early days on to stay flexible in a rapidly changing environment (cf., [source <img height="12" style="display: inline-block" src="static/link_to_so.png">](https://stackoverflow.com/questions/2560310/heavy-usage-of-python-at-google))
- the very first web-crawler was written in Java and so difficult to maintain that it was rewritten in Python right away (cf., [source](https://www.amazon.com/Plex-Google-Thinks-Works-Shapes/dp/1416596585/ref=sr_1_1?ie=UTF8&qid=1539101827&sr=8-1&keywords=in+the+plex))
- Guido van Rossum was hired by Google from 2005 to 2012 to advance the language there
- **[NASA](https://www.nasa.gov/)** open-sources many of its projects, often written in Python and regarding analyses with big data (cf., [source](https://code.nasa.gov/language/python/))
- **[Facebook](https://facebook.com/)** uses Python besides C++ and its legacy PHP (a language for building websites; the "cool kid" from the early 2000s)
- **[Instagram](https://instagram.com/)** operates the largest installation of the popular **web framework [Django](https://www.djangoproject.com/)** (cf., [source](https://instagram-engineering.com/web-service-efficiency-at-instagram-with-python-4976d078e366))
- **[Spotify](https://spotify.com/)** bases its data science on Python (cf., [source](https://labs.spotify.com/2013/03/20/how-we-use-python-at-spotify/))
- **[Netflix](https://netflix.com/)** also runs its predictive models on Python (cf., [source](https://medium.com/netflix-techblog/python-at-netflix-86b6028b3b3e))
- **[Dropbox](https://dropbox.com/)** "stole" Guido van Rossum from Google to help scale the platform (cf., [source](https://medium.com/dropbox-makers/guido-van-rossum-on-finding-his-way-e018e8b5f6b1))
- **[JPMorgan Chase](https://www.jpmorganchase.com/)** requires new employees to learn Python as part of the onboarding process starting with the 2018 intake (cf., [source](https://www.ft.com/content/4c17d6ce-c8b2-11e8-ba8f-ee390057b8c9?segmentId=a7371401-027d-d8bf-8a7f-2a746e767d56))
As images tell more than words, here are two plots of popular languages' "market shares" based on the number of questions asked on [Stack Overflow <img height="12" style="display: inline-block" src="static/link_to_so.png">](https://stackoverflow.blog/2017/09/06/incredible-growth-python/), the most relevant platform for answering programming-related questions: As of late 2017, Python surpassed [Java](https://www.java.com/en/), heavily used in big corporates, and [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript), the "language of the internet" that does everything in web browsers, in popularity. Two blog posts from "technical" people explain this in more depth to the layman: [Stack Overflow <img height="12" style="display: inline-block" src="static/link_to_so.png">](https://stackoverflow.blog/2017/09/14/python-growing-quickly/) and [DataCamp](https://www.datacamp.com/community/blog/python-scientific-computing-case).
<img src="static/growth_major_languages.png" width="50%">
As the graph below shows, neither Google's very own language **[Go](https://golang.org/)** nor **[R](https://www.r-project.org/)**, a domain-specific language in the niche of statistics, can compete with Python's year-to-year growth.
<img src="static/growth_smaller_languages.png" width="50%">
[IEEE Spectrum](https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019) provides a more recent comparison of programming languages' popularity. Even news and media outlets notice the recent popularity of Python: [Economist](https://www.economist.com/graphic-detail/2018/07/26/python-is-becoming-the-worlds-most-popular-coding-language), [Huffington Post](https://www.huffingtonpost.com/entry/why-python-is-the-best-programming-language-with-which_us_59ef8f62e4b04809c05011b9), [TechRepublic](https://www.techrepublic.com/article/why-python-is-so-popular-with-developers-3-reasons-the-language-has-exploded/), and [QZ](https://qz.com/1408660/the-rise-of-python-as-seen-through-a-decade-of-stack-overflow/).
## Contents
- *Chapter 0*: Introduction
- **Part A: Expressing Logic**
- *Chapter 1*: Elements of a Program
- *Chapter 2*: Functions & Modularization
- *Chapter 3*: Conditionals & Exceptions
- *Chapter 4*: Recursion & Looping
## How to learn Programming
Do you remember how you first learned to speak in your mother tongue? Probably not.
Your earliest childhood memory probably dates to around the age of three or four, when you could already say simple things and interact with your environment.
Although you did not know any grammar rules yet, other people just understood what you said. At least most of the time.
It is intuitively best to take the very mindset of a small child when learning a new language. And a programming language is no different from that.
This first chapter introduces simplistic examples and we accept them as they are *without* knowing any of the "grammar" rules yet. Then, we analyze them in parts and slowly build up our understanding.
Consequently, if parts of this chapter do not make sense right away, let's not worry too much. Besides introducing the basic elements, it also serves as an outlook for what is to come. So, many terms and concepts used here are deconstructed in great detail in the following chapters.
## Example: Averaging all even Numbers in a List
As our introductory example, we want to calculate the *average* of all *evens* in a **list** of whole numbers: `[7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]`.
While we are used to finding an [analytical solution](https://math.stackexchange.com/questions/935405/what-s-the-difference-between-analytical-and-numerical-approaches-to-problems/935446#935446) in math (i.e., derive some equation with "pen and paper"), we solve this task *programmatically* instead.
We start by creating a list called `numbers` that holds all the individual numbers between **brackets** `[` and `]`.
```
numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]
```
To verify that something happened in our computer's memory, we **reference** `numbers`.
```
numbers
```
So far, so good. Let's see how the desired **computation** could be expressed as a **sequence of instructions** in the next code cell.
Intuitively, the line `for number in numbers` describes a "loop" over all the numbers in the `numbers` list, one at a time.
The `if number % 2 == 0` may look confusing at first sight. Both `%` and `==` must have an unintuitive meaning here. Luckily, the **comment** in the same line after the `#` symbol has the answer: The program does something only for an even `number`.
In particular, it increases `count` by `1` and adds the current `number` onto the [running <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Running_total) `total`. Both `count` and `total` are **initialized** to `0` and the single `=` symbol reads as "... is *set* equal to ...". It cannot indicate a mathematical equation as, for example, `count` is generally *not* equal to `count + 1`.
Lastly, the `average` is calculated as the ratio of the final **values** of `total` and `count`. Overall, we divide the sum of all even numbers by their count: This is nothing but the definition of an average.
The lines of code "within" the `for` and `if` **statements** are **indented** and aligned with multiples of *four spaces*: This shows immediately how the lines relate to each other.
```
count = 0  # initialize variables to keep track of the
total = 0  # running total and the count of even numbers

for number in numbers:
    if number % 2 == 0:  # only work with even numbers
        count = count + 1
        total = total + number

average = total / count
```
We do not see any **output** yet but obtain the value of `average` by referencing it again.
```
average
```
## Output in a Jupyter Notebook
Only two of the previous four code cells generate an **output**, while the other two remain "silent" (i.e., nothing appears below the cell after running it).
By default, Jupyter notebooks only show the value of the **expression** in the last line of a code cell. And, this output may also be suppressed by ending the last line with a semicolon `;`.
```
"Hello, World!"
"I am feeling great :-)"
"I am invisible!";
```
To see any output other than that, we use the built-in [print() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#print) **function**. Here, the parentheses `()` indicate that we **call** (i.e., "execute") code written somewhere else.
```
print("Hello, World!")
print("I am feeling great :-)")
```
Outside Jupyter notebooks, the semicolon `;` is used as a **separator** between statements that must otherwise be on a line on their own. However, it is *not* considered good practice to use it as it makes code less readable.
```
print("Hello, World!"); print("I am feeling great :-)")
```
### Jupyter Notebook Aspects
#### The Order of Code Cells is arbitrary
We can run the code cells in a Jupyter notebook in *any* arbitrary order.
That means, for example, that a variable defined towards the bottom could accidentally be referenced at the top of the notebook. This happens quickly when we iteratively build a program and go back and forth between cells.
As a good practice, it is recommended to click on "Kernel" > "Restart Kernel and Run All Cells" in the navigation bar once a notebook is finished. That restarts the Python process forgetting all **state** (i.e., all variables) and ensures that the notebook runs top to bottom without any errors the next time it is opened.
#### Notebooks are linear
While this book is built with Jupyter notebooks, it is crucial to understand that "real" programs are almost never "linear" (i.e., top to bottom) sequences of instructions but instead may take many different **flows of execution**.
At the same time, for a beginner's course, it is often easier to code linearly.
In real data science projects, one would probably employ a mixed approach and put reusable code into so-called Python modules (i.e., *.py* files; cf., Chapter 2) and then use Jupyter notebooks to build up a linear report or storyline for an analysis.
## How to learn Programming
### ABC Rule
**A**lways **b**e **c**oding.
Programming is more than just writing code into a text file. It means reading through parts of the [documentation <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/), blogs with best practices, and tutorials, or researching problems on [Stack Overflow <img height="12" style="display: inline-block" src="static/link_to_so.png">](https://stackoverflow.com/) while trying to implement features in the application at hand. Also, it means using command-line tools to automate some part of the work or manage different versions of a program, for example, with **[git](https://git-scm.com/)**. In short, programming involves a lot of "muscle memory," which can only be built and kept up through near-daily usage.
Further, many aspects of software architecture and best practices can only be understood after having implemented some requirements for the very first time. Coding also means "breaking" things to find out what makes them work in the first place.
Therefore, coding is learned best by just doing it for some time on a daily or at least a regular basis and not right before some task is due, just like learning a "real" language.
### The Maker's Schedule
[Y Combinator <img height="12" style="display: inline-block" src="static/link_to_hn.png">](https://www.ycombinator.com/) co-founder [Paul Graham <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Paul_Graham_%28programmer%29) wrote a very popular and often cited [article](http://www.paulgraham.com/makersschedule.html) where he divides every person into belonging to one of two groups:
- **Managers**: People that need to organize things and command others (e.g., a "boss" or manager). Their schedule is usually organized by the hour or even 30-minute intervals.
- **Makers**: People that create things (e.g., programmers, artists, or writers). Such people think in half days or full days.
Have you ever wondered why so many tech people work during nights and sleep at "weird" times? The reason is that many programming-related tasks require a "flow" state in one's mind that is hard to achieve when one can get interrupted, even if it is only for one short question. Graham describes that only knowing that one has an appointment in three hours can cause a programmer to not get into a flow state.
As a result, do not set aside a certain amount of time for learning something but rather plan in an *entire evening* or a *rainy Sunday* where you can work on a problem in an *open end* setting. And do not be surprised anymore to hear "I looked at it over the weekend" from a programmer.
### Phase Iteration
When asked how to learn programming, most programmers give an answer that falls into one of two broader groups.
**1) Toy Problem, Case Study, or Prototype**: Pick some problem, break it down into smaller sub-problems, and solve them with an end in mind.
**2) Books, Video Tutorials, and Courses**: Research the best book, blog, video, or tutorial for something and work it through from start to end.
The truth is that you need to iterate between these two phases.
Building a prototype always reveals issues no book or tutorial can think of before. Data is never as clean as it should be. An algorithm from a textbook must be adapted to a peculiar aspect of a case study. It is essential to learn to "ship a product" because only then will one have looked at all the aspects.
The major downside of this approach is that one likely learns bad "patterns" overfitted to the case at hand, and one does not get the big picture or mental concepts behind a solution. This gap can be filled in by well-written books: For example, check the Python/programming books offered by [Packt](https://www.packtpub.com/packt/offers/free-learning/) or [O’Reilly](https://www.oreilly.com/).
## HackerRank
HackerRank is a wonderful online platform with numerous coding exercises that students can use to practice their coding skills. Software companies also use HackerRank's technical assessment and remote interview solutions for hiring developers. Students will see a coding problem on HackerRank in the form of a problem description, sample input, and expected output.
<img src="static/HackerRankProblem.png" width="60%">
The task is to write code that follows the problem description, takes the sample input, and prints the expected output.
<img src="static/HackerRank_submit.png" width="60%">
Our course target is to complete some (though perhaps not all) [HackerRank Python problems](https://www.hackerrank.com/domains/python?filters%5Bsubdomains%5D%5B%5D=py-introduction). To do that, please register an account on HackerRank via the link below:
[HackerRank SignUp](
https://www.hackerrank.com/auth/signup?h_l=body_middle_left_button&h_r=sign_up)
## Comments
We use the `#` symbol to write comments in plain English right into the code.
As a good practice, comments should *not* describe *what* happens. This should be evident by reading the code. Otherwise, it is most likely badly written code. Rather, comments should describe *why* something happens.
Comments may be added either at the end of a line of code, by convention separated with two spaces, or on a line on their own.
```
distance = 891 # in meters
elapsed_time = 93 # in seconds
# Calculate the speed in km/h.
speed = 3.6 * distance / elapsed_time
```
But let's think wisely if we need to use a comment.
The second cell is a lot more Pythonic.
```
seconds = 365 * 24 * 60 * 60 # = seconds in the year
seconds_per_year = 365 * 24 * 60 * 60
```
## TL;DR
We end each chapter with a summary of the main points (i.e., **TL;DR** = "too long; didn't read").
- program
- **sequence** of **instructions** that specify how to perform a computation (= a "recipe")
- a "black box" that processes **inputs** and transforms them into meaningful **outputs** in a *deterministic* way
- conceptually similar to a mathematical function $f$ that maps some input $x$ to an output $y = f(x)$
- comments
- **prose** supporting a **human's understanding** of the program
- ignored by Python
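The program-as-function analogy in the summary can be made concrete with a tiny deterministic sketch, reusing the numbers from the comments example above (891 meters in 93 seconds):

```python
def speed_kmh(distance_m, elapsed_s):
    # A program as a "black box": the same inputs always produce the same
    # output, just like a mathematical function y = f(x).
    return 3.6 * distance_m / elapsed_s

speed_kmh(891, 93)  # roughly 34.49 km/h
```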
## Further readings [Miller et al., 2013]:
#### Algorithms
Given a problem, a computer scientist’s goal is to develop an algorithm, a step-by-step list of instructions for solving any instance of the problem that might arise. Algorithms are finite processes that if followed will solve the problem. Algorithms are solutions.
We say that a problem is computable if an algorithm exists for solving it. An alternative definition for computer science, then, is to say that computer science is the study of problems that are and that are not computable, the study of the existence and the nonexistence of algorithms.
In any case, you will note that the word “computer” did not come up at all. Solutions are considered independent from the machine.
#### Abstraction
Computer science, as it pertains to the problem-solving process itself, is also the study of abstraction. Abstraction allows us to view the problem and solution in such a way as to separate the so-called logical and physical perspectives.
Consider the automobile that you may have driven to school or work today. As a driver, a user of the car, you have certain interactions that take place in order to utilize the car for its intended purpose. You get in, insert the key, start the car, shift, brake, accelerate, and steer in order to drive. From an abstraction point of view, we can say that you are seeing the logical perspective of the automobile. You are using the functions provided by the car designers for the purpose of transporting you from one location to another. These functions are sometimes also referred to as the interface.
On the other hand, the mechanic who must repair your automobile takes a very different point of view. She not only knows how to drive but must know all of the details necessary to carry out all the functions that we take for granted. She needs to understand how the engine works, how the transmission shifts gears, how temperature is controlled, and so on. This is known as the physical perspective, the details that take place “under the hood.”
The same thing happens when we use computers. Most people use computers to write documents, send and receive email, surf the web, play music, store images, and play games without any knowledge of the details that take place to allow those types of applications to work. They view computers from a logical or user perspective. Computer scientists, programmers, technology support staff, and system administrators take a very different view of the computer. They must know the details of how operating systems work, how network protocols are configured, and how to code various scripts that control function. They must be able to control the low-level details that a user simply assumes.
The common point for both of these examples is that the user of the abstraction, sometimes also called the client, does not need to know the details as long as the user is aware of the way the interface works. This interface is the way we as users communicate with the underlying complexities of the implementation.
<img src="static/abstraction.png" width="60%">
Python code:
```
import math
math.sqrt(16)
```
This is an example of procedural abstraction. We do not necessarily know how the square root is being calculated, but we know what the function is called and how to use it. If we perform the import correctly, we can assume that the function will provide us with the correct results.
We know that someone implemented a solution to the square root problem but we only need to know how to use it. This is sometimes referred to as a “black box” view of a process. We simply describe the interface: the name of the function, what is needed (the parameters), and what will be returned. The details are hidden inside.
#### Why Study Algorithms?
Computer scientists learn by experience. We learn by seeing others solve problems and by solving problems by ourselves. Being exposed to different problem-solving techniques and seeing how different algorithms are designed helps us to take on the next challenging problem that we are given. By considering a number of different algorithms, we can begin to develop pattern recognition so that the next time a similar problem arises, we are better able to solve it.
Algorithms are often quite different from one another. Consider the example of sqrt seen earlier. It is entirely possible that there are many different ways to implement the details to compute the square root function. One algorithm may use many fewer resources than another. One algorithm might take 10 times as long to return the result as the other. We would like to have some way to compare these two solutions. Even though they both work, one is perhaps “better” than the other. We might suggest that one is more efficient or that one simply works faster or uses less memory. As we study algorithms, we can learn analysis techniques that allow us to compare and contrast solutions based solely on their own characteristics, not the characteristics of the program or computer used to implement them.
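The point that two algorithms can solve the same problem with very different resource usage can be sketched for the square-root example. Both functions below are illustrative toy implementations, not how `math.sqrt` actually works:

```python
def sqrt_newton(n, tolerance=1e-10):
    # Newton's method: repeatedly improve a guess; converges in a handful of steps.
    guess = n / 2 if n > 1 else 1.0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2
    return guess

def sqrt_linear_scan(n, step=1e-5):
    # Brute force: walk upward in tiny steps; vastly more iterations for a
    # less precise answer.
    candidate = 0.0
    while candidate * candidate < n:
        candidate += step
    return candidate

print(round(sqrt_newton(16), 6))       # 4.0, after a few iterations
print(round(sqrt_linear_scan(16), 3))  # also about 4.0, after ~400,000 iterations
```

Both are correct, yet one is clearly "better" by the efficiency criteria discussed above.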
In the worst case scenario, we may have a problem that is intractable, meaning that there is no algorithm that can solve the problem in a realistic amount of time. It is important to be able to distinguish between those problems that have solutions, those that do not, and those where solutions exist but require too much time or other resources to work reasonably.
Python is a modern, easy-to-learn, object-oriented programming language. It has a powerful set of built-in data types and easy-to-use control constructs.
### References:
1. Miller, Brad, and David Ranum. "Problem solving with algorithms and data structures." (2013).
```
import os
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
from tensorflow.keras import datasets
from pyvizml import CreateNBAData
import requests
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_boston
from sklearn.datasets import fetch_california_housing
from sklearn.datasets import make_classification
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
cnb = CreateNBAData(2019)
players = cnb.create_players_df()
X = players['heightMeters'].values.reshape(-1, 1)
y = players['weightKilograms'].values
cnb
players
X
y
X.shape
y.shape
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.33, random_state=42)
print(X_train.shape)
print(X_valid.shape)
print(y_train.shape)
print(y_valid.shape)
ss = StandardScaler()
lr = LinearRegression()
type(ss) # transformer
type(lr) # predictor
pipeline = Pipeline([('scaler', ss), ('lr', lr)])
type(pipeline)
pipeline.fit(X_train, y_train)
pipeline.predict(X_valid)
lr.coef_
lr.intercept_
X_valid[0, :]
ss.transform(np.array([2.03]).reshape(-1, 1)) * 8.53366403 + lr.intercept_
X = players[['heightFeet', 'heightInches']].values.astype(float)
X.shape
X
print(X.ndim)
print(X.shape)
y
print(y.ndim)
print(y.shape)
poly = PolynomialFeatures()
type(poly)
X_before_poly = X.copy()
X_before_poly.shape
X_after_poly = poly.fit_transform(X)
X_after_poly.shape
X_before_poly[:10, :]
X_after_poly[:10, :]
poly = PolynomialFeatures(degree=1)
X_after_poly = poly.fit_transform(X)
X_after_poly.shape
X_after_poly
poly = PolynomialFeatures(degree=3)
X_after_poly = poly.fit_transform(X)
X_after_poly.shape
X_after_poly
X_before_scaled = X.copy()
ms = MinMaxScaler()
ss = StandardScaler()
X_before_poly[:10, :]
max_val = X[:, 0].max()  # 7
min_val = X[:, 0].min()  # 5
(6 - 5) / (7 - 5)  # min-max scale the value 6 by hand: (x - min) / (max - min)
X_after_ms = ms.fit_transform(X_before_scaled)
print(X_after_ms[:10])
mean_val = X[:, 0].mean()
std_val = X[:, 0].std()
(6 - mean_val) / std_val  # standardize the value 6 by hand: (x - mean) / std
X_after_ss = ss.fit_transform(X_before_scaled)
print(X_after_ss[:10])
train = pd.read_csv("https://kaggle-getting-started.s3-ap-northeast-1.amazonaws.com/titanic/train.csv")
test = pd.read_csv("https://kaggle-getting-started.s3-ap-northeast-1.amazonaws.com/titanic/test.csv")
print(train.shape)
print(test.shape)
train.columns.difference(test.columns)
players.iloc[:5, :4]
players_train, players_valid = train_test_split(players, test_size=0.3, random_state=42)
print(players_train.shape)
print(players_valid.shape)
153 / (153+357)
players_train.iloc[:5, :4]
players_valid.iloc[:5, :4]
def trainTestSplit(df, test_size, random_state):
df_index = df.index.values.copy()
m = df_index.size
np.random.seed(random_state)
np.random.shuffle(df_index)
test_index = int(np.ceil(m * test_size))
test_indices = df_index[:test_index]
train_indices = df_index[test_index:]
df_valid = df.loc[test_indices, :]
df_train = df.loc[train_indices, :]
return df_train, df_valid
players_train, players_valid = trainTestSplit(players, test_size=0.3, random_state=42)
players_train.iloc[:5, :4]
players_valid.iloc[:5, :4]
```
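The manual check in the cells above (scaling a height, then applying the fitted coefficient and intercept by hand) mirrors exactly what `Pipeline` composes automatically. A self-contained sketch with toy data standing in for the NBA heights and weights (the values below are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy heights (m) and weights (kg) with an exact linear relationship.
X = np.array([1.80, 1.90, 2.00, 2.10]).reshape(-1, 1)
y = np.array([80.0, 90.0, 100.0, 110.0])

pipeline = Pipeline([("scaler", StandardScaler()), ("lr", LinearRegression())])
pipeline.fit(X, y)

# A pipeline prediction is the scaler's transform followed by the regressor:
scaler = pipeline.named_steps["scaler"]
lr = pipeline.named_steps["lr"]
manual = scaler.transform(np.array([[2.03]])) @ lr.coef_ + lr.intercept_
auto = pipeline.predict(np.array([[2.03]]))
assert np.allclose(manual, auto)
```

This is why chaining the fitted `ss` and `lr` by hand reproduces `pipeline.predict` in the cells above.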
# MBZ-XML-TO-EXCEL
First published version May 22, 2019. This is version 0.0004 (revision July 26, 2019)
Licensed under the NCSA Open source license
Copyright (c) 2019 Lawrence Angrave
All rights reserved.
Developed by: Lawrence Angrave
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal with the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other materials provided with the distribution.
Neither the names of Lawrence Angrave, University of Illinois nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE.
# Citations and acknowledgements welcomed!
In a presentation, report or paper please recognise and acknowledge the use of this software.
Please contact angrave@illinois.edu for a Bibliography citation. For presentations, the following is sufficient
MBZ-XML-TO-EXCEL (https://github.com/angrave/Moodle-mbz-to-excel) by Lawrence Angrave.
MBZ-XML-TO-EXCEL is an iLearn project, supported by an Institute of Education Sciences Award R305A180211
If also using Geo-IP data, please cite IP2Location. For example,
"This report uses geo-ip location data from IP2Location.com"
# Known limitations and issues
The assessment sheet (generated from workshop.xml) may generate URLs that are longer than 255 characters, the largest supported by Excel. These very long URLs will be excluded.
No verification of the data has been performed.
It is unknown if the inferred timestamps based on the Unix Epoch timestamp require a timezone adjustment.
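A minimal guard of the kind the URL limitation implies could look like the following sketch; the function name is illustrative and the 255-character cutoff is the Excel limit stated above, not code taken from the tool itself:

```python
EXCEL_MAX_URL_LENGTH = 255  # longest hyperlink Excel accepts, per the note above

def url_or_none(url):
    # Keep the URL only if Excel can store it as a hyperlink; otherwise exclude it.
    return url if len(url) <= EXCEL_MAX_URL_LENGTH else None

short = url_or_none("https://example.com/page")          # kept
long_dropped = url_or_none("https://example.com/" + "a" * 300)  # excluded
```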
# Requirements
This project uses Python 3, Jupyter notebooks, and Pandas.
# Set up
```
#import xml.etree.ElementTree as ET
#lxml supports line numbers
import lxml.etree as ET
from collections import OrderedDict
import pandas as pd
import numpy as np
import re
import os
import urllib
import datetime
import glob
import tarfile
import tempfile
import base64
# geoip support
import bisect
import ipaddress
# timestamp support
from datetime import datetime
# Extract text from html messages
from bs4 import BeautifulSoup
import uuid
import traceback
import xlsxwriter
excelengine = 'xlsxwriter'
# 'xlsxwriter' is currently recommended, though it did not improve the write speed via the generic pandas interface.
# Todo: perhaps using the workbook interface directly will be faster? (https://xlsxwriter.readthedocs.io/)
# Alternatives: 'io.excel.xlsx.writer' (default, allegedly slow),
# 'pyexcelerate' (untested)
```
# Load GeoIP data (optional)
```
def load_geoip_data(geoip_datadir):
global geoip_all_colnames, geoip_geo_columns,geoipv4_df,geoipv4_ipvalues
geoip_all_colnames = ['geoip_ipfrom'
,'geoip_ipto'
,'geoip_country_code'
,'geoip_country_name'
,'geoip_region_name'
,'geoip_city_name'
,'geoip_latitude'
,'geoip_longitude'
,'geoip_zip_code'
,'geoip_time_zone']
geoip_geo_columns = geoip_all_colnames[2:]
#geoip_datadir = 'geoip' #change to your local directory of where the downloaded zip has been unpacked
geoipv4_csv = os.path.join(geoip_datadir,'IP2LOCATION-LITE-DB11.CSV')
if os.path.exists(geoipv4_csv):
print("Reading geoip csv",geoipv4_csv)
geoipv4_df = pd.read_csv(geoipv4_csv, names= geoip_all_colnames)
geoipv4_ipvalues = geoipv4_df['geoip_ipfrom'].values
# bisect searching assumes geoipv4_ipvalues are in increasing order
else:
geoipv4_df = None
geoipv4_ipvalues = None
print("No GeoIP csv data at ",geoipv4_csv)
print("IP addresses will not be converted into geographic locations")
print("Free Geo-IP data can be downloaded from IP2LOCATION.com")
```
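The lookup strategy used later (`decode_geoip`) relies on the `geoip_ipfrom` column being sorted in increasing order; a toy version of the same bisect-based range lookup, with a made-up two-row table mirroring the IP2Location layout:

```python
import bisect
import ipaddress

# Toy (ip_from, ip_to, label) ranges, sorted by ip_from -- illustrative data only.
ranges = [
    (int(ipaddress.IPv4Address("10.0.0.0")),
     int(ipaddress.IPv4Address("10.255.255.255")), "private-10"),
    (int(ipaddress.IPv4Address("192.168.0.0")),
     int(ipaddress.IPv4Address("192.168.255.255")), "private-192"),
]
ip_from_values = [r[0] for r in ranges]

def lookup(ip):
    ipv4 = int(ipaddress.IPv4Address(ip))
    # bisect finds the last range starting at or before this address.
    index = bisect.bisect(ip_from_values, ipv4) - 1
    if index < 0:
        return "unknown"
    ip_from, ip_to, label = ranges[index]
    return label if ip_from <= ipv4 <= ip_to else "unknown"

print(lookup("192.168.1.7"))  # private-192
print(lookup("8.8.8.8"))      # unknown
```

The real `decode_geoip` below does the same bisect over `geoipv4_ipvalues`, then asserts that the found row's from/to range actually contains the address.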
# Phase 1 - Extract XMLs from mbz file and create hundreds of Excel files
```
# Each file can generate a list of tables (dataframes)
# Recursively process each element.
# For each non-leaf element we build an ordered dictionary of key-value pairs and attach this to an array for the particular element name
# <foo id='1' j='a'> becomes data['foo'] = [ {'id':'1', j:'a'} ]
# The exception is for leaf elements (no-child elements) in the form e.g. <blah>123</blah>
# We treat these equivalently to attributes on the surrounding (parent) xml element
# <foo id='1'><blah>123</blah></foo> becomes data['foo'] = [ {'id':'1', 'blah':'123'} ]
# and no data['blah'] is created
AUTOMATIC_IMPLICIT_XML_COLUMNS = 4  # SOURCE_LINE, PARENT_SHEET, PARENT_ROW_INDEX, PARENT_ID
def process_element(data,dest_basedir, tablename_list, context, e):
#deprecated has_no_children = len(e.getchildren()) == 0
has_no_children = len(e) == 0
has_no_attribs = len(e.attrib.keys()) == 0
text = e.text
has_text = text is not None
if has_text:
text = text.strip()
has_text = len(text) > 0
# Is this a leaf element e.g. <blah>123</blah>
# For the datasets we care about, leaves should not be tables; we only want their value
ignore_attribs_on_leaves = True
# This could be refactored to return a dictionary, so multiple attributes can be attached to the parent
if has_no_children and (has_no_attribs or ignore_attribs_on_leaves):
if not has_no_attribs:
print()
print("Warning: Ignoring attributes on leaf element:" + e.tag+ ":"+ str(e.attrib))
print()
return [e.tag,e.text] # Early return, attach the value to the parent (using the tag as the attribute name)
table_name = e.tag
if table_name not in data:
tablename_list.append(table_name)
data[table_name] = []
key_value_pairs = OrderedDict()
key_value_pairs['SOURCE_LINE'] = e.sourceline
key_value_pairs['PARENT_SHEET'] = context[0]
key_value_pairs['PARENT_ROW_INDEX'] = context[1]
key_value_pairs['PARENT_ID'] = context[2]
#print(e.sourceline)
# For correctness child_context needs to be after this line and before recursion
data[table_name].append(key_value_pairs)
myid = ''
if 'id' in e.attrib:
myid = e.attrib['id']
child_context = [table_name, len(data[table_name])-1, myid] # Used above context[0] during recursive call
for key in sorted(e.attrib.keys()):
key_value_pairs[key] = e.attrib[key]
for child in e.iterchildren():
# Could refactor here to use dictionary to enable multiple key-values from a discarded leaf
key,value = process_element(data,dest_basedir, tablename_list, child_context, child)
if value:
if key in key_value_pairs:
key_value_pairs[key] += ',' + str(value)
else:
key_value_pairs[key] = str(value)
if has_text:
key_value_pairs['TEXT'] = e.text # If at least some non-whitespace text, then use original text
return [e.tag,None]
def tablename_to_sheetname(elided_sheetnames, tablename):
sheetname = tablename
# Future: There may be characters that are invalid. If so, remove them here..
#Excel sheetnames are limited to 31 characters.
max_excel_sheetname_length = 31
if len(sheetname) <= max_excel_sheetname_length:
return sheetname
sheetname = sheetname[0:5] + '...' + sheetname[-20:]
elided_sheetnames.append(sheetname)
if elided_sheetnames.count(sheetname)>1:
sheetname += str( elided_sheetnames.count(sheetname) + 1)
return sheetname
def decode_base64_to_latin1(encoded_val):
try:
return str(base64.b64decode(encoded_val) , 'latin-1')
except Exception as e:
traceback.print_exc()
print("Not base64 latin1?", e)
return '??Not-latin1 text'
def decode_geoip(ip):
try:
ip = ip.strip()
if not ip or geoipv4_df is None:
return pd.Series(None, index=geoip_geo_columns)
ipv4 = int(ipaddress.IPv4Address(ip))
index = bisect.bisect(geoipv4_ipvalues, ipv4) - 1
entry = geoipv4_df.iloc[index]
assert entry.geoip_ipfrom <= ipv4 and entry.geoip_ipto >= ipv4
return entry[2:] # [geoip_geo_columns] # Drop ip_from and ip_to
except Exception as e:
traceback.print_exc()
print("Bad ip?",ip, e)
return pd.Series(None, index=geoip_geo_columns)
def decode_unixtimestamp_to_UTC(seconds):
if seconds == '':
return ''
try:
return datetime.utcfromtimestamp(int(seconds)).strftime('%Y-%m-%d %H:%M:%S')
except Exception as e:
traceback.print_exc()
print("Bad unix timestamp?", seconds , e)
return ''
def decode_html_to_text(html):
if html is np.nan:
return ''
try:
soup = BeautifulSoup(html,"lxml")
return soup.get_text()
except Exception as e:
traceback.print_exc()
print('Bad html?',html, e)
return '???'
def validate_anonid_data(anonid_df):
#Expected columns
for c in ['anonid','userid']:
if c not in anonid_df.columns:
raise ValueError('anonid_csv_file \'' + anonid_csv_file + '\' should have a column named ' + c)
# No duplicate userid entries
check_for_duplicates = anonid_df['userid'].duplicated(keep=False)
if check_for_duplicates.any():
print(anonid_df[check_for_duplicates])
raise Exception('See above - fix the duplicates userid entries found in \'' + anonid_csv_file +'\'')
anonid_df['userid'] = anonid_df['userid'].astype(str)
def userid_to_anonid(userid):
global anonid_df, generate_missing_anonid
if userid is np.nan or len(userid) == 0:
return ''
row = anonid_df[ anonid_df['userid'] == userid ]
if len( row ) == 1:
return row['anonid'].values[0]
if generate_missing_anonid:
result = uuid.uuid4().hex
anonid_df = anonid_df.append({ 'userid':userid, 'anonid':result}, ignore_index=True)
else:
result = ''
return result
def to_dataframe(table_name, table_data):
df = pd.DataFrame(table_data)
# Moodle dumps use $@NULL@$ for nulls
df.replace('$@NULL@$','',inplace = True)
# We found two base64 encoded columns in Moodle data-
for col in df.columns & ['other','configdata']:
df[ str(col) + '_base64'] = df[str(col)].map(decode_base64_to_latin1)
for col in df.columns & ['timestart','timefinish','added','backup_date','original_course_startdate','original_course_enddate','timeadded','firstaccess','lastaccess','lastlogin','currentlogin','timecreated','timemodified','created','modified']:
df[ str(col) + '_utc'] = df[str(col)].map(decode_unixtimestamp_to_UTC)
# Extract text from html content
for col in df.columns & ['message', 'description','commenttext','intro','conclusion','summary','feedbacktext','content','feedback','info', 'questiontext' , 'answertext']:
df[ str(col) + '_text'] = df[str(col)].map(decode_html_to_text)
# Moodle data has 'ip' and 'lastip' that are ipv4 dotted
# Currently only ipv4 is implemented. geoipv4_df is None if the csv file was not found
if geoipv4_df is not None:
for col in df.columns & ['ip','lastip']:
df = df.join( df[str(col)].apply(decode_geoip) )
for col in df.columns & ['userid','relateduserid' , 'realuserid']:
col=str(col)
if col == 'userid':
out = 'anonid'
else:
out = col[0:-6] + '_anonid'
df[ out ] = df[col].map(userid_to_anonid)
if delete_userids:
df.drop(columns=[col],inplace=True)
if table_name == 'user':
df['anonid'] = df['id'].map(userid_to_anonid)
# Can add more MOODLE PROCESSING HERE :-)
return df
def to_absolute_file_url(filepath):
return urllib.parse.urljoin( 'file:', urllib.request.pathname2url(os.path.abspath(filepath)))
def write_excel_sheets(source_file, excelwriter, data, tablename_list):
elided_sheetnames = []
table_sheet_mapping = dict()
table_sheet_mapping[''] = '' # Top level parents have empty PARENT_SHEET
for tablename in tablename_list:
sheetname = tablename_to_sheetname(elided_sheetnames, tablename)
table_sheet_mapping[tablename] = sheetname
for tablename in tablename_list:
df = to_dataframe(tablename, data[tablename])
#Convert table (=original xml tag) into real sheet name (not tag name)
if 'PARENT_SHEET' in df.columns:
df['PARENT_SHEET'] = df['PARENT_SHEET'].apply(lambda x: table_sheet_mapping[x])
df.index.rename(tablename, inplace=True)
df.insert(0, 'SOURCE_FILE',source_file ,allow_duplicates=True)
df.insert(1, 'SOURCE_TAG', tablename, allow_duplicates=True)
sheetname = table_sheet_mapping[tablename]
if sheetname != tablename:
print("Writing "+ tablename + " as sheet "+ sheetname)
else:
print("Writing sheet "+ sheetname)
df.to_excel(excelwriter, sheet_name=sheetname, index_label=tablename)
return table_sheet_mapping
def re_adopt_child_table(data, parent_tablename, parent_table, child_tablename):
child_table = data[child_tablename]
for row in child_table:
if 'PARENT_SHEET' not in row.keys():
continue
if row['PARENT_SHEET'] == parent_tablename:
idx = row['PARENT_ROW_INDEX']
# Time to follow the pointer
parent_row = parent_table[idx]
#row['PARENT_TAG'] = parent_row['PARENT_TAG']
row['PARENT_ROW_INDEX'] = parent_row['PARENT_ROW_INDEX']
row['PARENT_ID'] = parent_row['PARENT_ID']
row['PARENT_SHEET'] = parent_row['PARENT_SHEET']
def discard_empty_tables(data,tablename_list):
nonempty_tables = []
for tablename in tablename_list:
table = data[tablename]
# print(tablename, len(table),'rows')
if len(table) == 0:
# print("Skipping empty table",tablename)
continue
include = False
for row in table:
if len(row) > AUTOMATIC_IMPLICIT_XML_COLUMNS: # Found more than just the automatic columns
include = True
break
if include:
# print("Including",tablename)
nonempty_tables.append(tablename)
else:
# print("Skipping unnecessary table",tablename)
# Will need to fixup child items that still think this is their container
# More efficient if we kept a mapping of child tables, rather than iterate over tables
for childname in tablename_list:
re_adopt_child_table(data, tablename, table, childname)
pass
return nonempty_tables
def process_one_file(dest_basedir, relative_sub_dir, xml_filename, dry_run):
print('process_one_file(\''+dest_basedir+'\',\''+relative_sub_dir+'\',\''+xml_filename+'\')')
#print("Reading XML " + xml_filename)
#Original parser
xmlroot = ET.parse(xml_filename).getroot()
# Use lxml
#xmlroot = etree.parse(xml_filename)
#print("Processing...")
data = dict()
tablename_list = []
initial_context = ['','',''] # Todo : Consider missing integer index e.g. ['',None,'']
process_element(data, dest_basedir ,tablename_list, initial_context, xmlroot)
nonempty_tables = discard_empty_tables(data,tablename_list)
if len(nonempty_tables) == 0:
#print("no tables left to write")
return
# We use underscore to collate source subdirectories
basename = os.path.basename(xml_filename).replace('.xml','').replace('_','')
use_sub_dirs = False
if use_sub_dirs:
output_dir = os.path.join(dest_basedir, relative_sub_dir)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
output_filename = os.path.join(output_dir, basename + '.xlsx')
else:
sub = relative_sub_dir.replace(os.sep,'_').replace('.','')
if (len(sub) > 0) and sub[-1] != '_':
sub = sub + '_'
output_filename = os.path.join(dest_basedir, sub + basename + '.xlsx')
if dry_run: # For debugging
return
print("** Writing ", output_filename)
if os.path.exists(output_filename):
os.remove(output_filename)
excelwriter = pd.ExcelWriter(output_filename, engine= excelengine)
# absolute path is useful to open original files on local machine
if(False):
source_file = to_absolute_file_url(xml_filename)
else:
source_file = os.path.normpath(xml_filename)
try:
write_excel_sheets(source_file, excelwriter, data,nonempty_tables)
excelwriter.close()
except Exception as ex:
traceback.print_exc()
print(type(ex))
print(ex)
pass
finally:
excelwriter = None
print()
def process_directory(xml_basedir, out_basedir, relative_sub_dir,toplevel_xml_only, dry_run):
xml_dir = os.path.join(xml_basedir, relative_sub_dir)
file_list = sorted(os.listdir(xml_dir))
for filename in file_list:
if filename.endswith('.xml'):
print("Processing", filename)
process_one_file(out_basedir, relative_sub_dir, os.path.join(xml_dir,filename), dry_run)
if toplevel_xml_only:
return # No recursion into subdirs(e.g. for testing)
# Recurse
for filename in file_list:
candidate_sub_dir = os.path.join(relative_sub_dir, filename)
if os.path.isdir( os.path.join(xml_basedir, candidate_sub_dir)) :
process_directory(xml_basedir, out_basedir, candidate_sub_dir,toplevel_xml_only, dry_run)
def extract_xml_files_in_tar(tar_file, extract_dir):
os.makedirs(extract_dir)
extract_count = 0
for tarinfo in tar_file:
if os.path.splitext(tarinfo.name)[1] == ".xml":
#print(extract_dir, tarinfo.name)
tar_file.extract( tarinfo, path = extract_dir)
extract_count = extract_count + 1
return extract_count
def archive_file_to_output_dir(archive_file):
return os.path.splitext(archive_file)[0] + '-out'
def archive_file_to_xml_dir(archive_file):
return os.path.splitext(archive_file)[0] + '-xml'
def lazy_extract_mbz(archive_source_file,expanded_archive_directory,skip_expanding_if_xml_files_found):
has_xml_files = len( glob.glob( os.path.join(expanded_archive_directory,'*.xml') ) ) > 0
if has_xml_files and skip_expanding_if_xml_files_found:
print("*** Reusing existing xml files in", expanded_archive_directory)
return
if os.path.isdir(expanded_archive_directory):
print("*** Deleting existing files in", expanded_archive_directory)
raise "Comment out this line if it is going to delete the correct directory"
shutil.rmtree(expanded_archive_directory)
with tarfile.open(archive_source_file, mode='r|*') as tf:
print("*** Expanding",archive_source_file, "to", expanded_archive_directory)
extract_count = extract_xml_files_in_tar(tf, expanded_archive_directory)
print('***',extract_count,' xml files extracted')
def process_xml_files(expanded_archive_directory,out_basedir,toplevel_xml_only,dry_run, anonid_output_csv):
global anonid_df
print("*** Source xml directory :", expanded_archive_directory)
print("*** Output directory:", out_basedir)
if not os.path.isdir(out_basedir):
os.makedirs(out_basedir)
process_directory(expanded_archive_directory, out_basedir,'.',toplevel_xml_only,dry_run)
if anonid_output_csv:
filepath = os.path.join(out_basedir,anonid_output_csv)
print("Writing ",filepath,len(anonid_df.index),'rows')
anonid_df.to_csv( filepath, index = None, header=True)
print("*** Finished processing XML")
```
# Phase 2 - Aggregate Excel documents
```
def list_xlsx_files_in_dir(xlsx_dir):
xlsx_files = sorted(glob.glob(os.path.join(xlsx_dir,'*.xlsx')))
xlsx_files = [file for file in xlsx_files if os.path.basename(file)[0] != '~' ]
return xlsx_files
# Phase 2 - Aggregate multiple xlsx that are split across multiple course sections into a single Excel file
def create_aggregate_sections_map(xlsx_dir):
xlsx_files = list_xlsx_files_in_dir(xlsx_dir)
sections_map = dict()
for source_file in xlsx_files:
path = source_file.split(os.path.sep) # TODO os.path.sep
nameparts = path[-1].split('_')
target = nameparts[:]
subnumber = None
if len(nameparts)>3 and nameparts[-3].isdigit(): subnumber = -3 # probably unnecessary as _ are removed from basename
if len(nameparts)>2 and nameparts[-2].isdigit(): subnumber = -2
if not subnumber: continue
target[subnumber] = 'ALLSECTIONS'
key = (os.path.sep.join(path[:-1])) + os.path.sep+ ( '_'.join(target))
if key not in sections_map.keys():
sections_map[key] = []
sections_map[key].append(source_file)
return sections_map
# Phase 3 - Aggregate over common objects
def create_aggregate_common_objects_map(xlsx_dir):
xlsx_files = list_xlsx_files_in_dir(xlsx_dir)
combined_map = dict()
# path/_activities_workshop_ALLSECTIONS_logstores.xlsx will map to key=logstores.xlsx
for source_file in xlsx_files:
path = source_file.split(os.path.sep) # TODO os.path.sep
nameparts = path[-1].split('_')
target = nameparts[-1]
if 'ALL_' == path[-1][:4]:
continue # Guard against restarts
key = (os.path.sep.join(path[:-1])) + os.path.sep+ ('ALL_' + target)
if key not in combined_map.keys():
combined_map[key] = []
combined_map[key].append(source_file)
return combined_map
def rebase_row(row,rebase_map):
if isinstance(row['PARENT_SHEET'] , str):
return str(int(row['PARENT_ROW_INDEX']) + int(rebase_map[ row['XLSX_SOURCEFILE'] + '#' + row['PARENT_SHEET'] ]))
else:
return ''
def check_no_open_Excel_documents_in_Excel(dir):
# Excel creates temporary backup files that start with tilde when an Excel file is open in Excel
if not os.path.isdir(dir):
return
open_files = glob.glob(os.path.join(dir,'~*.xlsx'))
if len(open_files):
print( 'Please close ' + '\n'.join(open_files) + '\nin directory\n'+dir)
raise IOError('Excel files '+('\n'.join(open_files))+' are currently open in Excel')
def aggregate_multiple_excel_files(source_filenames):
allsheets = OrderedDict()
rebase_map = {}
# !! Poor sort - it assumes the integers are the same char length. Todo improve so that filename_5_ < filename_10_
for filename in sorted(source_filenames):
print('Reading and aggregating sheets in' , filename)
xl = pd.ExcelFile(filename)
for sheet in xl.sheet_names:
df = xl.parse(sheet)
df['XLSX_SOURCEFILE'] = filename
if sheet not in allsheets.keys():
allsheets[sheet] = df
rebase_map[filename+'#'+sheet] = 0
else:
row_offset = len(allsheets[sheet])
rebase_map[filename+'#'+sheet] = row_offset # We will need this to rebase parent values
df[ df.columns[0] ] += row_offset
allsheets[sheet] = allsheets[sheet].append(df, ignore_index =True, sort = False)
xl.close()
# print('rebase_map',rebase_map)
# The row index of the parent no longer starts at zero
print('Rebasing parent index entries in all sheets')
for sheet in allsheets.keys():
df = allsheets[sheet]
df['PARENT_ROW_INDEX'] = df.apply( lambda row: rebase_row( row,rebase_map), axis = 1)
df.drop('XLSX_SOURCEFILE', axis = 1, inplace = True)
return allsheets
def write_aggregated_model(output_filename, allsheets, dry_run):
print("Writing",output_filename)
if dry_run:
print("Dry run. Skipping ", allsheets.keys())
return
excelwriter = pd.ExcelWriter(output_filename, engine = excelengine)
try:
print("Writing Sheets ", allsheets.keys())
for sheetname,df in allsheets.items():
df.to_excel(excelwriter, sheet_name = sheetname, index = 'INDEX')
except Exception as ex:
print(type(ex))
print(ex)
finally:
# Close exactly once, whether or not writing succeeded
excelwriter.close()
print('Writing finished\n')
def move_old_files(xlsx_dir, filemap, subdirname,dry_run):
xlsxpartsdir = os.path.join(xlsx_dir,subdirname)
if dry_run:
print('Dry run. Skipping move_old_files', filemap.items(),' to ', subdirname)
return
if not os.path.isdir(xlsxpartsdir):
os.mkdir(xlsxpartsdir)
for targetfile,sources in filemap.items():
for file in sources:
dest=os.path.join(xlsxpartsdir, os.path.basename(file))
print(dest)
os.rename(file, dest)
def aggreate_over_sections(xlsx_dir,dry_run):
sections_map= create_aggregate_sections_map(xlsx_dir)
for targetfile,sources in sections_map.items():
allsheets = aggregate_multiple_excel_files(sources)
write_aggregated_model(targetfile, allsheets, dry_run)
move_old_files(xlsx_dir, sections_map,'_EACH_SECTION_', dry_run)
def aggreate_over_common_objects(xlsx_dir,dry_run):
combined_map = create_aggregate_common_objects_map(xlsx_dir)
for targetfile,sources in combined_map.items():
allsheets = aggregate_multiple_excel_files(sources)
write_aggregated_model(targetfile, allsheets, dry_run)
move_old_files(xlsx_dir, combined_map, '_ALL_SECTIONS_', dry_run)
def create_column_metalist(xlsx_dir,dry_run):
xlsx_files = list_xlsx_files_in_dir(xlsx_dir)
metalist = []
for filename in xlsx_files:
print(filename)
xl = pd.ExcelFile(filename)
filename_local = os.path.basename(filename)
for sheet in xl.sheet_names:
df = xl.parse(sheet,nrows=1)
for column_name in df.columns:
metalist.append([filename_local,sheet,column_name])
xl.close()
meta_df = pd.DataFrame(metalist, columns=['file','sheet','column'])
meta_filename = os.path.join(xlsx_dir,'__All_COLUMNS.csv')
if dry_run:
print('Dry run. Skipping',meta_filename)
else:
meta_df.to_csv(meta_filename,sep='\t',index=False)
```
# Run
```
# Configuration / settings here
archive_source_file = None
expanded_archive_directory = None
skip_expanding_if_xml_files_found = True
output_directory = None
generate_missing_anonid = True
geoip_datadir = None
anonid_csv_file = None
# A simple csv file with header 'userid','anonid'
anonid_output_filename='userids_anonids.csv' # None if mapping should not be written
delete_userids = False # User table will still have an 'id' column
# relateduserid, realuserid and userid columns in other tables are dropped
# Internal testing options
toplevel_xml_only = False # Don't process subdirectories. Occasionally useful for internal testing
dry_run = False # Don't write Excel files. Occasionally useful for internal testing
# Override the above here with the path to your mbz file (or expanded contents)
archive_source_file = os.path.join('..','example.mbz')
# ... or use expanded_archive_directory to point to an mbz file that has already been expanded into XML files
anonid_csv_file = None # os.path.join('..', 'example-userid-to-anonid.csv')
generate_missing_anonid = True
delete_userids = True
geoip_datadir= './geoip'
# Some typical numbers:
# A 400 student 15 week course with 16 sections
# Created a 4GB mbz which expanded to 367 MB of xml. (the non-xml files were not extracted)
# 30 total minutes processing time: 15 minutes to process xml,
# 6 minutes for each aggregation step, 2 minutes for the column summary
# Final output: 60MB across 29 'ALL_' Excel files (largest: ALL_quiz.xlsx 35MB, ALL_logstores 10MB, ALL_forum 5MB)
# The initial per-section output (moved to _EACH_SECTION_/) has 334 xlsx files,
# which is further reduced (see _ALL_SECTIONS_ ) to 67 files.
if not archive_source_file and not expanded_archive_directory:
raise ValueError('Nothing to do: No mbz archive file or archive directory (with .xml files) specified')
if archive_source_file and not os.path.isfile(archive_source_file) :
raise ValueError('archive_source_file (' + os.path.abspath(archive_source_file) + ") does not refer to an existing archive")
if not expanded_archive_directory:
expanded_archive_directory = archive_file_to_xml_dir(archive_source_file)
if not output_directory:
if archive_source_file:
output_directory = archive_file_to_output_dir(archive_source_file)
else:
raise ValueError('Please specify output_directory')
if anonid_csv_file:
print ('Using ' + anonid_csv_file + ' mapping')
anonid_df = pd.read_csv(anonid_csv_file)
validate_anonid_data(anonid_df)
else:
anonid_df = pd.DataFrame([{'userid':'-1','anonid':'example1234'}])
start_time = datetime.now()
print(start_time)
if(geoip_datadir and 'geoipv4_df' not in globals()):
load_geoip_data(geoip_datadir)
if archive_source_file:
lazy_extract_mbz(archive_source_file,expanded_archive_directory,skip_expanding_if_xml_files_found)
check_no_open_Excel_documents_in_Excel(output_directory)
# Now the actual processing can begin
process_xml_files(expanded_archive_directory,output_directory, toplevel_xml_only, dry_run, anonid_output_filename)
# At this point we have 100s of Excel documents (one per xml file), each with several sheets (~ one per xml tag)!
# We can aggregate over all of the course sections
aggreate_over_sections(output_directory, dry_run)
# Workshops, assignments etc have a similar structure, so we also aggregate over similar top-level objects
aggreate_over_common_objects(output_directory, dry_run)
create_column_metalist(output_directory, dry_run)
end_time = datetime.now()
print(end_time)
print(end_time-start_time)
```
# **DBSCAN**
## **Implementation**
```
import numpy as np
import matplotlib.pyplot as plt
from math import e, inf
from random import randint, uniform
from sklearn.datasets import make_circles
```
### KNN
```
class Node:
def __init__(self, parent, x, area):
self.parent = parent
self.x = x
self.childs = [None, None] # [left_child, right_child]
# The area is a 2*len(x)-dimensional vector representing a hypercube, where each
# pair of elements holds the minimum and maximum values of a given coordinate.
# For example, if len(x) == 2, then area = [a, b, c, d] represents the square:
# a <= x[0] <= b; c <= x[1] <= d
self.area = area
class KNN:
def __init__(self, X):
self.X = X
def d(self, x, y):
""" Distancia euclidiana entre dos vectores. """
return np.linalg.norm(x-y)
def build_kd_tree(self, X=None, parent=None, right=True, d=0, root=True, area=None):
""" Construimos un KD-Tree.
INPUT:
X: Conjunto de datos del nodo actual.
parent: Nodo padre del nodo actual.
right: Indica si el nodo actual es el hijo derecho.
d: Atributo que se usara para realizar la division binaria de los datos.
root: Indica si el nodo actual es la raiz de todo el arbol.
area: Area que representa el nodo actual.
"""
# If the node is the root, take all the data; the area is the whole space.
if root:
X = self.X
area = [-inf,inf]*len(X[0])
# If there are no elements, no node is created
if len(X) == 0: return
# If there is a single element, create a node holding it.
elif len(X) == 1:
node = Node(parent, X[0], area)
# Make sure the node is not the root, which would mean there is only one datum.
if not root: parent.childs[int(right)] = node
# If there is more than one datum.
else:
# Sort the elements by the d-th attribute.
X_c = X.copy()
X_c.sort(key = lambda x: x[d])
# Take the median.
m = int(len(X_c)/2)
x_m = X_c[m]
# Create a new node to store the median.
node = Node(parent, x_m, area)
if not root: parent.childs[int(right)] = node
else: self.kd_tree = node
# Recurse into the left and right children.
# Right
X_r = X_c[m+1:].copy()
area_r = area.copy()
area_r[2*d] = x_m[d]
# Left
X_l = X_c[:m].copy()
area_l = area.copy()
area_l[2*d+1] = x_m[d]
# Recursive calls
self.build_kd_tree(X_l, node, False, (d+1)%len(x_m), False, area_l)
self.build_kd_tree(X_r, node, True, (d+1)%len(x_m), False, area_r)
def radius_neighbors(self, x, r):
# Neighbors are collected here
self.neighbors = []
self.r_neighbors(x, self.kd_tree, 0, r)
neighbors = self.neighbors
# Make sure to clear this attribute afterwards.
self.neighbors = None
return neighbors
def r_neighbors(self, x, node, d, r):
# Check whether the point lies outside the hypercube defined by the current node.
if not all(node.area[2*i] <= x[i] <= node.area[2*i+1] for i in range(len(x))):
# For each dimension, check whether the point lies within the corresponding
# sides of the hypercube
p = []
for i in range(len(x)):
# If it does not, store the boundary coordinate of the side the point
# falls outside of.
if node.area[2*i] > x[i]: p.append(node.area[2*i])
elif x[i] > node.area[2*i+1]: p.append(node.area[2*i+1])
else: p.append(x[i])
# Compute the distance between the point and its clamped projection onto the
# hypercube. If it exceeds the radius, this branch needs no further checking.
dist = self.d(np.array(p), x)
if dist > r: return
# Compute the distance between the point and the current node; check whether
# it is below the radius.
dist = self.d(x, node.x)
if dist < r: self.neighbors.append(node.x)
# First visit the subtree whose region satisfies the condition for the point,
# hoping that the second child can then be discarded cheaply.
# If neither holds, traverse the left child first (if not null) and then the right.
if x[d] <= node.area[2*d+1] and node.childs[0] != None:
self.r_neighbors(x, node.childs[0], (d+1)%len(x), r)
if node.childs[1] != None:
self.r_neighbors(x, node.childs[1], (d+1)%len(x), r)
elif x[d] >= node.area[2*d] and node.childs[1] != None:
self.r_neighbors(x, node.childs[1], (d+1)%len(x), r)
if node.childs[0] != None:
self.r_neighbors(x, node.childs[0], (d+1)%len(x), r)
elif node.childs[0] != None:
self.r_neighbors(x, node.childs[0], (d+1)%len(x), r)
if node.childs[1] != None:
self.r_neighbors(x, node.childs[1], (d+1)%len(x), r)
elif node.childs[1] != None:
self.r_neighbors(x, node.childs[1], (d+1)%len(x), r)
```
### **DBScan**
```
class DBS:
def __init__(self, X):
self.X = X
# Use the KNN structure to fetch neighbors
self.knn = KNN(X)
self.knn.build_kd_tree()
def d(self, x, y):
""" Distancia Euclidiana. """
return np.linalg.norm(x-y)
def query(self, x, dist):
""" Calculamos los vecinos de un elemento dada una distancia minima. """
return self.knn.radius_neighbors(x, dist)
def clustering(self, dist, min_x):
""" Agrupamos los datos usando el metodo de DBScan. """
# Contador de clusters.
C = -1
# Diccionario label[x] -> C tal que x in C
labels = {tuple(x) : None for x in self.X}
for x in self.X:
# If the element is already labeled, move on to the next one.
if labels[tuple(x)] != None: continue
neighbors = self.query(x, dist)
# If the element does not have enough neighbors, it is an outlier.
if len(neighbors) < min_x:
labels[tuple(x)] = -1
continue
# Start a new cluster and label the element with it.
C += 1
labels[tuple(x)] = C
# Remove the element from its own neighbors and create the seed set.
for i in range(len(neighbors)):
if np.equal(neighbors[i], x).all():
neighbors.pop(i)
break
seed_set = neighbors.copy()
for s in seed_set:
# If the element was considered an outlier, it is now labeled with the
# current cluster.
if labels[tuple(s)] == -1: labels[tuple(s)] = C
# If it already has a label, move on to the next element.
if labels[tuple(s)] != None: continue
# Label the element with the current cluster.
labels[tuple(s)] = C
# Compute the element's neighbors.
neighbors = self.query(s, dist)
# If the element has enough neighbors.
if len(neighbors) >= min_x:
# Merge the "neighbors" and "seed_set" sets
for n in neighbors:
if not any(np.equal(n, ss).all() for ss in seed_set): seed_set.append(n)
return labels
```
## **Data Loading**
```
nb_samples = 150
X0 = np.expand_dims(np.linspace(-2 * np.pi, 2 * np.pi, nb_samples), axis=1)
Y0 = -5 - np.cos(2.0 * X0) + np.random.uniform(0.0, 2.0, size=(nb_samples, 1))
X1 = np.expand_dims(np.linspace(-2 * np.pi, 2 * np.pi, nb_samples), axis=1)
Y1 = 3.5 - np.cos(2.0 * X1) + np.random.uniform(0.0, 2.0, size=(nb_samples, 1))
data_0 = np.concatenate([X0, Y0], axis=1)
data_1 = np.concatenate([X1, Y1], axis=1)
data = np.concatenate([data_0, data_1], axis=0)
data = [d for d in data]
for c in make_circles(30)[0]: data.append(c)
plt.plot([d[0] for d in data], [d[1] for d in data], 'o')
plt.show()
```
## **Results**
```
dbs = DBS(data)
labels = dbs.clustering(1.5, 5)
clusters = [[] for _ in range(max(labels.values())+2)]
for x in labels:
clusters[labels[tuple(x)]].append(x)
for c in clusters:
plt.plot([x[0] for x in c], [x[1] for x in c], 'o')
plt.show()
```
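As a sanity check on our implementation, scikit-learn (already used above for `make_circles`) ships a reference `DBSCAN` with the same (eps, min_samples) interface; a small sketch on two invented blobs plus one far-away outlier:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
# Two well-separated blobs plus one far-away outlier (toy data, not the notebook's)
blob_a = rng.normal(loc=0.0, scale=0.2, size=(20, 2))
blob_b = rng.normal(loc=5.0, scale=0.2, size=(20, 2))
outlier = np.array([[50.0, 50.0]])
X = np.vstack([blob_a, blob_b, outlier])

labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)
# Noise points get label -1, exactly as in our clustering() above
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters, int((labels == -1).sum()))  # 2 1
```

Comparing our labels dictionary against `fit_predict` on the same data (up to cluster renumbering) is a quick way to validate the KD-tree neighbor queries.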
# HOMEWORK 2 - ADM
```
import pandas as pd
import matplotlib.pyplot as plt
import methods
import datetime
```
## READ THE DATA
```
df_names = ["./datasets/2019-Nov.csv", "./datasets/2019-Oct.csv"]
```
## UNDERSTAND THE DATA
The data that we handle for this homework come from an online store. We are going to analyze two months: October and November. For each month we have different features that we have described below:
```
methods.describe_df(df_names)
```
We notice that there are some null values, but only inside two specific columns: *category_code* and *brand*. Since, for each question, we decide which columns to read in order to answer it, we handle null values within a specific question only when they are actually relevant. (When we do so, it is specified in the *methods.py* file.)
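As a quick illustration (with a toy frame standing in for the real csv, which is too heavy to reload here), confirming that nulls are confined to those two columns looks like this:

```python
import pandas as pd

# Toy events frame standing in for the store data; nulls appear only in
# category_code and brand, mirroring the real datasets
events = pd.DataFrame({
    "event_type": ["view", "cart", "purchase"],
    "category_code": ["electronics.smartphone", None, "appliances.kitchen"],
    "brand": [None, "acme", "acme"],
    "product_id": [1, 2, 3],
})
null_counts = events.isnull().sum()
cols_with_nulls = sorted(null_counts[null_counts > 0].index)
print(cols_with_nulls)  # ['brand', 'category_code']
```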
# QUESTION 1
## Which is the rate of complete funnels?
First of all we read the DataFrame, importing only the columns that we need. In this particular case, for subquestions 1.1, 1.2 and 1.3, we just need *user_session*, *event_type* and *product_id*
```
df1 = methods.loadAllDatasets(df_names, ['user_session','event_type','product_id'])
```
For subquestions 1.4 and 1.5 we need *user_session*, *event_type*, *product_id* and *event_time*. We also need to parse this last column so that it is recognized as a date rather than a string
```
df2 = methods.loadAllDatasetsWithParser(df_names)
```
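The real loader lives in *methods.py*; a minimal sketch of the date-parsing idea (column names from the dataset, sample rows invented) is:

```python
import io
import pandas as pd

# Hypothetical two-row sample; the key point is parse_dates, which turns
# event_time into a datetime instead of a plain string
sample = io.StringIO(
    "event_time,event_type,product_id,user_session\n"
    "2019-10-01 00:00:00,view,1001,s1\n"
    "2019-10-01 00:05:00,cart,1001,s1\n"
)
df = pd.read_csv(sample, parse_dates=["event_time"])
# Datetime arithmetic now works directly
gap = (df["event_time"].iloc[1] - df["event_time"].iloc[0]).total_seconds()
print(gap)  # 300.0
```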
**RQ1.1** What’s the operation users repeat most on average within a session? Produce a plot that shows the average number of times users perform each operation (view, remove from cart, etc.)
```
methods.avg_operations_performed(df1)
```
As we can see in the graph above, on average users view products a lot, but only on few occasions do they put them in the cart. Also, once an item is inside a cart it is not guaranteed to be purchased; in fact, that operation has a lower average.
The event type *remove from cart* has an average of 0 because it never occurred in the months of October and November.
<p> </p>
**RQ1.2** How many times, on average, a user views a product before adding it to the cart?
```
avg = methods.avg_views_before_cart(df1)
print(f"In average, a user views a product {avg} times before adding it to the cart.")
```
<p> </p>
**RQ1.3** What’s the probability that products added once to the cart are effectively bought?
```
avg = methods.avg_purchase_after_cart(df1)
print(f"The probability that products added once to the cart are effectively bought is: {avg}")
```
<p> </p>
**RQ1.4** What’s the average time an item stays in the cart before being removed?
```
methods.avg_time_cart_before_removal(df2)
```
**!!** This function cannot be run when taking only October and November into account because, in these months, the event remove_from_cart never occurs, as we can see below:
```
df2.event_type.unique()
```
If we try to run it, we get an error because we would be performing operations (in particular divisions) on an empty series.
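A defensive pattern (illustrative only, not the code in *methods.py*) is to check for the event before aggregating, so the function degrades gracefully instead of dividing by an empty series:

```python
import pandas as pd

# Events observed in October/November: remove_from_cart never appears
events = pd.DataFrame({"event_type": ["view", "cart", "purchase"]})

removals = events[events["event_type"] == "remove_from_cart"]
if removals.empty:
    avg_cart_time = None  # nothing to average over; avoid an empty-series division
else:
    avg_cart_time = 0.0  # placeholder for the real duration computation
print(avg_cart_time)  # None
```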
<p> </p>
**RQ1.5** How much time passes on average between the first view time and a purchase/addition to cart?
There may be different interpretations of this question: we decided to calculate the time that passes between the first view and the add-to-cart / purchase action for the same product id **inside** the same user_session
First we want to know the average time between the events *view* and *cart* for the same product:
```
avg = methods.avg_time_between_view_and_cart(df2)
print(f"The average time that passes between the first time that an item is viewed and the moment in which that item is added to the cart is approximately: {round(avg,2)} mins")
```
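Under this interpretation, the computation behind `avg_time_between_view_and_cart` might look like the following sketch (a tiny invented session; the real method is in *methods.py*):

```python
import pandas as pd

# One invented session where product 7 is viewed twice, then added to the cart
df = pd.DataFrame({
    "user_session": ["s1", "s1", "s1"],
    "product_id": [7, 7, 7],
    "event_type": ["view", "view", "cart"],
    "event_time": pd.to_datetime(
        ["2019-10-01 10:00", "2019-10-01 10:02", "2019-10-01 10:06"]
    ),
})
# First view and first cart event per (session, product)
views = (df[df.event_type == "view"]
         .groupby(["user_session", "product_id"])["event_time"].min())
carts = (df[df.event_type == "cart"]
         .groupby(["user_session", "product_id"])["event_time"].min())
deltas = (carts - views).dropna()
avg_minutes = deltas.dt.total_seconds().mean() / 60
print(round(avg_minutes, 2))  # 6.0
```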
<p> </p>
Then we want to know the average time between the events *view* and *purchase* for the same product:
```
avg = methods.avg_time_between_view_and_purchase(df2)
print(f"The average time that passes between the first time that an item is viewed and the moment in which that item is purchased is approximately: {round(avg,2)} mins")
```
<p> </p>
So, to answer the main question: we noticed that users view a product an average of 1.874 times before adding it to the cart. Once an item is added to the cart, it is not guaranteed to be purchased; this happens only with probability 0.38. The rest of the time, the item simply stays in the cart, where it may expire (be removed automatically) or be removed manually by the user.
In conclusion, the rate of complete funnels, given by completed operations (from view to purchase) over the total number of operations, is low.
# QUESTION 2
**RQ2.1** What are the categories of the most trending products overall? For each month visualize this information through a plot showing the number of sold products per category.
Before executing our functions we import our datasets and select only the useful columns: ***'category_code', 'event_type', 'product_id'***.
We do this because the csv files are heavy to load and use, and because we will reuse the same dataset with the <ins>same columns</ins> for all the subquestions.
```
#import november dataset
ndt_selection = methods.loadOneDataset(df_names[0], ['category_code', 'event_type', 'product_id'])
```
We also import the October dataset.
```
#import october dataset
odt_selection = methods.loadOneDataset(df_names[1], ['category_code', 'event_type', 'product_id'])
```
<p> </p>
To obtain our answer, we restrict the dataset to rows whose "event_type" equals "purchase".
# ***NOVEMBER***
```
#plot the categories of the most trending products overall
ndt_select = methods.restrict_dt(ndt_selection, "purchase", 10)
```
We can now show the plot for this question.
```
methods.plot_n_categories(ndt_select)
```
We see that the category **electronics** is the most sold.
# ***OCTOBER***
```
#plot the categories of the most trending products overall
odt_select = methods.restrict_dt(odt_selection, "purchase", 10)
```
Show the plot that we want for this question.
```
methods.plot_n_categories(odt_select)
```
We see that the category **electronics** is the most sold.
<p> </p>
**RQ2.2** Plot the most visited subcategories.
In the same way, we display the subcategories and do the same thing that we saw in the last question.
# ***NOVEMBER***
```
#plot the most visited 30 subcategories
sub_ndf = methods.dt_subcategories(ndt_selection)
```
Show the plot that we want for this question.
```
methods.plot_n_subcategories(sub_ndf, 30)
```
In this plot we see that **smartphone** is the most visited subcategory in November.
<p> </p>
# ***OCTOBER***
```
#plot the most visited 30 subcategories
sub_odf = methods.dt_subcategories(odt_selection)
```
Show the plot that we want for this question.
```
methods.plot_n_subcategories(sub_odf, 30)
```
In this plot we see that **smartphone** is the most visited subcategory in October.
<p> </p>
**RQ2.3** What are the 10 most sold products per category?
# ***NOVEMBER***
For each category we want to print the 10 most sold products, but in order to have a good visualization, we decided to print only the results for the first five categories.
```
#the 10 most sold products per category
methods.most_sold_products_per_category(sub_ndf, 10)
```
# ***OCTOBER***
For each category we want to print the 10 most sold products, but in order to have a good visualization, we decided to print only the results for the first five categories, as we did for November.
```
#the 10 most sold products per category
methods.most_sold_products_per_category(sub_odf, 10)
```
# QUESTION 3
## For each category, what’s the brand whose prices are higher on average?
To answer these questions we only need the columns 'category_code', 'brand', 'price'. For this reason we decided to read only these 3 when loading the dataframes.
```
df3 = methods.loadAllDatasets(df_names, ['category_code','price','brand'])
```
<p> </p>
**RQ3.1** Write a function that asks the user a category in input and returns a plot indicating the average price of the products sold by the brand.
First things first: we apply a function that randomly chooses one category_code from all the available ones.
```
category = methods.choose_category(df3)
category
```
Now that we have our category, we call a function that shows the average price of the products sold by each brand inside it
```
methods.avg_price(category, df3)
```
So, for example, if we select the category code **apparel.shirt**, there are several brands that offer it. Among them, the one that, on average, offers products at a higher price is **weekend**.
<p> </p>
**RQ3.2** Find, for each category, the brand with the highest average price. Return all the results in ascending order by price.
```
methods.highest_avg_price(df3)
```
The Data Frame in output has 3 different columns:
- **0**: is representing the **category_code**
- **1**: is representing the **brand** with the highest average price in that specific category code
- **2**: is representing the **price** associated with that specific brand
So, for example, inside the category *accessories.umbrellas*, we can see that the highest average price is offered by *hoco* ($25.71), and this category is the fifth cheapest of all.
The results are also sorted so that the prices are in order from the smallest to the largest value.
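One way such a table could be produced with pandas (a sketch under assumed column names, not the actual `methods.highest_avg_price` implementation):

```python
import pandas as pd

# toy data standing in for the real dataset
df = pd.DataFrame({
    'category_code': ['a', 'a', 'b', 'b'],
    'brand':         ['x', 'y', 'x', 'z'],
    'price':         [10.0, 30.0, 5.0, 8.0],
})

# mean price per (category, brand), then keep the priciest brand of each category
avg = df.groupby(['category_code', 'brand'], as_index=False)['price'].mean()
top = avg.loc[avg.groupby('category_code')['price'].idxmax()]
top = top.sort_values('price')  # ascending by price, as the question asks
```

`idxmax` picks, within each category, the row holding the highest average price, and the final sort gives the requested ascending order.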
<p> </p>
# QUESTION 4
***How much does each brand earn per month? Write a function that given the name of a brand in input returns, for each month, its profit.***
Before starting with the homework request, we pick a brand name from user input to show its profit for each month.
```
#which brand do you want to search.. randomize the choice!
brand_to_search = input("Choose a brand to see its profit: ")
```
Before executing our functions we import our datasets and we select only the useful columns: ***'event_type', 'brand', 'price'***.
We do this because the csv files are heavy to load and use, and because we will reuse the same dataset with the <ins>same columns</ins> for all the subquestions.
```
#import november dataset
ndt_selection = methods.loadOneDataset(df_names[0], ['event_type', 'brand', 'price'])
```
We also import the October dataset.
```
#import october dataset
odt_selection = methods.loadOneDataset(df_names[1], ['event_type', 'brand', 'price'])
```
To obtain our answer, we restrict the dataset to rows whose "event_type" equals "purchase" and whose brand matches brand_to_search.
# ***NOVEMBER***
```
#how much does each brand earn per month?
new_ndt_sum = methods.restrict_bypurchase_brand(ndt_selection, brand_to_search)
```
How much is the profit of your brand?
```
print(f"The {brand_to_search} has a profit of: {new_ndt_sum}$")
```
# ***OCTOBER***
```
#how much does each brand earn per month?
new_oct_sum = methods.restrict_bypurchase_brand(odt_selection, brand_to_search)
```
How much is the profit of your brand?
```
print(f"The {brand_to_search} has a profit of: {new_oct_sum}$")
```
***Is the average price of products of different brands significantly different?*** We will see...
# ***NOVEMBER***
```
#see the average for each brand, is significant?
new_ndt_mean = methods.restrict_bypurchase_brand_avg(ndt_selection)
new_ndt_mean
```
# ***OCTOBER***
```
#see the average for each brand, is significant?
new_odt_mean = methods.restrict_bypurchase_brand_avg(odt_selection)
new_odt_mean
```
<span style="color:red"> ***We can say that the brand name is significant for the product type (it's obvious).*** </span>
<p> </p>
**RQ4.1** Using the function you just created, find the top 3 brands that have suffered the biggest losses in earnings between one month and the next, specifying both the loss percentage and the 2 months (e.g., brand_1 lost 20% between March and April).
```
biggest_lose = methods.big_lose(ndt_selection, odt_selection)
```
Now we plot the top 3 brands that have suffered the biggest losses in earnings between october and november.
```
methods.summarize(biggest_lose)
```
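As a hedged sketch of how a loss percentage between two monthly revenue totals could be computed (the helper name is hypothetical, not `methods.big_lose`):

```python
def loss_percentage(earlier, later):
    """Percent of revenue lost going from one month's total to the next month's."""
    if earlier == 0:
        return 0.0  # no earlier revenue: no meaningful loss to report
    return max(0.0, (earlier - later) / earlier * 100)
```

For instance, a brand earning 1000$ in one month and 800$ in the next lost 20% between the two months.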
<p> </p>
# QUESTION 5
## In what part of the day is your store most visited?
To answer the question, we only need to import two columns: *event_type* and *event_time*. We also need to parse the dates.
```
df5 = methods.import_dataset5(df_names)
```
Now we can start analyzing our data. We can create different plots that show the most visited time of the day and the most visited day of the week
```
methods.most_visited_time(df5)
```
In this first plot we notice that people are usually more active in the afternoon; in fact, the three most visited times of the day are those from 3PM to 5PM.
```
methods.most_visited_day(df5)
```
In this second analysis we notice that, over all the days of the week, **Friday** and **Saturday** are the ones with the most views.
*Notice that on the x-axis the day of the week are indicated by numbers from 0 to 6; where 0 corresponds to Monday and 6 to Sunday*.
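If needed, those weekday integers can be mapped back to names with the standard library:

```python
import calendar

# 0 -> Monday ... 6 -> Sunday, matching pandas' dayofweek convention
day_names = {i: calendar.day_name[i] for i in range(7)}
```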
<p> </p>
Now we can 'merge' the two results and see, for each day of the week, the average views of our online store split by hour.
**RQ5.1** Create a plot that for each day of the week shows the hourly average of visitors your store has.
```
methods.avg_visitors_perday(df5)
```
**Comment:** we can see how, on average, the hours of the day when the store is most visited are between 3pm and 5pm, and this holds for every day of the week. We can also notice that the graph follows pretty much the same shape all week: views start increasing quickly from midnight until they reach a plateau for some hours (5am - 1pm). From 1pm there is a peak in the early afternoon that ends with a drastic drop in views at around 6pm, which continues until around midnight, where we have the lowest values.
**To answer the question**, we suggest investing in ads and other marketing strategies in the early hours of the afternoon each day, because, as the graphs suggest, those are the hours with the highest traffic.
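The aggregation behind such a day-by-hour plot can be sketched as follows (an illustration under an assumed `event_time` datetime column, not the actual `methods.avg_visitors_perday`):

```python
import pandas as pd

# toy event log: two Mondays at 3pm, one Monday at 3am
events = pd.DataFrame({'event_time': pd.to_datetime(
    ['2019-10-07 15:00', '2019-10-14 15:00', '2019-10-07 03:00'])})

t = events['event_time'].dt
# count events per calendar day and hour, then average over days per (weekday, hour)
per_slot = events.groupby(
    [t.date, t.dayofweek.rename('weekday'), t.hour.rename('hour')]).size()
avg_views = per_slot.groupby(level=['weekday', 'hour']).mean()
```

Grouping first by calendar day keeps the result an *average* per weekday/hour slot rather than a raw two-month total.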
# QUESTION 6
***What's the conversion rate of your online store?***
First of all... What is the conversion rate?
**Conversion rate** of a product is given by the <ins>purchase</ins> rate over the number of times the product has been visited.
...and what is the purchase rate?
**Purchase rate** is the proportion of purchases versus a base metric such as users, sessions, email subscribers, etc. with a generic formula being PR = P/N where P is the number of purchases and N is the number of events during which a conversion could have occurred.
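The formula PR = P/N can be written down directly (a hypothetical helper, distinct from `methods.purchase_rate`):

```python
def compute_purchase_rate(n_purchases, n_events):
    """PR = P / N: purchases over the events during which a purchase could occur."""
    return n_purchases / n_events if n_events else 0.0
```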
```
purchase_rate = methods.purchase_rate(df_names)
```
Now we calculate the conversion rate.
**RQ6.1** Find the overall conversion rate of your store.
```
conversion_rate = methods.conversion_rate(purchase_rate, df_names)
conversion_rate
```
The returned value helps us understand what is going on in our online store: it is pretty low.
**RQ6.2** Plot the number of purchases of each category and show the conversion rate of each category in decreasing order.
```
purchase_dt = methods.percategory_show_purchases(df_names)
```
Now we prepare the conversion rate for each category.
First we get the per-category counts, then we compute the conversion rate from them.
```
eachcategory_num = methods.number_categories(df_names)
eachcategory_num
```
The conversion rate for each category, in **decreasing** order:
```
cvrate = methods.conversion_rate_percategory(df_names, eachcategory_num)
cvrate
```
We can say that our online store presents rather low purchase rates.
# QUESTION 7
The Pareto principle states that for many outcomes roughly 80% of consequences come from 20% of the causes. Also known as 80/20 rule, in e-commerce simply means that most of your business, around 80%, likely comes from about 20% of your customers.
**Prove that the pareto principle applies to your store.**
To answer this question we first have to import only the columns of the concatenated dataset that we need. In this case those are: *event_type*, *user_id*, *price* and *product_id*
```
df7 = methods.loadAllDatasets(df_names,['event_type','price','user_id','product_id'])
```
Now we can calculate the total profit of our store from all the purchases made by all users over the two months. We also compute 80% of that total.
```
purchases = df7[df7.event_type=='purchase'].price.sum()
pur_80 = .8*purchases
print(f'80% of the profit corresponds to {int(pur_80)}$')
```
We can now calculate how much the top 20% of the users (the ones that spent the most) spent during the whole months of October and November
```
profit_from_top_users = methods.proof_pareto_principle(df7)
print(f'The top 20% of users spent {int(profit_from_top_users)}$')
```
As we can see, the two results are close.
This supports the Pareto principle, which states that usually 80% of a store's profits come from 20% of its users.
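The check above can be sketched generically: rank users by total spend, take the top 20% of them, and compare their share of revenue to 80% (tiny hypothetical data, column names as in the dataset):

```python
import pandas as pd

purchases = pd.DataFrame({
    'user_id': [1, 1, 2, 3, 4, 5],
    'price':   [500.0, 300.0, 50.0, 40.0, 30.0, 80.0],
})

# total spend per user, richest first
spend = purchases.groupby('user_id')['price'].sum().sort_values(ascending=False)
n_top = max(1, int(len(spend) * 0.2))      # top 20% of users, at least one
top_share = spend.iloc[:n_top].sum() / spend.sum()
```

Here the single top user (20% of 5 users) accounts for 80% of revenue, exactly the Pareto pattern.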
# Histograms
This notebook demonstrates simple use of histograms in wn.
### Set up libraries and load exemplar dataset
```
# load libraries
import os
import opendp.whitenoise.core as wn
import numpy as np
import math
import statistics
# establish data information
data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
var_names = ["age", "sex", "educ", "race", "income", "married"]
data = np.genfromtxt(data_path, delimiter=',', names=True)
age = list(data[:]['age'])
print("Dimension of dataset: " + str(data.shape))
print("Names of variables: " + str(data.dtype.names))
```
### Creating DP Releases of Histograms
The default method for generating a histogram in WhiteNoise is by releasing counts of each bin or category using the geometric mechanism. The geometric mechanism only returns integer values for any query, so resists some vulnerabilities of DP releases from floating point approximations (see Mironov 2012). It is also possible, however, to generate histograms from the more typical Laplace mechanism. We show both approaches below.
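As a simplified illustration of the idea (not WhiteNoise's actual implementation), the two-sided geometric mechanism adds the difference of two geometric draws to an integer count, so the release is always an integer:

```python
import numpy as np

def geometric_mechanism(count, epsilon, rng):
    """Release count + two-sided geometric noise calibrated to epsilon."""
    p = 1 - np.exp(-epsilon)
    noise = rng.geometric(p) - rng.geometric(p)  # symmetric integer noise
    return count + noise

rng = np.random.default_rng(0)
release = geometric_mechanism(100, epsilon=0.5, rng=rng)
```

Because the noise is integer-valued, the release avoids the floating-point vulnerabilities that the Laplace mechanism can exhibit.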
Here we generate histograms on three types of variables:
* A continuous variable, here `income`, where the set of numbers have to be divided into bins,
* A boolean or dichotomous variable, here `sex`, that can only take on two values,
* A categorical variable, here `education`, where there are distinct categories enumerated as strings.
Note the education variable is coded in the data on a scale from 1 to 16, but we're leaving the coded values as strings throughout this notebook.
```
income_edges = list(range(0, 100000, 10000))
education_categories = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16"]
with wn.Analysis() as analysis:
data = wn.Dataset(path = data_path, column_names = var_names)
nsize = 1000
income_histogram = wn.dp_histogram(
wn.to_int(data['income'], lower=0, upper=100),
edges = income_edges,
upper = nsize,
null_value = 150,
privacy_usage = {'epsilon': 0.5}
)
income_prep = wn.histogram(wn.to_int(data['income'], lower=0, upper=100000),
edges=income_edges, null_value =-1)
income_histogram2 = wn.laplace_mechanism(income_prep, privacy_usage={"epsilon": 0.5, "delta": .000001})
sex_histogram = wn.dp_histogram(
wn.to_bool(data['sex'], true_label="0"),
upper = nsize,
privacy_usage = {'epsilon': 0.5}
)
sex_prep = wn.histogram(wn.to_bool(data['sex'], true_label="0"), null_value = True)
sex_histogram2 = wn.laplace_mechanism(sex_prep, privacy_usage={"epsilon": 0.5, "delta": .000001})
education_histogram = wn.dp_histogram(
data['educ'],
categories = education_categories,
null_value = "-1",
privacy_usage = {'epsilon': 0.5}
)
education_prep = wn.histogram(data['educ'],
categories = education_categories, null_value = "-1")
education_histogram2 = wn.laplace_mechanism(education_prep, privacy_usage={"epsilon": 0.5, "delta": .000001})
analysis.release()
print("Income histogram Geometric DP release: " + str(income_histogram.value))
print("Income histogram Laplace DP release: " + str(income_histogram2.value))
print("Sex histogram Geometric DP release: " + str(sex_histogram.value))
print("Sex histogram Laplace DP release: " + str(sex_histogram2.value))
print("Education histogram Geometric DP release:" + str(education_histogram.value))
print("Education histogram Laplace DP release: " + str(education_histogram2.value))
```
We can see most obviously that the releases from the Geometric mechanism are integer counts, while the Laplace releases are floating point numbers.
Below, we will quickly create histograms of the actual private data, for a point of comparison to our differentially private releases:
```
import matplotlib.pyplot as plt
data = np.genfromtxt(data_path, delimiter=',', names=True)
income = list(data[:]['income'])
sex = list(data[:]['sex'])
education = list(data[:]['educ'])
# An "interface" to matplotlib.axes.Axes.hist() method
n_income, bins, patches = plt.hist(income, bins=list(range(0,110000,10000)), color='#0504aa',
alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Income')
plt.ylabel('Frequency')
plt.title('True Dataset Income Distribution')
plt.show()
n_sex, bins, patches = plt.hist(sex, bins=[-0.5,0.5,1.5], color='#0504aa',
alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Sex')
plt.ylabel('Frequency')
plt.title('True Dataset Sex Distribution')
plt.show()
n_educ, bins, patches = plt.hist(education, bins=list(range(1,19,1)), color='#0504aa',
alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Education')
plt.ylabel('Frequency')
plt.title('True Dataset Education Distribution')
plt.show()
```
Below we can see the differentially private releases of these variables in shades of red, against the "true" private counts in green.
```
import matplotlib.pyplot as plt
colorseq = ["forestgreen", "indianred", "orange", "orangered", "orchid"]
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
plt.ylim([-100,500])
#inccat = ["10k","20k","30k","40k","50k","60k","70k","80k","90k","100k"]
inccat = [10,20,30,40,50,60,70,80,90,100]
width=3
inccat_left = [x + width for x in inccat]
inccat_right = [x + 2*width for x in inccat]
ax.bar(inccat, n_income, width=width, color=colorseq[0], label='True Value')
ax.bar(inccat_left, income_histogram.value, width=width, color=colorseq[1], label='DP Geometric')
ax.bar(inccat_right, income_histogram2.value, width=width, color=colorseq[2], label='DP Laplace')
ax.legend()
plt.title('Histogram of Income')
plt.xlabel('Income, in thousands')
plt.ylabel('Count')
plt.show()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
plt.ylim([0,800])
sexcat = [0,1]
width = 0.2
sexcat_left = [x + width for x in sexcat]
sexcat_right = [x + 2*width for x in sexcat]
ax.bar(sexcat, n_sex, width=width, color=colorseq[0], label='True Value')
ax.bar(sexcat_left, sex_histogram.value, width=width, color=colorseq[1], label='DP Geometric')
ax.bar(sexcat_right, sex_histogram2.value, width=width, color=colorseq[2], label='DP Laplace')
ax.legend()
plt.title('Histogram of Sex')
plt.ylabel('Count')
plt.show()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
edcat = list(range(1,18))
width = 0.25
edcat_left = [x + width for x in edcat]
edcat_right = [x + 2*width for x in edcat]
ax.bar(edcat, n_educ, width=width, color=colorseq[0], label='True Value')
ax.bar(edcat_left, education_histogram.value, width=width, color=colorseq[1], label='DP Geometric')
ax.bar(edcat_right, education_histogram2.value, width=width, color=colorseq[2], label='DP Laplace')
ax.legend()
plt.title('Histogram of Education')
plt.xlabel('Educational Attainment Category')
plt.ylabel('Count')
plt.show()
```
## References
Mironov, Ilya. "On significance of the least significant bits for differential privacy." In Proceedings of the 2012 ACM conference on Computer and communications security, pp. 650-661. 2012.
```
import os
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torchvision.transforms as T
import numpy as np
from PIL import Image
```
## Verifying image loading is in the right format
```
path = '/home/yamins/.local/lib/python3.7/site-packages/model_tools/check_submission/images'
from model_tools.activations.pytorch import load_preprocess_images
import functools
preprocessing = functools.partial(load_preprocess_images, image_size=224)
load_preprocess_images?
impath = os.path.join(path, '10.png')
im = load_preprocess_images([impath], image_size=224)
im.shape
imval = im[0].swapaxes(0, 1).swapaxes(1, 2)
plt.imshow(imval)
load_preprocess_images
impath = os.path.join(path, '10.png')
imval = Image.open(impath)
impaths = [os.path.join(path, '%d.png' % i) for i in range(10, 20)]
imval
imval.mode
def load_preprocess_images(image_filepaths, image_size, **kwargs):
images = load_images(image_filepaths)
images = preprocess_images(images, image_size=image_size, **kwargs)
return images
def load_images(image_filepaths):
return [load_image(image_filepath) for image_filepath in image_filepaths]
def load_image(image_filepath):
with Image.open(image_filepath) as pil_image:
if 'L' not in pil_image.mode.upper() and 'A' not in pil_image.mode.upper() and 'P' not in pil_image.mode.upper():
return pil_image.copy()
else: # make sure potential binary images are in RGB
rgb_image = Image.new("RGB", pil_image.size)
rgb_image.paste(pil_image)
return rgb_image
def preprocess_images(images, image_size, **kwargs):
preprocess = torchvision_preprocess_input(image_size, **kwargs)
images = [preprocess(image) for image in images]
images = np.concatenate(images)
return images
def torchvision_preprocess_input(image_size, **kwargs):
from torchvision import transforms
return transforms.Compose([
transforms.Resize((image_size, image_size)),
torchvision_preprocess(**kwargs),
])
def torchvision_preprocess(normalize_mean=(0.485, 0.456, 0.406), normalize_std=(0.229, 0.224, 0.225)):
from torchvision import transforms
return transforms.Compose([
transforms.ToTensor(),
lambda img: 255 * img.unsqueeze(0)
])
imval = load_preprocess_images([impath], image_size=224)
imval.shape
imval.dtype
```
## Looking at model outputs
```
from model_tools.activations.pytorch import PytorchWrapper
from r3m import load_r3m
r3m18cpu = load_r3m("resnet18") # resnet18, resnet34
r3m18cpu.eval();
r3m18cpu = r3m18cpu.module.to('cpu')
preprocessing = functools.partial(load_preprocess_images, image_size=224)
r3m18_wrapper = PytorchWrapper(identifier='r3m18', model=r3m18cpu, preprocessing=preprocessing)
r3m18_wrapper.image_size = 224
r3m18_wrapper
outval = r3m18_wrapper(impaths, layers=['convnet.conv1'])
outval.shape
64*112*112
outval = r3m18_wrapper(impaths, layers=['convnet.layer1.0.relu'])
outval.shape
64*56*56
outval = r3m18_wrapper([impath], layers=['convnet.layer4.1.relu'])
512*7*7
outval.shape
outval1 = r3m18_wrapper([impath], layers=['convnet.avgpool'])
outval1.shape
outval2 = r3m18_wrapper([impath], layers=['convnet.fc'])
outval2.shape
outval2
outvals2 = r3m18_wrapper(impaths, layers=['convnet.fc'])
outvals2.shape
layers = ['convnet.conv1',
'convnet.maxpool',
'convnet.layer1.1',
          'convnet.layer2.1',
'convnet.layer3.1',
'convnet.layer4.1',
'convnet.fc']
impaths = [os.path.join(path, '%d.png' % i) for i in range(10, 20)]
outvals3 = r3m18_wrapper(impaths, layers=['convnet.avgpool'])
outvals3.shape
import r3m_pytorch
```
### resnet34 and resnet50
```
r3m34cpu = load_r3m("resnet34") # resnet18, resnet34
r3m34cpu = r3m34cpu.module.to('cpu')
r3m34_wrapper = PytorchWrapper(identifier='r3m34', model=r3m34cpu, preprocessing=preprocessing)
r3m34_wrapper.image_size = 224
outval = r3m34_wrapper(impaths[:1], layers=['convnet.layer1.0.relu'])
outval.shape
def load_preprocess_images_2(image_filepaths, crop_size):
"""
define custom pre-processing here since R3M does not normalize like other models
:seealso: r3m/example.py
"""
images = load_images(image_filepaths)
# preprocessing
transforms = T.Compose([
T.Resize(256),
T.CenterCrop(crop_size),
T.ToTensor(), # ToTensor() divides by 255
lambda img: img.unsqueeze(0),
])
images = [transforms(image) * 255.0 for image in images] # R3M expects image input to be [0-255]
images = np.concatenate(images)
return images
preprocessing2 = functools.partial(load_preprocess_images_2, crop_size=224)
r3m34cpu = load_r3m("resnet34") # resnet18, resnet34
r3m34cpu = r3m34cpu.module.to('cpu')
r3m34_wrapper = PytorchWrapper(identifier='r3m34', model=r3m34cpu, preprocessing=preprocessing2)
r3m34_wrapper.image_size = 224
outval = r3m34_wrapper(impaths[:1], layers=['convnet.layer1.0.relu'])
outval.shape
outval2 = r3m34_wrapper(impaths[:1], layers=['convnet.avgpool'])
outval2.shape
r3m50cpu = load_r3m("resnet50") # resnet18, resnet34
r3m50cpu = r3m50cpu.module.to('cpu')
r3m50_wrapper = PytorchWrapper(identifier='r3m50', model=r3m50cpu, preprocessing=preprocessing2)
r3m50_wrapper.image_size = 224
outval2 = r3m50_wrapper(impaths[:1], layers=['convnet.avgpool'])
outval2.shape
outval = r3m50_wrapper(impaths[:1], layers=['convnet.layer1.0.relu'])
outval.shape
256*56*56
```
```
!pip install econml
# Some imports to get us started
import warnings
warnings.simplefilter('ignore')
# Utilities
import os
import urllib.request
import numpy as np
import pandas as pd
from networkx.drawing.nx_pydot import to_pydot
from IPython.display import Image, display
# Generic ML imports
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import GradientBoostingRegressor
# EconML imports
from econml.dml import LinearDML, CausalForestDML
from econml.cate_interpreter import SingleTreeCateInterpreter, SingleTreePolicyInterpreter
import matplotlib.pyplot as plt
%matplotlib inline
# Import the sample pricing data
file_url = "https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing_sample.csv"
train_data = pd.read_csv(file_url)
train_data.head()
#estimator inputs
train_data["log_demand"] = np.log(train_data["demand"])
train_data["log_price"] = np.log(train_data["price"])
Y = train_data["log_demand"].values
T = train_data["log_price"].values
X = train_data[["income"]].values # features
confounder_names = ["account_age", "age", "avg_hours", "days_visited", "friends_count", "has_membership", "is_US", "songs_purchased"]
W = train_data[confounder_names].values
# Get test data
X_test = np.linspace(0, 5, 100).reshape(-1, 1)
X_test_data = pd.DataFrame(X_test, columns=["income"])
```
## Create Causal Model
```
# initiate an EconML cate estimator
est = LinearDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor(),
featurizer=PolynomialFeatures(degree=2, include_bias=False))
# fit through dowhy
est_dw = est.dowhy.fit(Y, T, X=X, W=W, outcome_names=["log_demand"], treatment_names=["log_price"], feature_names=["income"],
confounder_names=confounder_names, inference="statsmodels")
# Visualize causal graph
try:
# Try pretty printing the graph. Requires pydot and pygraphviz
display(
Image(to_pydot(est_dw._graph._graph).create_png())
)
except:
# Fall back on default graph view
est_dw.view_model()
identified_estimand = est_dw.identified_estimand_
print(identified_estimand)
# initiate an EconML cate estimator
est_nonparam = CausalForestDML(model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor())
# fit through dowhy
est_nonparam_dw = est_nonparam.dowhy.fit(Y, T, X=X, W=W, outcome_names=["log_demand"], treatment_names=["log_price"],
feature_names=["income"], confounder_names=confounder_names, inference="blb")
```
# Test Estimate Robustness with DoWhy
## Add Random Common Cause
How robust are our estimates to adding another confounder?
```
res_random = est_nonparam_dw.refute_estimate(method_name="random_common_cause")
print(res_random)
```
How robust are our estimates to unobserved confounders?
```
res_unobserved = est_nonparam_dw.refute_estimate(
method_name="add_unobserved_common_cause",
confounders_effect_on_treatment="linear",
confounders_effect_on_outcome="linear",
effect_strength_on_treatment=0.1,
effect_strength_on_outcome=0.1,
)
print(res_unobserved)
```
## Replace Treatment with a Random (Placebo) Variable
What happens to our estimates if we replace the treatment variable with noise?
```
res_placebo = est_nonparam_dw.refute_estimate(
method_name="placebo_treatment_refuter", placebo_type="permute",
num_simulations=3
)
print(res_placebo)
```
## Remove a Random Subset of the Data
Do we recover similar estimates on subsets of the data?
```
res_subset = est_nonparam_dw.refute_estimate(
method_name="data_subset_refuter", subset_fraction=0.8,
num_simulations=3)
print(res_subset)
```

# _*Qiskit Finance: Loading and Processing Stock-Market Time-Series Data*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
***
### Contributors
Jakub Marecek<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
### Introduction
Across many problems in finance, one starts with time series. Here, we showcase how to generate pseudo-random time-series, download actual stock-market time series from a number of common providers, and how to compute time-series similarity measures.
```
%matplotlib inline
from qiskit.finance.data_providers import *
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import datetime
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
data = RandomDataProvider(tickers=["TICKER1", "TICKER2"],
start = datetime.datetime(2016, 1, 1),
end = datetime.datetime(2016, 1, 30),
seed = 1)
data.run()
```
Once the data are loaded, you can run a variety of algorithms on those to aggregate the data. Notably, you can compute the covariance matrix or a variant, which would consider alternative time-series similarity measures based on <a target="_blank" href="https://en.wikipedia.org/wiki/Dynamic_time_warping">dynamic time warping</a> (DTW). In DTW, changes that vary in speed, e.g., one stock's price following another stock's price with a small delay, can be accommodated.
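As an illustration of the DTW idea (a textbook O(nm) sketch, independent of the provider's internals):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping: cumulative alignment cost with warping moves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # match, insertion, or deletion: take the cheapest predecessor
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A series that follows another with a small delay still aligns at zero cost, which is exactly the delayed-follower behaviour DTW accommodates.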
```
means = data.get_mean_vector()
print("Means:")
print(means)
rho = data.get_similarity_matrix()
print("A time-series similarity measure:")
print(rho)
plt.imshow(rho)
plt.show()
cov = data.get_covariance_matrix()
print("A covariance matrix:")
print(cov)
plt.imshow(cov)
plt.show()
```
If you wish, you can look into the underlying pseudo-random time-series using the code below. Please note that the private class members (starting with underscore) may change in future releases of Qiskit.
```
print("The underlying evolution of stock prices:")
for (cnt, s) in enumerate(data._tickers):
plt.plot(data._data[cnt], label=s)
plt.legend()
plt.xticks(rotation=90)
plt.show()
for (cnt, s) in enumerate(data._tickers):
print(s)
print(data._data[cnt])
```
Clearly, you can adapt the number and names of tickers and the range of dates:
```
data = RandomDataProvider(tickers=["CompanyA", "CompanyB", "CompanyC"],
start = datetime.datetime(2015, 1, 1),
end = datetime.datetime(2016, 1, 30),
seed = 1)
data.run()
for (cnt, s) in enumerate(data._tickers):
plt.plot(data._data[cnt], label=s)
plt.legend()
plt.xticks(rotation=90)
plt.show()
```
### Access to closing-price time-series
While the access to real-time data usually requires a payment, it is possible
to access historical (adjusted) closing prices via Wikipedia and Quandl
free of charge, following registration at:
https://www.quandl.com/?modal=register
In the code below, one needs to specify actual tickers of actual NASDAQ
issues and the access token you obtain from Quandl; by running the code below, you agree to the Quandl terms and
conditions, including a liability waiver.
Notice that at least two tickers are required for the computation
of covariance and time-series matrices, but hundreds of tickers may go
beyond the fair usage limits of Quandl.
```
stocks = ["REPLACEME1", "REPLACEME2"]
wiki = WikipediaDataProvider(
token = "REPLACEME",
tickers = stocks,
stockmarket = StockMarket.NASDAQ,
start = datetime.datetime(2016,1,1),
end = datetime.datetime(2016,1,30))
wiki.run()
```
Once the data are loaded, you can again compute the covariance matrix or its DTW variants.
```
if wiki._n <= 1:
raise Exception("Not enough data to plot covariance or time-series similarity. Please use at least two tickers.")
rho = wiki.get_similarity_matrix()
print("A time-series similarity measure:")
print(rho)
plt.imshow(rho)
plt.show()
cov = wiki.get_covariance_matrix()
print("A covariance matrix:")
print(cov)
plt.imshow(cov)
plt.show()
```
If you wish, you can look into the underlying time-series using:
```
print("The underlying evolution of stock prices:")
for (cnt, s) in enumerate(stocks):
plt.plot(wiki._data[cnt], label=s)
plt.legend()
plt.xticks(rotation=90)
plt.show()
for (cnt, s) in enumerate(stocks):
print(s)
print(wiki._data[cnt])
```
### [Optional] Set up a token to access recent, fine-grained time-series
If you would like to download professional data, you will have to set up a token with one of the major providers. Let us illustrate this with NASDAQ Data on Demand, which can supply bid and ask prices at arbitrary resolution, as well as aggregates such as daily adjusted closing prices, for NASDAQ and NYSE issues.
If you don't have a NASDAQ Data on Demand license, you can contact NASDAQ (cf. https://business.nasdaq.com/intel/GIS/Nasdaq-Data-on-Demand.html) to obtain a trial or paid license.
If and when you have access to NASDAQ Data on Demand using your own token, you should replace REPLACE-ME below with the token.
To ensure the security of the connection, you should also have your own means of validating NASDAQ's certificates. The DataOnDemandProvider constructor has an optional argument `verify`, which can be `None`, a string, or a boolean. If it is `None`, certifi certificates will be used (the default). If `verify` is a string, it should point to a certificate for the HTTPS connection to NASDAQ (dataondemand.nasdaq.com), either in the form of a CA_BUNDLE file or a directory in which to look.
```
from qiskit.finance.data_providers.data_on_demand_provider import StockMarket
try:
    nasdaq = DataOnDemandProvider(token = "REPLACE-ME",
                                  tickers = stocks,
                                  stockmarket = StockMarket.NASDAQ,
                                  start = datetime.datetime(2016,1,1),
                                  end = datetime.datetime(2016,1,2))
    nasdaq.run()
    nasdaq.plot()
except QiskitFinanceError as e:
    print(e)
    print("You need to replace REPLACE-ME with a valid token.")
```
Another major vendor of stock market data is Exchange Data International (EDI), whose API can be used to query over 100 emerging and frontier markets across Africa, Asia, the Far East, Latin America, and the Middle East, as well as the more established ones. See:
https://www.exchange-data.com/pricing-data/adjusted-prices.php#exchange-coverage
for an overview of the coverage.
The access again requires a valid access token to replace REPLACE-ME below. The token can be obtained on a trial or paid-for basis at:
https://www.quandl.com/
In the following example, you need to replace TICKER1 and TICKER2 with valid tickers at the London Stock Exchange.
```
from qiskit.finance.data_providers.exchangedataprovider import StockMarket
try:
    lse = ExchangeDataProvider(token = "REPLACE-ME",
                               tickers = ["TICKER1", "TICKER2"],
                               stockmarket = StockMarket.LONDON,
                               start = datetime.datetime(2019,1,1),
                               end = datetime.datetime(2019,1,30))
    lse.run()
    lse.plot()
except QiskitFinanceError as e:
    print(e)
    print("You need to replace REPLACE-ME with a valid token.")
```
For the actual use of the data, please see the <a href="../optimization/portfolio_optimization.ipynb">portfolio_optimization</a> or <a href="../optimization/portfolio_diversification.ipynb">portfolio_diversification</a> notebooks.
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
| github_jupyter |
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/audio/transfer_learning_audio"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/transfer_learning_audio.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/audio/transfer_learning_audio.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/audio/transfer_learning_audio.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/yamnet/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
# Transfer learning with YAMNet for environmental sound classification
[YAMNet](https://tfhub.dev/google/yamnet/1) is a pre-trained deep neural network that can predict audio events from [521 classes](https://github.com/tensorflow/models/blob/master/research/audioset/yamnet/yamnet_class_map.csv), such as laughter, barking, or a siren.
In this tutorial you will learn how to:
- Load and use the YAMNet model for inference.
- Build a new model using the YAMNet embeddings to classify cat and dog sounds.
- Evaluate and export your model.
## Import TensorFlow and other libraries
Start by installing [TensorFlow I/O](https://www.tensorflow.org/io), which will make it easier for you to load audio files off disk.
```
!pip install tensorflow_io
import os
from IPython import display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_io as tfio
```
## About YAMNet
[YAMNet](https://github.com/tensorflow/models/tree/master/research/audioset/yamnet) is a pre-trained neural network that employs the [MobileNetV1](https://arxiv.org/abs/1704.04861) depthwise-separable convolution architecture. It can use an audio waveform as input and make independent predictions for each of the 521 audio events from the [AudioSet](http://g.co/audioset) corpus.
Internally, the model extracts "frames" from the audio signal and processes batches of these frames. This version of the model uses frames that are 0.96 seconds long and extracts one frame every 0.48 seconds.
The model accepts a 1-D float32 Tensor or NumPy array containing a waveform of arbitrary length, represented as single-channel (mono) 16 kHz samples in the range `[-1.0, +1.0]`. This tutorial contains code to help you convert WAV files into the supported format.
The model returns 3 outputs, including the class scores, embeddings (which you will use for transfer learning), and the log mel [spectrogram](https://www.tensorflow.org/tutorials/audio/simple_audio#spectrogram). You can find more details [here](https://tfhub.dev/google/yamnet/1).
One specific use of YAMNet is as a high-level feature extractor - the 1,024-dimensional embedding output. You will use the base (YAMNet) model's input features and feed them into your shallower model consisting of one hidden `tf.keras.layers.Dense` layer. Then, you will train the network on a small amount of data for audio classification _without_ requiring a lot of labeled data and training end-to-end. (This is similar to [transfer learning for image classification with TensorFlow Hub](https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub).)
First, you will test the model and see the results of classifying audio. You will then construct the data pre-processing pipeline.
### Loading YAMNet from TensorFlow Hub
You are going to use a pre-trained YAMNet from [Tensorflow Hub](https://tfhub.dev/) to extract the embeddings from the sound files.
Loading a model from TensorFlow Hub is straightforward: choose the model, copy its URL, and use the `load` function.
Note: to read the documentation of the model, use the model URL in your browser.
```
yamnet_model_handle = 'https://tfhub.dev/google/yamnet/1'
yamnet_model = hub.load(yamnet_model_handle)
```
With the model loaded, you can follow the [YAMNet basic usage tutorial](https://www.tensorflow.org/hub/tutorials/yamnet) and download a sample WAV file to run the inference.
```
testing_wav_file_name = tf.keras.utils.get_file('miaow_16k.wav',
'https://storage.googleapis.com/audioset/miaow_16k.wav',
cache_dir='./',
cache_subdir='test_data')
print(testing_wav_file_name)
```
You will need a function to load audio files, which will also be used later when working with the training data. (Learn more about reading audio files and their labels in [Simple audio recognition](https://www.tensorflow.org/tutorials/audio/simple_audio#reading_audio_files_and_their_labels).)
Note: The returned `wav_data` from `load_wav_16k_mono` is already normalized to values in the `[-1.0, 1.0]` range (for more information, go to [YAMNet's documentation on TF Hub](https://tfhub.dev/google/yamnet/1)).
```
# Utility functions for loading audio files and making sure the sample rate is correct.
@tf.function
def load_wav_16k_mono(filename):
    """ Load a WAV file, convert it to a float tensor, resample to 16 kHz single-channel audio. """
    file_contents = tf.io.read_file(filename)
    wav, sample_rate = tf.audio.decode_wav(
        file_contents,
        desired_channels=1)
    wav = tf.squeeze(wav, axis=-1)
    sample_rate = tf.cast(sample_rate, dtype=tf.int64)
    wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
    return wav
testing_wav_data = load_wav_16k_mono(testing_wav_file_name)
_ = plt.plot(testing_wav_data)
# Play the audio file.
display.Audio(testing_wav_data,rate=16000)
```
### Load the class mapping
It's important to load the class names that YAMNet is able to recognize. The mapping file is present at `yamnet_model.class_map_path()` in the CSV format.
```
class_map_path = yamnet_model.class_map_path().numpy().decode('utf-8')
class_names =list(pd.read_csv(class_map_path)['display_name'])
for name in class_names[:20]:
    print(name)
print('...')
```
### Run inference
YAMNet provides frame-level class-scores (i.e., 521 scores for every frame). In order to determine clip-level predictions, the scores can be aggregated per-class across frames (e.g., using mean or max aggregation). This is done below by `scores_np.mean(axis=0)`. Finally, to find the top-scored class at the clip-level, you take the maximum of the 521 aggregated scores.
```
scores, embeddings, spectrogram = yamnet_model(testing_wav_data)
class_scores = tf.reduce_mean(scores, axis=0)
top_class = tf.math.argmax(class_scores)
inferred_class = class_names[top_class]
print(f'The main sound is: {inferred_class}')
print(f'The embeddings shape: {embeddings.shape}')
```
Note: The model correctly inferred an animal sound. Your goal in this tutorial is to increase the model's accuracy for specific classes. Also, notice that the model generated 13 embeddings, 1 per frame.
## ESC-50 dataset
The [ESC-50 dataset](https://github.com/karolpiczak/ESC-50#repository-content) ([Piczak, 2015](https://www.karolpiczak.com/papers/Piczak2015-ESC-Dataset.pdf)) is a labeled collection of 2,000 five-second long environmental audio recordings. The dataset consists of 50 classes, with 40 examples per class.
Download the dataset and extract it.
```
_ = tf.keras.utils.get_file('esc-50.zip',
'https://github.com/karoldvl/ESC-50/archive/master.zip',
cache_dir='./',
cache_subdir='datasets',
extract=True)
```
### Explore the data
The metadata for each file is specified in the csv file at `./datasets/ESC-50-master/meta/esc50.csv`
and all the audio files are in `./datasets/ESC-50-master/audio/`
You will create a pandas `DataFrame` with the mapping and use that to have a clearer view of the data.
```
esc50_csv = './datasets/ESC-50-master/meta/esc50.csv'
base_data_path = './datasets/ESC-50-master/audio/'
pd_data = pd.read_csv(esc50_csv)
pd_data.head()
```
### Filter the data
Now that the data is stored in the `DataFrame`, apply some transformations:
- Filter out rows and use only the selected classes - `dog` and `cat`. If you want to use any other classes, this is where you can choose them.
- Amend the filename to have the full path. This will make loading easier later.
- Change targets to be within a specific range. In this example, `dog` will remain at `0`, but `cat` will become `1` instead of its original value of `5`.
```
my_classes = ['dog', 'cat']
map_class_to_id = {'dog':0, 'cat':1}
filtered_pd = pd_data[pd_data.category.isin(my_classes)]
class_id = filtered_pd['category'].apply(lambda name: map_class_to_id[name])
filtered_pd = filtered_pd.assign(target=class_id)
full_path = filtered_pd['filename'].apply(lambda row: os.path.join(base_data_path, row))
filtered_pd = filtered_pd.assign(filename=full_path)
filtered_pd.head(10)
```
### Load the audio files and retrieve embeddings
Here you'll apply the `load_wav_16k_mono` and prepare the WAV data for the model.
When extracting embeddings from the WAV data, you get an array of shape `(N, 1024)` where `N` is the number of frames that YAMNet found (one for every 0.48 seconds of audio).
Your model will use each frame as one input. Therefore, you need to create a new column that has one frame per row. You also need to expand the labels and the `fold` column to properly reflect these new rows.
The expanded `fold` column keeps the original values. You cannot mix frames because, when performing the splits, you might end up with parts of the same audio in different splits, which would make your validation and test steps less effective.
```
filenames = filtered_pd['filename']
targets = filtered_pd['target']
folds = filtered_pd['fold']
main_ds = tf.data.Dataset.from_tensor_slices((filenames, targets, folds))
main_ds.element_spec
def load_wav_for_map(filename, label, fold):
    return load_wav_16k_mono(filename), label, fold
main_ds = main_ds.map(load_wav_for_map)
main_ds.element_spec
# applies the embedding extraction model to a wav data
def extract_embedding(wav_data, label, fold):
    ''' run YAMNet to extract embedding from the wav data '''
    scores, embeddings, spectrogram = yamnet_model(wav_data)
    num_embeddings = tf.shape(embeddings)[0]
    return (embeddings,
            tf.repeat(label, num_embeddings),
            tf.repeat(fold, num_embeddings))
# extract embedding
main_ds = main_ds.map(extract_embedding).unbatch()
main_ds.element_spec
```
### Split the data
You will use the `fold` column to split the dataset into train, validation and test sets.
ESC-50 is arranged into five uniformly-sized cross-validation `fold`s, such that clips from the same original source are always in the same `fold` - find out more in the [ESC: Dataset for Environmental Sound Classification](https://www.karolpiczak.com/papers/Piczak2015-ESC-Dataset.pdf) paper.
The last step is to remove the `fold` column from the dataset since you're not going to use it during training.
```
cached_ds = main_ds.cache()
train_ds = cached_ds.filter(lambda embedding, label, fold: fold < 4)
val_ds = cached_ds.filter(lambda embedding, label, fold: fold == 4)
test_ds = cached_ds.filter(lambda embedding, label, fold: fold == 5)
# remove the folds column now that it's not needed anymore
remove_fold_column = lambda embedding, label, fold: (embedding, label)
train_ds = train_ds.map(remove_fold_column)
val_ds = val_ds.map(remove_fold_column)
test_ds = test_ds.map(remove_fold_column)
train_ds = train_ds.cache().shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
```
## Create your model
You did most of the work!
Next, define a very simple [Sequential](https://www.tensorflow.org/guide/keras/sequential_model) model with one hidden layer and two outputs to recognize cats and dogs from sounds.
```
my_model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(1024), dtype=tf.float32,
name='input_embedding'),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(len(my_classes))
], name='my_model')
my_model.summary()
my_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="adam",
metrics=['accuracy'])
callback = tf.keras.callbacks.EarlyStopping(monitor='loss',
patience=3,
restore_best_weights=True)
history = my_model.fit(train_ds,
epochs=20,
validation_data=val_ds,
callbacks=callback)
```
Let's run the `evaluate` method on the test data just to be sure there's no overfitting.
```
loss, accuracy = my_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
```
You did it!
## Test your model
Next, try your model on the embedding from the previous test using YAMNet only.
```
scores, embeddings, spectrogram = yamnet_model(testing_wav_data)
result = my_model(embeddings).numpy()
inferred_class = my_classes[result.mean(axis=0).argmax()]
print(f'The main sound is: {inferred_class}')
```
## Save a model that can directly take a WAV file as input
Your model works when you give it the embeddings as input.
In a real-world scenario, you'll want to use audio data as a direct input.
To do that, you will combine YAMNet with your model into a single model that you can export for other applications.
To make it easier to use the model's result, the final layer will be a `reduce_mean` operation. When using this model for serving (which you will learn about later in the tutorial), you will need the name of the final layer. If you don't define one, TensorFlow will auto-define an incremental one that makes it hard to test, as it will keep changing every time you train the model. When using a raw TensorFlow operation, you can't assign a name to it. To address this issue, you'll create a custom layer that applies `reduce_mean` and call it `'classifier'`.
```
class ReduceMeanLayer(tf.keras.layers.Layer):
    def __init__(self, axis=0, **kwargs):
        super(ReduceMeanLayer, self).__init__(**kwargs)
        self.axis = axis

    def call(self, input):
        return tf.math.reduce_mean(input, axis=self.axis)
saved_model_path = './dogs_and_cats_yamnet'
input_segment = tf.keras.layers.Input(shape=(), dtype=tf.float32, name='audio')
embedding_extraction_layer = hub.KerasLayer(yamnet_model_handle,
trainable=False, name='yamnet')
_, embeddings_output, _ = embedding_extraction_layer(input_segment)
serving_outputs = my_model(embeddings_output)
serving_outputs = ReduceMeanLayer(axis=0, name='classifier')(serving_outputs)
serving_model = tf.keras.Model(input_segment, serving_outputs)
serving_model.save(saved_model_path, include_optimizer=False)
tf.keras.utils.plot_model(serving_model)
```
Load your saved model to verify that it works as expected.
```
reloaded_model = tf.saved_model.load(saved_model_path)
```
And for the final test: given some sound data, does your model return the correct result?
```
reloaded_results = reloaded_model(testing_wav_data)
cat_or_dog = my_classes[tf.math.argmax(reloaded_results)]
print(f'The main sound is: {cat_or_dog}')
```
If you want to try your new model on a serving setup, you can use the 'serving_default' signature.
```
serving_results = reloaded_model.signatures['serving_default'](testing_wav_data)
cat_or_dog = my_classes[tf.math.argmax(serving_results['classifier'])]
print(f'The main sound is: {cat_or_dog}')
```
## (Optional) Some more testing
The model is ready.
Let's compare it to YAMNet on the test dataset.
```
test_pd = filtered_pd.loc[filtered_pd['fold'] == 5]
row = test_pd.sample(1)
filename = row['filename'].item()
print(filename)
waveform = load_wav_16k_mono(filename)
print(f'Waveform values: {waveform}')
_ = plt.plot(waveform)
display.Audio(waveform, rate=16000)
# Run the model, check the output.
scores, embeddings, spectrogram = yamnet_model(waveform)
class_scores = tf.reduce_mean(scores, axis=0)
top_class = tf.math.argmax(class_scores)
inferred_class = class_names[top_class]
top_score = class_scores[top_class]
print(f'[YAMNet] The main sound is: {inferred_class} ({top_score})')
reloaded_results = reloaded_model(waveform)
your_top_class = tf.math.argmax(reloaded_results)
your_inferred_class = my_classes[your_top_class]
class_probabilities = tf.nn.softmax(reloaded_results, axis=-1)
your_top_score = class_probabilities[your_top_class]
print(f'[Your model] The main sound is: {your_inferred_class} ({your_top_score})')
```
## Next steps
You have created a model that can classify dog and cat sounds. With the same idea and a different dataset, you can try, for example, building an [acoustic identifier of birds](https://www.kaggle.com/c/birdclef-2021/) based on their songs.
Share your project with the TensorFlow team on social media!
| github_jupyter |
# Here we will learn topics like
1. Universal functions
2. Aggregate functions
3. Broadcasting
## Universal function
1. A big difference between a plain Python list and a NumPy array is execution speed.
2. A Python loop iterates through each element and then processes it.
3. A NumPy array uses vectorized operations, which compute over all elements of the array at once.
> We implement computation on NumPy arrays using universal functions, "ufuncs".
#### Why NumPy functions are fast
> NumPy ufuncs are fast because the element-wise loop runs in precompiled C code instead of the Python interpreter. In plain Python, every element is dispatched through the interpreter at run time, which adds overhead to each iteration; a NumPy ufunc hands the whole operation to compiled code in a single call, which saves a great deal of time when processing large data.
##### Compare effectiveness between a Python loop and a NumPy ufunc
```
# we want to find reciprocal of array element
import numpy as np
np.random.seed(0)
def reciprocal(arr):
    output = np.empty(len(arr))
    for x in range(len(arr)):
        output[x] = 1.0 / arr[x]
    return output
array = np.random.randint(5,20, size=500)
%timeit reciprocal(array)
# calculating same operation using numpy universal function
%timeit 1.0/array
```
#### Exploring more ufuncs
1. Arithmetic operations are of two types: unary and binary.
2. Examples include unary negation (-), and binary operations such as ** (exponent) and % (modulo).
```
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2)  # floor division
print("-x = ", -x)
print("x ** 2 = ", x ** 2)  # power
print("x % 2 = ", x % 2)  # remainder
y = np.arange(-10,-5)
print('y',y)
print("numpy abs (np.abs):", np.abs(y))
print("numpy absolute (np.absolute, same function):", np.absolute(y))
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))
x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))
```
#### Some more operations
> Calling `reduce` on the `add` ufunc returns the sum of all elements in the array, while `accumulate` also stores the intermediate results.
You can also specify where a ufunc writes its output using the `out` argument.
```
##### Specifying output
x = np.arange(5)
y = np.empty(5)
np.multiply(x, 10, out=y)
print(y)
x = [1,2,3,4,5]
print(np.multiply.reduce(x))
print(np.multiply.accumulate(x))
```
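The note above mentions the `add` ufunc; a quick sketch with `np.add` (assuming only NumPy itself) shows the same `reduce`/`accumulate` pattern:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])

# reduce repeatedly applies the ufunc until a single result remains (the sum here)
print(np.add.reduce(x))      # 15

# accumulate also keeps every intermediate result (a running sum)
print(np.add.accumulate(x))  # [ 1  3  6 10 15]
```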
## Aggregation in NumPy
1. Compare `sum()`, `np.sum()`, and `np.nansum()`.
2. Remember that `np.sum()` also takes care of multidimensionality, so never use Python's built-in `sum()` with a multidimensional array.
```
big_array = np.random.rand(1000000)
%timeit sum(big_array)
%timeit np.sum(big_array)
```
| Function Name | NaN-safe Version | Description |
|---------------|------------------|-------------|
| np.sum | np.nansum | Compute sum of elements |
| np.prod | np.nanprod | Compute product of elements |
| np.mean | np.nanmean | Compute mean of elements |
| np.std | np.nanstd | Compute standard deviation |
| np.var | np.nanvar | Compute variance |
| np.min | np.nanmin | Find minimum value |
| np.max | np.nanmax | Find maximum value |
| np.argmin | np.nanargmin | Find index of minimum value |
| np.argmax | np.nanargmax | Find index of maximum value |
| np.median | np.nanmedian | Compute median of elements |
| np.percentile | np.nanpercentile | Compute rank-based statistics of elements |
| np.any | N/A | Evaluate whether any elements are true |
| np.all | N/A | Evaluate whether all elements are true |
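A brief sketch of why the NaN-safe versions in the table matter: a single `np.nan` poisons the ordinary aggregates, while the `nan*` variants simply skip missing values.

```python
import numpy as np

data = np.array([1.0, 2.0, np.nan, 4.0])

print(np.sum(data))      # nan - one NaN makes the whole sum NaN
print(np.nansum(data))   # 7.0 - NaN entries are skipped
print(np.nanmean(data))  # mean of the three non-NaN values
```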
# Broadcasting
#### Rules of Broadcasting
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays:
- Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is padded with ones on its leading (left) side.
- Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
- Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
M = np.ones((2, 3))
a = np.arange(3)
- Let's consider an operation on these two arrays. The shape of the arrays are
M.shape = (2, 3)
a.shape = (3,)
- We see by rule 1 that the array a has fewer dimensions, so we pad it on the left with ones:
M.shape -> (2, 3)
a.shape -> (1, 3)
- By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match:
M.shape -> (2, 3)
a.shape -> (2, 3)
- The shapes match, and we see that the final shape will be (2, 3):
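This first example can be verified directly in NumPy:

```python
import numpy as np

M = np.ones((2, 3))
a = np.arange(3)

# a (shape (3,)) is padded to (1, 3) by rule 1, then stretched to (2, 3) by rule 2
result = M + a
print(result.shape)  # (2, 3)
print(result)        # every row is [1. 2. 3.]
```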
M = np.ones((3, 2))
a = np.arange(3)
- This is just a slightly different situation than in the first example: the matrix M is transposed. How does this affect the calculation? The shape of the arrays are
M.shape = (3, 2)
a.shape = (3,)
- Again, rule 1 tells us that we must pad the shape of a with ones:
M.shape -> (3, 2)
a.shape -> (1, 3)
- By rule 2, the first dimension of a is stretched to match that of M:
M.shape -> (3, 2)
a.shape -> (3, 3)
- Now we hit rule 3: the final shapes do not match, so these two arrays are incompatible, as we can observe by attempting this operation:
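Attempting the incompatible operation indeed raises an error:

```python
import numpy as np

M = np.ones((3, 2))
a = np.arange(3)

try:
    M + a  # shapes (3, 2) and (3,): a pads to (1, 3), and 2 vs 3 cannot match
except ValueError as e:
    print("Broadcasting failed:", e)
```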
#### Practice of broadcasting
```
# Say we have 10 samples, each measuring the heights of three flowers,
# so we need an array of 10 rows and 3 columns
#x = np.random.randint(1, 50, size=30).reshape(10, 3)
x = np.random.random((10, 3))
#print(x)
# To know the mean height of each flower:
y = np.empty(3)
np.mean(x, axis=0, out=y)  # axis=0 averages down the rows, giving one mean per column
print(y)
# To verify the mean is correct: the mean of the centered data (point - mean)
# should always be (close to) zero
xcentered = x - y
print(xcentered.mean(0))
```
| github_jupyter |
Azure ML & Azure Databricks notebooks by Parashar Shah.
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
We support installing the AML SDK as a library from the GUI. When attaching a library, follow https://docs.databricks.com/user-guide/libraries.html and add the string below as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.
**install azureml-sdk**
* Source: Upload Python Egg or PyPi
* PyPi Name: `azureml-sdk[databricks]`
* Select Install Library
```
import azureml.core
# Check core SDK version number - based on build number of preview/master.
print("SDK version:", azureml.core.VERSION)
```
Please specify the Azure subscription Id, resource group name, workspace name, and the region in which you want to create the Azure Machine Learning Workspace.
You can get the value of your Azure subscription ID from the Azure Portal by selecting Subscriptions from the menu on the left.
For the resource_group, use the name of the resource group that contains your Azure Databricks Workspace.
NOTE: If you provide a resource group name that does not exist, the resource group will be automatically created. This may or may not succeed in your environment, depending on the permissions you have on your Azure Subscription.
```
# subscription_id = "<your-subscription-id>"
# resource_group = "<your-existing-resource-group>"
# workspace_name = "<a-new-or-existing-workspace; it is unrelated to Databricks workspace>"
# workspace_region = "<your-resource group-region>"
# Set auth to be used by workspace related APIs.
# For automation or CI/CD ServicePrincipalAuthentication can be used.
# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py
auth = None
# import the Workspace class and check the azureml SDK version
# exist_ok checks if workspace exists or not.
from azureml.core import Workspace
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
auth = auth,
exist_ok=True)
#get workspace details
ws.get_details()
ws = Workspace(workspace_name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
auth = auth)
# persist the subscription id, resource group name, and workspace name in aml_config/config.json.
ws.write_config()
#if you need to give a different path/filename please use this
#write_config(path="/databricks/driver/aml_config/",file_name=<alias_conf.cfg>)
help(Workspace)
# import the Workspace class and check the azureml SDK version
from azureml.core import Workspace
ws = Workspace.from_config(auth = auth)
#ws = Workspace.from_config(<full path>)
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
| github_jupyter |
# 🔢 Vectorizing Guide
Firstly, we must import what we need from Relevance AI
```
from relevanceai import Client
from relevanceai.utils.datasets import (
get_iris_dataset,
get_palmer_penguins_dataset,
get_online_ecommerce_dataset,
)
client = Client()
```
## Example 1
For this first example we are going to work with a purely numeric dataset. The Iris dataset contains 4 numeric features and another text column with the label.
```
iris_documents = get_iris_dataset()
dataset = client.Dataset("iris")
dataset.insert_documents(iris_documents, create_id=True)
```
Here we can see the dataset schema, pre-vectorization
```
dataset.schema
```
Vectorizing is as simple as specifying `create_feature_vector=True`.
While species is a text feature, we do not need to vectorize it; smart typechecking recognises this field as a label-like text field we would not usually vectorize.
`create_feature_vector=True` is what creates our "document" vectors. This concatenates all numeric/vector fields into a single "document" vector. This new vector field is always called `f"_dim{n_dims}_feature_vector_"`, with `n_dims` being the size of the concatenated vector.
Furthermore, for numeric stability across algorithms, sklearn's StandardScaler is applied to the concatenated vector field. If the concatenated size of a vector field is >512 dims, PCA is automatically applied.
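As a rough NumPy sketch of the scaling step (an illustration of the idea only, not the library's actual implementation), StandardScaler standardizes each dimension of the concatenated vectors to zero mean and unit variance:

```python
import numpy as np

# Hypothetical concatenated "document" vectors: 5 documents, 4 dimensions
vectors = np.array([
    [1.0, 10.0, 0.1, 3.0],
    [2.0, 20.0, 0.2, 1.0],
    [3.0, 30.0, 0.3, 2.0],
    [4.0, 40.0, 0.4, 5.0],
    [5.0, 50.0, 0.5, 4.0],
])

# Per dimension: subtract the mean, divide by the standard deviation
scaled = (vectors - vectors.mean(axis=0)) / vectors.std(axis=0)

print(scaled.mean(axis=0))  # ~0 in every dimension
print(scaled.std(axis=0))   # ~1 in every dimension
```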
```
dataset.vectorize(create_feature_vector=True)
```
### or
```
dataset.vectorize(fields=["numeric"], create_feature_vector=True)
```
You can see below that the dataset schema has been altered accordingly
```
dataset.schema
```
## Example 2
For this second example we are going to work with a mixed numeric and text dataset. The Palmer Penguins dataset contains several numeric features and a text column called "Comments".
```
penguins_documents = get_palmer_penguins_dataset()
dataset.insert_documents(penguins_documents, create_id=True)
```
We must install the default Encoders for text vectorizing from vectorhub
```
!pip install vectorhub[encoders-text-tfhub-windows] # If you are on windows
!pip install vectorhub[encoders-text-tfhub] # other
```
Calling `vectorize` with no arguments automatically detects which text and image fields are present in your dataset. Since this is a new function, its typechecking could be faulty. If need be, specify the data types in the same format as the schema, with `_text_` denoting text fields and `_image_` denoting image fields.
```
dataset.vectorize()
```
### or
```
dataset.vectorize(fields=["Comments"], create_feature_vector=True)
```
| github_jupyter |
# **Amazon Lookout for Equipment** - Getting started
*Part 6 - Cleanup*
## Initialization
---
This repository is structured as follows:
```sh
. lookout-equipment-demo
|
├── data/
| ├── interim # Temporary intermediate data are stored here
| ├── processed # Finalized datasets are usually stored here
| | # before they are sent to S3 to allow the
| | # service to reach them
| └── raw # Immutable original data are stored here
|
├── getting_started/
| ├── 1_data_preparation.ipynb
| ├── 2_dataset_creation.ipynb
| ├── 3_model_training.ipynb
| ├── 4_model_evaluation.ipynb
| ├── 5_inference_scheduling.ipynb
| └── 6_cleanup.ipynb <<< THIS NOTEBOOK <<<
|
└── utils/
└── lookout_equipment_utils.py
```
### Notebook configuration update
Amazon Lookout for Equipment being a very recent service, we need to make sure that we have access to the latest version of the AWS Python packages. If you see a `pip` dependency error, check that the `boto3` version is ok: if it's 1.17.48 or later (the first version that includes the `lookoutequipment` API), you can discard this error and move forward with the next cell:
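The version gate above boils down to a tuple comparison on the dotted version string; a minimal sketch, where `version_at_least` is a hypothetical helper and not part of boto3:

```python
def version_at_least(version, minimum=(1, 17, 48)):
    """Return True when a dotted version string meets the minimum tuple."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= minimum

# Guarding the rest of the notebook could then look like:
# assert version_at_least(boto3.__version__), "please upgrade boto3 to >= 1.17.48"
```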
```
!pip install --quiet --upgrade boto3 awscli aiobotocore botocore sagemaker tqdm
import boto3
print(f'boto3 version: {boto3.__version__} (should be >= 1.17.48 to include Lookout for Equipment API)')
# Restart the current notebook to ensure we take into account the previous updates:
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
```
### Imports
```
import config
import datetime
import sagemaker
import sys
import time
# Helper functions for managing Lookout for Equipment API calls:
sys.path.append('../utils')
import lookout_equipment_utils as lookout
ROLE_ARN = sagemaker.get_execution_role()
REGION_NAME = boto3.session.Session().region_name
BUCKET = config.BUCKET
PREFIX_TRAINING = config.PREFIX_TRAINING
PREFIX_LABEL = config.PREFIX_LABEL
PREFIX_INFERENCE = config.PREFIX_INFERENCE
DATASET_NAME = config.DATASET_NAME
MODEL_NAME = config.MODEL_NAME
INFERENCE_SCHEDULER_NAME = config.INFERENCE_SCHEDULER_NAME
lookout_client = lookout.get_client(region_name=REGION_NAME)
```
## Deleting resources
---
### Deleting inference scheduler
Using the [**DeleteInferenceScheduler**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_DeleteInferenceScheduler.html) API to delete existing scheduler:
```
# Stopping the scheduler in case it's running:
try:
print('Stopping the scheduler...')
scheduler = lookout.LookoutEquipmentScheduler(
scheduler_name=INFERENCE_SCHEDULER_NAME,
model_name=MODEL_NAME,
region_name=REGION_NAME
)
scheduler.stop()
scheduler.delete()
except Exception as e:
error_code = e.response['Error']['Code']
if (error_code == 'ResourceNotFoundException'):
print(' > Scheduler not found, nothing to do')
```
### Deleting the trained models
Using the [**DeleteModel**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_DeleteModel.html) API to remove the model trained in this tutorial:
```
for model_name in lookout.list_models_for_datasets(model_name_prefix=MODEL_NAME):
    print(f'Deleting model {model_name}...')
    try:
        lookout_client.delete_model(ModelName=model_name)
        print(f'Model "{model_name}" is deleted successfully.')
    except Exception as e:
        error_code = e.response['Error']['Code']
        # If the model is currently in use (e.g. a training is in
        # progress), the deletion fails with a ConflictException:
        if (error_code == 'ConflictException'):
            print(('Model is currently being used (a training might be in '
                   'progress). Wait for the process to be completed and '
                   'retry.'))
```
### Deleting the dataset
Using the [**DeleteDataset**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_DeleteDataset.html) API to remove the dataset:
```
# Let's try to delete this dataset:
try:
lookout_client.delete_dataset(DatasetName=DATASET_NAME)
print(f'Dataset "{DATASET_NAME}" is deleted successfully.')
except Exception as e:
error_code = e.response['Error']['Code']
if (error_code == 'ConflictException'):
        print(('Dataset is used by at least one model; delete the '
               'associated model(s) before deleting this dataset.'))
```
### Cleaning the S3 bucket
Uncomment and run the following cell to remove from the S3 bucket the prefixes used throughout this tutorial for training data, label data and inference data. Stop here instead if you would like to keep this data for further experimentation:
```
# !aws s3 rm s3://$BUCKET/$PREFIX_INFERENCE --recursive
# !aws s3 rm s3://$BUCKET/$PREFIX_TRAINING --recursive
# !aws s3 rm s3://$BUCKET/$PREFIX_LABEL --recursive
```
## Conclusion
---
Use this notebook to clean up all the resources created while running this series of tutorials.
# Convert a SolidMesh into its BoundaryRepresentation
The goal is to transform a volumetric mesh into a model as defined here: https://docs.geode-solutions.com/datamodel
The core of the problem is to identify and extract the topological information from the mesh.
There are two ways to perform this identification:
- from the polyhedra adjacencies;
- from Attribute values on the polyhedra.
## Import modules
You need to import OpenGeode and Geode-Conversion modules.
```
# Fix to better handle import since Python 3.8 on Windows
import os, sys, platform
if sys.version_info >= (3,8,0) and platform.system() == "Windows":
for path in [x.strip() for x in os.environ['PATH'].split(';') if x]:
os.add_dll_directory(path)
import opengeode
import geode_conversion
```
## Conversion from polyhedra adjacencies
In this case, we want to convert the micro-topology, i.e. the adjacency relationships at the polyhedron level, into the model topology, i.e. a set of components (volumes, surfaces, ...) with their connectivity relationships.
```
# Load solid and convert it
solid = opengeode.load_tetrahedral_solid3D("model_as_solid.og_tso3d")
brep_from_solid = geode_conversion.convert_solid_into_brep_from_adjacencies(solid)
opengeode.save_brep(brep_from_solid, "brep_from_solid.og_brep")
# Display information on the model
print(brep_from_solid.nb_corners())
print(brep_from_solid.nb_lines())
print(brep_from_solid.nb_surfaces())
print(brep_from_solid.nb_blocks())
```
## Conversion from Attribute values
In this case, an Attribute is attached to the solid, with one attribute value stored per polyhedron.
Polyhedra with the same value will end up in the same Block in the boundary representation.
From these Blocks, the corresponding boundary surfaces will be generated.
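The grouping rule can be sketched in plain Python with hypothetical attribute values; the actual conversion (including building the boundary surfaces) is done by the Geode-Conversion call shown below:

```python
from collections import defaultdict

# Hypothetical attribute values, one per polyhedron (ids 0..7):
polyhedron_attribute = [0, 0, 1, 1, 1, 2, 2, 0]

blocks = defaultdict(list)
for polyhedron_id, value in enumerate(polyhedron_attribute):
    blocks[value].append(polyhedron_id)   # same attribute value -> same Block

print(dict(blocks))   # {0: [0, 1, 7], 1: [2, 3, 4], 2: [5, 6]}
```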
```
# Load solid and convert it
solid = opengeode.load_tetrahedral_solid3D("model_as_solid.og_tso3d")
brep_from_solid = geode_conversion.convert_solid_into_brep_from_attribute(solid, "attribute_name")
opengeode.save_brep(brep_from_solid, "brep_from_solid.og_brep")
# Display information on the model
print(brep_from_solid.nb_corners())
print(brep_from_solid.nb_lines())
print(brep_from_solid.nb_surfaces())
print(brep_from_solid.nb_blocks())
```
## Conversion from model to solid
Conversely, if you have a model with a volumetric mesh, you can convert it back into a solid.
```
converted_solid = geode_conversion.convert_brep_into_solid( brep_from_solid )
opengeode.save_polyhedral_solid3D(converted_solid, "solid_from_brep.og_pso3d")
```
```
import pandas as pd # To convert data into pandas dataframe
import numpy as np # For data and large type of arrays manipulation
import matplotlib.pyplot as plt #For data visualisation
import seaborn as sns # For data visualisation
import plotly.express as px #For data visualisation
```
# Data Preprocessing
```
# df=pd.read_csv('../Master_Training_File/Zomato.csv') #Convert csv file to Pandas DataFrame
df=pd.read_csv("sample_data/Zomato.csv") #Convert csv file to Pandas DataFrame
df.head(7) #Looking at the head
for col in df.columns:
print(f'Column: {col} has',len(df[col].unique()),'unique Values.') #Seeing the unique values of every column
df.isnull().sum() #Seeing the null values
df.duplicated().sum() # Sum the duplicated row
#Based on the unique and null values, drop serial, url, address, phone, reviews_list and dish_liked
df.drop(['serial','url','address' ,'phone', 'reviews_list','dish_liked'],axis=1,inplace=True)
df.head() #Seeing the head to confirm drop columns
df.info() #Seeing the info to see data types
df.dropna(inplace=True) #Drop all Nan values
df.rate.unique() #Seeing the unique values
df['rate']=df['rate'].str.replace('/5','') # Replace '/5' with '' so the column can be converted to a number (float in this case)
df=df.loc[df.rate!='NEW'] # Keep only rows without the 'NEW' placeholder in the rate column
df['rate']=df['rate'].astype('float64')
df.rate.unique() #Seeing the unique values to confirm
df.rename(columns={'listed_in_type':'type','listed_in_city':'city','approx_cost_for_two_people':'cost'},inplace=True) #Rename the columns for simplicity
df.head() #Seeing the head to confirm the column name changes
df['cost'].unique() #Seeing the unique values for cost
remove_comma= lambda x: x.replace(',','') if isinstance(x, str) else x # lambda to strip ',' from string values (np.str is deprecated, so use isinstance with str)
df.cost=df.cost.apply(remove_comma) # apply remove_comma to strip the thousands separators
df.cost=df.cost.astype('int64')
df.cost.unique() #Seeing the unique values to confirm the above operation
df2=pd.get_dummies(df,columns=['online_order','book_table','type','city'],drop_first=True) #Convert 'online_order','book_table','type','city' columns to 0/1 dummies using get_dummies
df2.head() # To confirm get_dummies function
def ordinal_encoding(df,columns): # function to convert given columns to ordinal encoding
for col in columns:
df[col]=df[col].factorize()[0]
return df
df3=ordinal_encoding(df2,columns=['location','name','rest_type','cuisines','menu_item']) #ordinal encoding the columns
df3.head() #Seeing the head to confirm ordinal_encoding done or not
df3.dtypes #Confirming the datatypes of each column
```
# Regression Analysis and Splitting the Data set
```
df3.describe().T #Seeing the data Statistically to extract some insights from the data
from sklearn.model_selection import train_test_split # Import library To split the whole data into train and test
X=df3.drop(['cost'],axis=1) # Separate the independent features
Y=df3['cost'] # Separate the dependent feature ('cost') from the independent features
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% test size
```
# EDA (Exploratory Data Analysis)
```
plt.figure(figsize=(16,9)) # To see the image somewhat big
sns.heatmap(df3.corr(),annot=True) # plotting a heatmap of the feature correlations, to see how strongly the features correlate with each other
sns.countplot(df['online_order']) # Draw countplot to see online_order feature counts
fig=plt.gcf() # to adjust the figure
fig.set_size_inches(10,10) # to make the figure big
```
Here we can see that more than 60% of people prefer to order online
```
sns.countplot(df['book_table']) # Draw countplot to see book_table feature counts
fig=plt.gcf() # to adjust the figure
fig.set_size_inches(10,10) # to make the figure big
```
Here we can see that more than 75% of people prefer not to book a table
```
sns.countplot(df['city']) # Draw countplot to see city feature counts
fig=plt.gcf() # to adjust the figure
fig.set_size_inches(10,10) # to make the figure big
```
The data we have received is biased towards Banashankari city
```
sns.countplot(df['book_table'],hue=df3['rate']) # Draw countplot to see book_table feature with rate
fig=plt.gcf() # to adjust the figure
fig.set_size_inches(10,10) # to make the figure big
```
The data shown here is very messy, so we take a better approach below
```
y_=pd.crosstab(df3['rate'],df['book_table']) # creating a dataframe of crosstab between rate and book_table
y_.plot(kind='bar',stacked=True,color=['yellow','red']) # creating a bar plot with stacked equal to True
fig=plt.gcf() # to adjust the figure
fig.set_size_inches(10,10) # to make the figure big
```
People who book a table usually give high ratings, while people who do not book give lower ratings on average; this suggests that book_table customers are more satisfied than customers who do not book a table.
```
y_=pd.crosstab(df3['rate'],df['online_order']) # creating a dataframe of crosstab between rate and online_order
y_.plot(kind='bar',stacked=True,color=['yellow','red'],figsize =(15,6)) # creating a bar plot with stacked equal to True
```
Here we see no such relation; listings with and without online ordering are rated roughly equally
```
plt.figure(figsize=(16,9)) #Set the figure size
chart=sns.countplot(x=df['location']) # Countplot with location feature
chart.set_xticklabels(labels=df['location'].unique(),rotation=90) # Setting the rotation of labels
None #To not return xticklabels
y_=pd.crosstab(df3['rate'],df['location']) # creating a dataframe of crosstab between rate and location
y_.plot(kind='bar',stacked=True,figsize =(15,9)) # creating a bar plot with stacked equal to True
px.bar(pd.crosstab(df3['rate'],df['location'])) # Creating this same thing with plotly for better understanding
y_=pd.crosstab(df3['rate'],df['rest_type']) # creating a dataframe of crosstab between rate and rest_type
y_.plot(kind='bar',stacked=True,figsize =(15,9),) # creating a bar plot with stacked equal to True
plt.figure(figsize=(16,9)) #To increase the size of image
sns.countplot(df['type']) # draw countplot
y_=pd.crosstab(df3['rate'],df['type']) # creating a dataframe of crosstab between rate and type
y_.plot(kind='bar',stacked=True,figsize =(15,9),) # creating a bar plot with stacked equal to True
plt.title('Type of services') # Create the title of the figure
px.scatter(data_frame=df3,x=df3['rate'],y=df3['cost']) # Plot scatter plot with plotly for the analysis between rate and cost
plt.figure(figsize=(16,9)) # To increase the size of the image
sns.barplot(df['name'].value_counts()[:20],df['name'].value_counts()[:20].index) # Plotting a barplot of the 20 most common hotels
plt.title('Top 20 hotels in Bengaluru') # Set the title of the figure
```
## Baseline accuracy without hyperparameter tuning or much feature engineering and data preprocessing
```
from sklearn.metrics import r2_score,mean_absolute_error,mean_squared_error #importing different metrics for analysis
def compute_score(y_true,y_pred): # create the function that will compute the r2_score,mean_absolute_error and mean_squared_error
'''This will compute the r2_score,mean_absolute_error and mean_squared_error
args:
y_true: provide the true y label means y_test
y_pred: provide the pred y label'''
r2=r2_score(y_true,y_pred)
mae=mean_absolute_error(y_true,y_pred)
mse=mean_squared_error(y_true,y_pred)
print(f'r2_score is: {r2}\n mean_absolute_error is: {mae}\n mean_squared_error is {mse}')
```
### Seeing the prediction by Linear Models
```
from sklearn.linear_model import LinearRegression # import LinearRegression for training
lr=LinearRegression() # LinearRegression model initialize
lr.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=lr.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.linear_model import Ridge # import RidgeRegression for training
r=Ridge() # Ridge model initialize
r.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=r.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.linear_model import Lasso # import LassoRegression for training
l=Lasso() # Lasso model initialize
l.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=l.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
### Seeing the predictions by Trees
```
from sklearn.ensemble import RandomForestRegressor # import RandomForestRegressor for training
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.tree import DecisionTreeRegressor # DecisionTreeRegressor model initialize
dtr=DecisionTreeRegressor() # DecisionTreeRegressor model initialize
dtr.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=dtr.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.tree import ExtraTreeRegressor # ExtraTreeRegressor model initialize
etr=ExtraTreeRegressor() # ExtraTreeRegressor model initialize
etr.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=etr.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.ensemble import AdaBoostRegressor # AdaBoostRegressor model initialize
ab=AdaBoostRegressor() # AdaBoostRegressor model initialize
ab.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=ab.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
### Seeing the prediction by Support Vector Machines
```
from sklearn.svm import SVR # Support Vector Regressor model initialize
sv=SVR() # SupportVectorRegressor model initialize
sv.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=sv.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
# Result: As seen above, RandomForestRegressor gives the best predictions
## Analysis of Models by doing some feature engineering on this existing data
```
def correlation(dataset, threshold): # Seeing the most correlated features
    '''This will take the dataset and the threshold value for the correlation
    and return the columns that have a correlation greater than the threshold'''
col_corr = set() # Set of all the names of correlated columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i, j]) > threshold: # we are interested in absolute coeff value
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
return col_corr
correlation(dataset=df3,threshold=0.7) # calling the correlation function to get the highly correlated columns
rf.fit(x_train.drop(['city_Bannerghatta Road', 'type_Dine-out'],axis=1),y_train) # Apply random forest after dropping highly correlated features
y_pred=rf.predict(x_test.drop(['city_Bannerghatta Road', 'type_Dine-out'],axis=1)) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
# Extracting the information regarding importance of each feature in the prediction
from sklearn.feature_selection import mutual_info_regression
# determine the mutual information
mutual_info = mutual_info_regression(x_train, y_train)
mutual_info
mutual_info_df=pd.Series(mutual_info) # convert mutual_info to a pandas Series
mutual_info_df.index=x_train.columns # initialize the columns
mutual_info_df.sort_values(ascending=True).plot(kind='barh',figsize=(16,9)) # Draw the figure
```
### Note:
* We converted the 'location','name','rest_type','cuisines','menu_item' features with ordinal encoding, but in reality these categories have no inherent order.
* So this may be a mistake, because when categorical features have no inherent order we should use one-hot encoding instead.
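The contrast between the two encodings can be seen on a tiny example with hypothetical city values:

```python
import pandas as pd

cities = pd.DataFrame({"city": ["BTM", "HSR", "BTM", "Jayanagar"]})

# Ordinal encoding invents an order (BTM=0, HSR=1, Jayanagar=2)
# that these categories do not actually have:
ordinal = [int(c) for c in cities["city"].factorize()[0]]
print(ordinal)                 # [0, 1, 0, 2]

# One-hot encoding keeps the categories unordered, one column each:
one_hot = pd.get_dummies(cities, columns=["city"])
print(list(one_hot.columns))   # ['city_BTM', 'city_HSR', 'city_Jayanagar']
```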
```
dummy=pd.get_dummies(df,columns=['location','name','rest_type','cuisines','menu_item','online_order','book_table','type','city'],drop_first=True)
X=dummy.drop(['cost'],axis=1) # Seperate the independent features
Y=dummy['cost'] # Seperate the dependent ('cost') from independent feature
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.ensemble import ExtraTreesRegressor# ExtraTreesRegressor importing the ExtraTreesRegressor
et=ExtraTreesRegressor() # ExtraTreesRegressor model initialize
et.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=et.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
!pip install category_encoders
'''
We encode these features with category_encoders using a BaseN encoding, because there
are so many categories that one-hot encoding could run into the curse of dimensionality.
'''
import category_encoders as ce
encoder= ce.BaseNEncoder(cols=['location','name','rest_type','cuisines','menu_item','online_order','book_table','type','city'],return_df=True,base=5)
data_encoded=encoder.fit_transform(df)
data_encoded
X=data_encoded.drop(['cost'],axis=1) # Seperate the independent features
Y=data_encoded['cost'] # Seperate the dependent ('cost') from independent feature
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
### Note: Dummy (one-hot) categorical encoding performs better, so we apply it to the whole dataset
## Now applying standardization techniques
```
from sklearn.preprocessing import StandardScaler # Import The StandardScaler
ss=StandardScaler() # Initialize The StandardScaler
x_ss=ss.fit_transform(dummy.drop(['cost'],axis=1)) # fit_transform the StandardScaler with dummy encoded data
ss_df=pd.DataFrame(data=x_ss,columns=dummy.drop(['cost'],axis=1).columns) # Convert the numpy array into dataframe
x_train,x_test,y_train,y_test=train_test_split(ss_df,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
from sklearn.ensemble import RandomForestRegressor # import RandomForestRegressor for training
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.preprocessing import MinMaxScaler # Import The MinMaxScaler
mms=MinMaxScaler() # Initialize The MinMaxScaler
x_mms=mms.fit_transform(dummy.drop(['cost'],axis=1)) # fit_transform the MinMaxScaler with dummy encoded data
mms_df=pd.DataFrame(data=x_mms,columns=dummy.drop(['cost'],axis=1).columns) # Convert the numpy array into dataframe
x_train,x_test,y_train,y_test=train_test_split(mms_df,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
from sklearn.ensemble import RandomForestRegressor # import RandomForestRegressor for training
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.preprocessing import RobustScaler # Import The RobustScaler
rs=RobustScaler() # Initialize The RobustScaler
x_rs=rs.fit_transform(dummy.drop(['cost'],axis=1)) # fit_transform the RobustScaler with dummy encoded data
rs_df=pd.DataFrame(data=x_rs,columns=dummy.drop(['cost'],axis=1).columns) # Convert the numpy array into dataframe
x_train,x_test,y_train,y_test=train_test_split(rs_df,Y,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
from sklearn.ensemble import RandomForestRegressor # import RandomForestRegressor for training
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
### Result:
* Standardization, Min-Max scaling (normalization) and RobustScaler make no noticeable difference either way, so we drop them and go with the original (dummy-encoded) dataset
# Feature Selection
```
dummy_highly_correlated_lst=list(correlation(dataset=dummy,threshold=0.7)) # calling the correlation function (defined in the 'Analysis of Models by doing some feature engineering on this existing data' section) to get the highly correlated columns
len(dummy_highly_correlated_lst)
X_dummy=dummy.drop(['cost'],axis=1) # Separate the independent features
Y_dummy=dummy['cost'] # Separate the dependent feature ('cost') from the independent features
x_train_dummy,x_test_dummy,y_train_dummy,y_test_dummy=train_test_split(X_dummy,Y_dummy,test_size=0.2,random_state=42) #split into x_train,x_test,y_train,y_test with 20% of test size
rf.fit(x_train_dummy.drop(dummy_highly_correlated_lst,axis=1),y_train_dummy) # Apply Random forest after droping high correlated features
y_pred_dummy=rf.predict(x_test_dummy.drop(dummy_highly_correlated_lst,axis=1)) # predict the x_test
compute_score(y_test_dummy,y_pred_dummy) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.feature_selection import SelectFromModel # For feature Selection
from sklearn.linear_model import Lasso # Import Lasso as feature Selection model
sel_features=SelectFromModel(Lasso(alpha=0.005)) # Initialize SelectFromModel along Lasso with 0.005 alpha values
sel_features.fit(X_dummy,Y_dummy) # Fiting the model
X_dummy_lasso_list=list(X_dummy.columns[sel_features.get_support()]) # returns the columns chosen by Lasso
rf.fit(x_train_dummy[X_dummy_lasso_list],y_train_dummy) # Apply random forest on the Lasso-selected features
y_pred_dummy=rf.predict(x_test_dummy[X_dummy_lasso_list]) # predict the x_test
compute_score(y_test_dummy,y_pred_dummy) # computing the r2_score,mean_absolute_error and mean_squared_error
```
### The accuracy metrics improve slightly and the number of columns is reduced, both good signs, so we keep this change.
```
dummy_1=dummy[X_dummy_lasso_list].copy() # Make a copy of this dummy with lasso suggested features.
x_train_dummy_1=x_train_dummy[X_dummy_lasso_list].copy() # Make a copy of this train with lasso suggested features.
x_test_dummy_1=x_test_dummy[X_dummy_lasso_list].copy() # Make a copy of this test with lasso suggested features.
```
### Now see the Accuracy with boosting techniques
```
from sklearn.ensemble import AdaBoostRegressor # AdaBoostRegressor model initialize
ab=AdaBoostRegressor() # AdaBoostRegressor model initialize
ab.fit(x_train_dummy_1,y_train) # fitting the x_train,y_train
y_pred=ab.predict(x_test_dummy_1) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
gb=GradientBoostingRegressor() # GradientBoostingRegressor model initialize
gb.fit(x_train_dummy_1,y_train) # fitting the x_train,y_train
y_pred=gb.predict(x_test_dummy_1) # predict the x_test with the GradientBoosting model (not the AdaBoost one)
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from xgboost import XGBRegressor # importing the XGBRegressor for training
xg=XGBRegressor() # XGBRegressor model initialize
xg.fit(x_train_dummy_1.to_numpy(),y_train.to_numpy()) # fitting the x_train,y_train
y_pred=xg.predict(x_test_dummy_1.to_numpy()) # predict the x_test
compute_score(y_test.to_numpy(),y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.ensemble import RandomForestRegressor # RandomForestRegressor importing the ExtraTreesRegressor
rf=RandomForestRegressor() # RandomForestRegressor model initialize
rf.fit(x_train_dummy_1,y_train) # fitting the x_train,y_train
y_pred=rf.predict(x_test_dummy_1) # predict the x_test
compute_score(y_test,y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
## So, RandomForestRegressor and XGBRegressor give us the best accuracy, so we keep these models for hyperparameter tuning.
```
from sklearn.model_selection import GridSearchCV # importing GridSearchCVfor hyperparameter
# when using hyperthreading, xgboost may become slower
parameters = {'nthread':[4], # Parameters that we want to pass
'objective':['reg:linear'],
'learning_rate': [.03, 0.05, .07], #so called `eta` value
'max_depth': [5, 6, 7],
'min_child_weight': [4],
'silent': [1],
'subsample': [0.7],
'colsample_bytree': [0.7],
'n_estimators': [500]}
grid=GridSearchCV(estimator=xg,param_grid=parameters,cv=5,verbose=3) # initialize the GridSearchCV
grid.fit(x_train_dummy_1.to_numpy(),y_train.to_numpy()) # fitting the GridSearchCV with x_train_dummy_1 and y_train
y_pred=grid.predict(x_test_dummy_1.to_numpy()) # predict the x_test
compute_score(y_test.to_numpy(),y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.model_selection import RandomizedSearchCV # importing RandomizedSearchCV hyperparameter
parameters = { # Parameters that we want to pass
'num_boost_round': [10, 25, 50],
'eta': [0.05, 0.1, 0.3],
'max_depth': [3, 4, 5],
'subsample': [0.9, 1.0],
'colsample_bytree': [0.9, 1.0],
}
random=RandomizedSearchCV(estimator=xg,param_distributions=parameters,cv=5,verbose=3) # initialize the RandomizedSearchCV
random.fit(x_train_dummy_1.to_numpy(),y_train.to_numpy()) # fitting the RandomizedSearchCV with x_train_dummy_1 and y_train
y_pred=random.predict(x_test_dummy_1.to_numpy()) # predict the x_test
compute_score(y_test.to_numpy(),y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
rs=RandomizedSearchCV(estimator=rf,param_distributions=random_grid,cv=5,verbose=3) # initialize the GridSearchCV
rs.fit(x_train_dummy_1.to_numpy(),y_train.to_numpy()) # fitting the GridSearchCV with x_train_dummy_1 and y_train
y_pred=rs.predict(x_test_dummy_1.to_numpy()) # predict the x_test
compute_score(y_test.to_numpy(),y_pred) # computing the r2_score,mean_absolute_error and mean_squared_error
```
1. Recap
==
In the last mission, we explored how to use a simple k-nearest neighbors machine learning model that used just one feature, or attribute, of the listing to predict the rent price. We first relied on the <span style="background-color: #F9EBEA; color:#C0392B">accommodates</span> column, which describes the number of people a living space can comfortably accommodate. Then, we switched to the <span style="background-color: #F9EBEA; color:#C0392B">bathrooms</span> column and observed an improvement in accuracy. While these were good features to become familiar with the basics of machine learning, it's clear that using just a single feature to compare listings doesn't reflect the reality of the market. An apartment that can accommodate 4 guests in a popular part of Washington D.C. will rent for much higher than one that can accommodate 4 guests in a crime-ridden area.
There are 2 ways we can tweak the model to try to improve the accuracy (decrease the RMSE during validation):
- increase the number of attributes the model uses to calculate similarity when ranking the closest neighbors
- increase <span style="background-color: #F9EBEA; color:#C0392B">k</span>, the number of nearby neighbors the model uses when computing the prediction
In this mission, we'll focus on increasing the number of attributes the model uses. When selecting more attributes to use in the model, we need to watch out for columns that don't work well with the distance equation. This includes columns containing:
- non-numerical values (e.g. city or state)
- Euclidean distance equation expects numerical values
- missing values
- distance equation expects a value for each observation and attribute
- non-ordinal values (e.g. latitude or longitude)
- ranking by Euclidean distance doesn't make sense if all attributes aren't ordinal
In the following code screen, we've read the <span style="background-color: #F9EBEA; color:##C0392B">dc_airbnb.csv</span> dataset from the last mission into pandas and brought over the data cleaning changes we made. Let's first look at the first row's values to identify any columns containing non-numerical or non-ordinal values. In the next screen, we'll drop those columns and then look for missing values in each of the remaining columns.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Use the <span style="background-color: #F9EBEA; color:##C0392B">DataFrame.info()</span> method to return the number of non-null values in each column.
```
import pandas as pd
import numpy as np
np.random.seed(1)
dc_listings = pd.read_csv('dc_airbnb.csv')
dc_listings = dc_listings.loc[np.random.permutation(len(dc_listings))]
stripped_commas = dc_listings['price'].str.replace(',', '')
stripped_dollars = stripped_commas.str.replace('$', '')
dc_listings['price'] = stripped_dollars.astype('float')
dc_listings.info()
```
2. Removing features
==
The following columns contain non-numerical values:
- <span style="background-color: #F9EBEA; color:##C0392B">room_type</span>: e.g. **Private room**
- <span style="background-color: #F9EBEA; color:##C0392B">city</span>: e.g. **Washington**
- <span style="background-color: #F9EBEA; color:##C0392B">state</span>: e.g. **DC**
while these columns contain numerical but non-ordinal values:
- <span style="background-color: #F9EBEA; color:##C0392B">latitude</span>: e.g. **38.913458**
- <span style="background-color: #F9EBEA; color:##C0392B">longitude</span>: e.g. **-77.031**
- <span style="background-color: #F9EBEA; color:##C0392B">zipcode</span>: e.g. **20009**
Geographic values like these aren't ordinal, because a smaller numerical value doesn't directly correspond to a smaller value in a meaningful way. For example, the zip code 20009 isn't smaller or larger than the zip code 75023; both are simply unique identifiers. Latitude and longitude value pairs describe a point on a geographic coordinate system, and different distance equations are used in those cases (e.g. the [haversine formula](https://en.wikipedia.org/wiki/Haversine_formula)).
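As a side note, the haversine distance linked above can be sketched in a few lines. This is only an illustration: the two coordinate pairs below are arbitrary points in Washington D.C., not rows from the dataset.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in kilometers between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Two illustrative points in Washington D.C., roughly 2 km apart
print(haversine_km(38.913458, -77.031, 38.8977, -77.0365))
```

This is why treating raw latitude and longitude as ordinal features in a Euclidean distance is misleading: physical distance depends on both coordinates jointly, not on each one separately.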
While we could convert the <span style="background-color: #F9EBEA; color:##C0392B">host_response_rate</span> and <span style="background-color: #F9EBEA; color:##C0392B">host_acceptance_rate</span> columns to be numerical (right now they're object data types and contain the <span style="background-color: #F9EBEA; color:##C0392B">%</span> sign), these columns describe the host and not the living space itself. Since a host could have many living spaces and we don't have enough information to uniquely group living spaces to the hosts themselves, let's avoid using any columns that don't directly describe the living space or the listing itself:
- <span style="background-color: #F9EBEA; color:##C0392B">host_response_rate</span>
- <span style="background-color: #F9EBEA; color:##C0392B">host_acceptance_rate</span>
- <span style="background-color: #F9EBEA; color:##C0392B">host_listings_count</span>
Let's remove all 9 of these columns from the Dataframe.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Remove the 9 columns we discussed above from <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span>:
- 3 containing non-numerical values
- 3 containing numerical but non-ordinal values
- 3 describing the host instead of the living space itself
2. Verify the number of null values in each remaining column
```
import pandas as pd
import numpy as np
np.random.seed(1)
dc_listings = pd.read_csv('dc_airbnb.csv')
dc_listings = dc_listings.loc[np.random.permutation(len(dc_listings))]
stripped_commas = dc_listings['price'].str.replace(',', '')
stripped_dollars = stripped_commas.str.replace('$', '')
dc_listings['price'] = stripped_dollars.astype('float')
columns = ['room_type', 'city', 'state', 'latitude', 'longitude', 'zipcode', 'host_response_rate','host_acceptance_rate','host_listings_count']
dc_listings.drop(columns, inplace=True, axis=1)
dc_listings.info()
```
3. Handling missing values
==
Of the remaining columns, 3 columns have a few missing values (less than 1% of the total number of rows):
- <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">beds</span>
Since the number of rows containing missing values for one of these 3 columns is low, we can select and remove those rows without losing much information. There are also 2 columns that have a large number of missing values:
- <span style="background-color: #F9EBEA; color:##C0392B">cleaning_fee</span> - 37.3% of the rows
- <span style="background-color: #F9EBEA; color:##C0392B">security_deposit</span> - 61.7% of the rows
and we can't handle these easily. We can't just remove the rows containing missing values for these 2 columns because we'd miss out on the majority of the observations in the dataset. Instead, let's remove these 2 columns entirely from consideration.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Drop the <span style="background-color: #F9EBEA; color:##C0392B">cleaning_fee</span> and <span style="background-color: #F9EBEA; color:##C0392B">security_deposit</span> columns from <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span>.
2. Then, remove all rows that contain a missing value for the <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>, <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span>, or <span style="background-color: #F9EBEA; color:##C0392B">beds</span> column from <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span>.
- You can accomplish this by using the [Dataframe method dropna()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html) and setting the <span style="background-color: #F9EBEA; color:##C0392B">axis</span> parameter to **0**.
- Since only the <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>, <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> and <span style="background-color: #F9EBEA; color:##C0392B">beds</span> columns contain any missing values, rows containing missing values in these columns will be removed.
3. Display the null value counts for the updated <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span> Dataframe to confirm that there are no missing values left.
```
dc_listings.drop(['cleaning_fee','security_deposit'], inplace=True, axis=1)
dc_listings = dc_listings.replace('', np.nan)
dc_listings.dropna(how='any', inplace=True)
dc_listings.info()
```
4. Normalize columns
==
Here's what the <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span> Dataframe looks like after all the changes we made:
| accommodates | bedrooms | bathrooms | beds | price | minimum_nights | maximum_nights | number_of_reviews |
|--------------|----------|-----------|------|-------|----------------|----------------|-------------------|
| 2 | 1.0 | 1.0 | 1.0 | 125.0 | 1 | 4 | 149 |
| 2 | 1.0 | 1.5 | 1.0 | 85.0 | 1 | 30 | 49 |
| 1 | 1.0 | 0.5 | 1.0 | 50.0 | 1 | 1125 | 1 |
| 2 | 1.0 | 1.0 | 1.0 | 209.0 | 4 | 730 | 2 |
| 12 | 5.0 | 2.0 | 5.0 | 215.0 | 2 | 1825 | 34 |
You may have noticed that while the <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span>, <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>, <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span>, <span style="background-color: #F9EBEA; color:##C0392B">beds</span>, and <span style="background-color: #F9EBEA; color:##C0392B">minimum_nights</span> columns hover between 0 and 12 (at least in the first few rows), the values in the <span style="background-color: #F9EBEA; color:##C0392B">maximum_nights</span> and <span style="background-color: #F9EBEA; color:##C0392B">number_of_reviews</span> columns span much larger ranges. For example, the <span style="background-color: #F9EBEA; color:##C0392B">maximum_nights</span> column has values as low as 4 and high as 1825, in the first few rows itself. If we use these 2 columns as part of a k-nearest neighbors model, these attributes could end up having an outsized effect on the distance calculations because of the largeness of the values.
For example, 2 living spaces could be identical across every attribute but be vastly different just on the <span style="background-color: #F9EBEA; color:##C0392B">maximum_nights</span> column. If one listing had a <span style="background-color: #F9EBEA; color:##C0392B">maximum_nights</span> value of 1825 and the other a <span style="background-color: #F9EBEA; color:##C0392B">maximum_nights</span> value of 4, because of the way Euclidean distance is calculated, these listings would be considered very far apart because of the outsized effect the largeness of the values had on the overall Euclidean distance. To prevent any single column from having too much of an impact on the distance, we can **normalize** all of the columns to have a mean of 0 and a standard deviation of 1.
Normalizing the values in each columns to the [standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution) (mean of 0, standard deviation of 1) preserves the distribution of the values in each column while aligning the scales. To normalize the values in a column to the standard normal distribution, you need to:
- from each value, subtract the mean of the column
- divide each value by the standard deviation of the column
Here's the mathematical formula describing the transformation that needs to be applied for all values in a column:
$\displaystyle z= \frac{x − \mu}{\sigma}$
where x is a value in a specific column, $\mu$ is the mean of all the values in the column, and $\sigma$ is the standard deviation of all the values in the column. Here's what the corresponding code, using pandas, looks like:
>```python
# Subtract each value in the column by the mean.
first_transform = dc_listings['maximum_nights'] - dc_listings['maximum_nights'].mean()
# Divide each value in the column by the standard deviation.
normalized_col = first_transform / dc_listings['maximum_nights'].std()
```
To apply this transformation across all of the columns in a Dataframe, you can use the corresponding Dataframe methods mean() and std():
>```python
normalized_listings = (dc_listings - dc_listings.mean()) / (dc_listings.std())
```
These methods were written with mass column transformation in mind and when you call <span style="background-color: #F9EBEA; color:##C0392B">mean()</span> or <span style="background-color: #F9EBEA; color:##C0392B">std()</span>, the appropriate column means and column standard deviations are used for each value in the Dataframe. Let's now normalize all of the feature columns in <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span>.
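As a quick sanity check, here's a self-contained sketch (the column values are made up for illustration, not taken from dc_listings) showing that the transformation leaves each column with a mean of roughly 0 and a standard deviation of roughly 1:

```python
import pandas as pd

# Toy frame standing in for dc_listings; values are invented for illustration
df = pd.DataFrame({'maximum_nights': [4, 30, 1125, 730, 1825],
                   'accommodates': [2, 2, 1, 2, 12]})

# Subtract each column's mean and divide by each column's standard deviation
normalized = (df - df.mean()) / df.std()
print(normalized.mean())  # each column mean is ~0
print(normalized.std())   # each column std is ~1
```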
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Normalize all of the feature columns in <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span> and assign the new Dataframe containing just the normalized feature columns to <span style="background-color: #F9EBEA; color:##C0392B">normalized_listings</span>.
2. Add the price column from <span style="background-color: #F9EBEA; color:##C0392B">dc_listings</span> to <span style="background-color: #F9EBEA; color:##C0392B">normalized_listings</span>.
3. Display the first 3 rows in <span style="background-color: #F9EBEA; color:##C0392B">normalized_listings</span>.
```
normalized_listings = (dc_listings - dc_listings.mean()) / (dc_listings.std())
normalized_listings['price'] = dc_listings['price']
normalized_listings.head(3)
```
5. Euclidean distance for multivariate case
==
In the last mission, we trained 2 univariate k-nearest neighbors models. The first one used the <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span> attribute while the second one used the <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> attribute. Let's now train a model that uses **both** attributes when determining how similar 2 living spaces are. Let's refer to the Euclidean distance equation again to see what the distance calculation using 2 attributes would look like:
$\displaystyle d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \ldots + (q_n - p_n)^2}$
Since we're using 2 attributes, the distance calculation would look like:
$\displaystyle d = \sqrt{(accommodates_1 - accommodates_2)^2 + (bathrooms_1 - bathrooms_2)^2}$
To find the distance between 2 living spaces, we need to calculate the squared difference between both <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span> values, the squared difference between both <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> values, add them together, and then take the square root of the resulting sum. Here's what the Euclidean distance between the first 2 rows in <span style="background-color: #F9EBEA; color:##C0392B">normalized_listings</span> looks like:
<img width="600" alt="creating a repo" src="https://drive.google.com/uc?export=view&id=15uoTMT1rzRLx9T8kIbsOWw7HaTmdBP0o">
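The calculation above can also be reproduced by hand. The sketch below uses the same two normalized (accommodates, bathrooms) rows quoted in the scipy example on this screen:

```python
import math

# Normalized (accommodates, bathrooms) values for the first two listings
first_listing = [-0.596544, -0.439151]
second_listing = [-0.596544, 0.412923]

# Sum the squared per-attribute differences, then take the square root
dist = math.sqrt(sum((q - p) ** 2 for q, p in zip(first_listing, second_listing)))
print(dist)  # ~0.852
```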
So far, we've been calculating Euclidean distance ourselves by writing the logic for the equation ourselves. We can instead use the [distance.euclidean()](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.euclidean.html) function from <span style="background-color: #F9EBEA; color:##C0392B">scipy.spatial</span>, which takes in 2 vectors as the parameters and calculates the Euclidean distance between them. The <span style="background-color: #F9EBEA; color:##C0392B">euclidean()</span> function expects:
- both of the vectors to be represented using a **list-like** object (Python list, NumPy array, or pandas Series)
- both of the vectors must be 1-dimensional and have the same number of elements
Here's a simple example:
>```python
from scipy.spatial import distance
first_listing = [-0.596544, -0.439151]
second_listing = [-0.596544, 0.412923]
dist = distance.euclidean(first_listing, second_listing)
```
Let's use the <span style="background-color: #F9EBEA; color:##C0392B">euclidean()</span> function to calculate the Euclidean distance between 2 rows in our dataset to practice.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Calculate the Euclidean distance using only the <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span> and <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> features between the first row and fifth row in <span style="background-color: #F9EBEA; color:##C0392B">normalized_listings</span> using the <span style="background-color: #F9EBEA; color:##C0392B">distance.euclidean()</span> function.
2. Assign the distance value to <span style="background-color: #F9EBEA; color:##C0392B">first_fifth_distance</span> and display using the <span style="background-color: #F9EBEA; color:##C0392B">print</span> function.
```
from scipy.spatial import distance
vector1 = normalized_listings[['accommodates','bathrooms']].iloc[0]
vector2 = normalized_listings[['accommodates', 'bathrooms']].iloc[4]
first_fifth_distance = distance.euclidean(vector1, vector2)
print(first_fifth_distance)
```
6. Introduction to scikit-learn
==
So far, we've been writing functions from scratch to train the k-nearest neighbor models. While this is helpful deliberate practice to understand how the mechanics work, you can be more productive and iterate quicker by using a library that handles most of the implementation. In this screen, we'll learn about the [scikit-learn library](http://scikit-learn.org/), which is the most popular machine learning library in Python. Scikit-learn contains functions for all of the major machine learning algorithms and a simple, unified workflow. Both of these properties allow data scientists to be incredibly productive when training and testing different models on a new dataset.
The scikit-learn workflow consists of 4 main steps:
- instantiate the specific machine learning model you want to use
- fit the model to the training data
- use the model to make predictions
- evaluate the accuracy of the predictions
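On a small synthetic dataset (all values below are generated for illustration, not taken from dc_airbnb), the four steps look like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Synthetic features and target: y is a noisy linear function of X
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 2))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, 100)

knn = KNeighborsRegressor(n_neighbors=5)               # 1. instantiate the model
knn.fit(X[:80], y[:80])                                # 2. fit it to the training rows
predictions = knn.predict(X[80:])                      # 3. predict on held-out rows
rmse = mean_squared_error(y[80:], predictions) ** 0.5  # 4. evaluate the predictions
print(rmse)
```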
We'll focus on the first 3 steps in this screen and the next screen. Each model in scikit-learn is implemented as a [separate class](http://scikit-learn.org/dev/modules/classes.html) and the first step is to identify the class we want to create an instance of. In our case, we want to use the [KNeighborsRegressor class](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor).
Any model that helps us predict numerical values, like listing price in our case, is known as a **regression** model. The other main class of machine learning models is called classification, where we're trying to predict a label from a fixed set of labels (e.g. blood type or gender). The word **regressor** from the class name <span style="background-color: #F9EBEA; color:##C0392B">KNeighborsRegressor</span> refers to the regression model class that we just discussed.
Scikit-learn uses a similar object-oriented style to Matplotlib and you need to instantiate an empty model first by calling the constructor:
>```python
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor()
```
If you refer to the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor), you'll notice that by default:
- <span style="background-color: #F9EBEA; color:##C0392B">n_neighbors:</span> the number of neighbors, is set to **5**
- <span style="background-color: #F9EBEA; color:##C0392B">algorithm:</span> for computing nearest neighbors, is set to **auto**
- <span style="background-color: #F9EBEA; color:##C0392B">p:</span> set to **2**, corresponding to Euclidean distance
Let's set the <span style="background-color: #F9EBEA; color:##C0392B">algorithm</span> parameter to <span style="background-color: #F9EBEA; color:##C0392B">brute</span> and leave the <span style="background-color: #F9EBEA; color:##C0392B">n_neighbors</span> value as **5**, which matches the implementation we wrote in the last mission. If we leave the <span style="background-color: #F9EBEA; color:##C0392B">algorithm</span> parameter set to the default value of <span style="background-color: #F9EBEA; color:##C0392B">auto</span>, scikit-learn will try to use tree-based optimizations to improve performance (which are outside of the scope of this mission):
>```python
knn = KNeighborsRegressor(algorithm='brute')
```
7. Fitting a model and making predictions
==
Now, we can fit the model to the data using the [fit method](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor.fit). For all models, the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method takes in 2 required parameters:
- matrix-like object, containing the feature columns we want to use from the training set.
- list-like object, containing correct target values.
Matrix-like object means that the method is flexible in the input and either a Dataframe or a NumPy 2D array of values is accepted. This means you can select the columns you want to use from the Dataframe and use that as the first parameter to the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method.
If you recall from earlier in the mission, all of the following are acceptable list-like objects:
- NumPy array
- Python list
- pandas Series object (e.g. when selecting a column)
You can select the target column from the Dataframe and use that as the second parameter to the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method:
>```python
# Split full dataset into train and test sets.
train_df = normalized_listings.iloc[0:2792]
test_df = normalized_listings.iloc[2792:]
# Matrix-like object, containing just the 2 columns of interest from training set.
train_features = train_df[['accommodates', 'bathrooms']]
# List-like object, containing just the target column, `price`.
train_target = train_df['price']
# Pass everything into the fit method.
knn.fit(train_features, train_target)
```
When the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method is called, scikit-learn stores the training data we specified within the KNeighborsRegressor instance (<span style="background-color: #F9EBEA; color:##C0392B">knn</span>). If you try passing in data containing missing values or non-numerical values into the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method, scikit-learn will return an error. Scikit-learn contains many such features that help prevent us from making common mistakes.
Now that we've specified the training data we want to use to make predictions, we can use the [predict method](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor.predict) to make predictions on the test set. The <span style="background-color: #F9EBEA; color:##C0392B">predict</span> method has only one required parameter:
- matrix-like object, containing the feature columns from the dataset we want to make predictions on
The number of feature columns you use during both training and testing need to match or scikit-learn will return an error:
>```python
predictions = knn.predict(test_df[['accommodates', 'bathrooms']])
```
The <span style="background-color: #F9EBEA; color:##C0392B">predict()</span> method returns a NumPy array containing the predicted <span style="background-color: #F9EBEA; color:##C0392B">price</span> values for the test set. You now have everything you need to practice the entire scikit-learn workflow.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Create an instance of the [KNeighborsRegressor](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor) class with the following parameters:
- <span style="background-color: #F9EBEA; color:##C0392B">n_neighbors</span>: 5
- <span style="background-color: #F9EBEA; color:##C0392B">algorithm</span>: brute
2. Use the <span style="background-color: #F9EBEA; color:##C0392B">fit</span> method to specify the data we want the k-nearest neighbor model to use. Use the following parameters:
- training data, feature columns: just the <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span> and <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> columns, in that order, from <span style="background-color: #F9EBEA; color:##C0392B">train_df</span>.
- training data, target column: the <span style="background-color: #F9EBEA; color:##C0392B">price</span> column from <span style="background-color: #F9EBEA; color:##C0392B">train_df</span>.
3. Call the <span style="background-color: #F9EBEA; color:##C0392B">predict</span> method to make predictions on:
- the <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span> and <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span> columns from <span style="background-color: #F9EBEA; color:##C0392B">test_df</span>
- assign the resulting NumPy array of predicted price values to <span style="background-color: #F9EBEA; color:##C0392B">predictions</span>.
```
from sklearn.neighbors import KNeighborsRegressor
train_df = normalized_listings.iloc[0:2792]
test_df = normalized_listings.iloc[2792:]
knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')
train_features = train_df[['accommodates', 'bathrooms']]
train_target = train_df['price']
knn.fit(train_features, train_target)
predictions = knn.predict(test_df[['accommodates', 'bathrooms']])
print(predictions)
```
8. Calculating MSE using Scikit-Learn
==
Earlier in this mission, we calculated the MSE and RMSE values using the pandas arithmetic operators to compare each predicted value with the actual value from the <span style="background-color: #F9EBEA; color:##C0392B">price</span> column of our test set. Alternatively, we can instead use the [sklearn.metrics.mean_squared_error function()](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error). Once you become familiar with the different machine learning concepts, unifying your workflow using scikit-learn helps save you a lot of time and avoid mistakes.
The <span style="background-color: #F9EBEA; color:##C0392B">mean_squared_error()</span> function takes in 2 inputs:
- list-like object, representing the true values
- list-like object, representing the predicted values using the model
For this function, we won't show any sample code and will leave it to you to understand the function [from the documentation](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error) itself to calculate the MSE and RMSE values for the predictions we just made.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Use the <span style="background-color: #F9EBEA; color:##C0392B">mean_squared_error</span> function to calculate the MSE value for the predictions we made in the previous screen.
2. Assign the MSE value to <span style="background-color: #F9EBEA; color:##C0392B">two_features_mse</span>.
3. Calculate the RMSE value by taking the square root of the MSE value and assign to <span style="background-color: #F9EBEA; color:##C0392B">two_features_rmse</span>.
4. Display both of these error scores using the <span style="background-color: #F9EBEA; color:##C0392B">print</span> function.
```
from sklearn.metrics import mean_squared_error
two_features_mse = mean_squared_error(test_df['price'], predictions)
two_features_rmse = np.sqrt(two_features_mse)
print('MSE two features:',two_features_mse, '\nRMSE two features:',two_features_rmse)
```
9. Using more features
==
Here's a table comparing the MSE and RMSE values for the 2 univariate models from the last mission and the multivariate model we just trained:
| feature(s) | MSE | RMSE |
|-------------------------|---------|-------|
| accommodates | 18646.5 | 136.6 |
| bathrooms | 17333.4 | 131.7 |
| accommodates, bathrooms | 15660.4 | 125.1 |
As you can tell, the model we trained using both features ended up performing better (lower error score) than either of the univariate models from the last mission. Let's now train a model using the following 4 features:
- <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span>
- <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">number_of_reviews</span>
Scikit-learn makes it incredibly easy to swap the columns used during training and testing. We're going to leave this for you as a challenge to train and test a k-nearest neighbors model using these columns instead. Use the code you wrote in the last screen as a guide.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Create a new instance of the [KNeighborsRegressor class](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor) with the following parameters:
- <span style="background-color: #F9EBEA; color:##C0392B">n_neighbors</span>: 5
- <span style="background-color: #F9EBEA; color:##C0392B">algorithm</span>: brute
2. Fit a model that uses the following columns from our training set (**train_df**):
- <span style="background-color: #F9EBEA; color:##C0392B">accommodates</span>
- <span style="background-color: #F9EBEA; color:##C0392B">bedrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">bathrooms</span>
- <span style="background-color: #F9EBEA; color:##C0392B">number_of_reviews</span>
3. Use the model to make predictions on the test set (**test_df**) using the same columns. Assign the NumPy array of predictions to <span style="background-color: #F9EBEA; color:##C0392B">four_predictions</span>.
4. Use the <span style="background-color: #F9EBEA; color:##C0392B">mean_squared_error()</span> function to calculate the MSE value for these predictions by comparing <span style="background-color: #F9EBEA; color:##C0392B">four_predictions</span> with the price column from **test_df**. Assign the computed MSE value to <span style="background-color: #F9EBEA; color:##C0392B">four_mse</span>.
5. Calculate the RMSE value and assign to <span style="background-color: #F9EBEA; color:##C0392B">four_rmse</span>.
6. Display <span style="background-color: #F9EBEA; color:##C0392B">four_mse</span> and <span style="background-color: #F9EBEA; color:##C0392B">four_rmse</span> using the print function.
```
from sklearn.neighbors import KNeighborsRegressor
features = ['accommodates', 'bedrooms', 'bathrooms', 'number_of_reviews']
knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')
knn.fit(train_df[features], train_df['price'])
four_predictions = knn.predict(test_df[features])
four_mse = mean_squared_error(test_df['price'], four_predictions)
four_rmse = four_mse ** (1/2)
print('MSE four features:', four_mse,'\nRMSE four features:', four_rmse)
```
10. Using all features
==
So far so good! As we increased the features the model used, we observed lower MSE and RMSE values:
| feature(s) | MSE | RMSE |
|------------------------------------------------------|---------|-------|
| accommodates | 18646.5 | 136.6 |
| bathrooms | 17333.4 | 131.7 |
| accommodates, bathrooms | 15660.4 | 125.1 |
| accommodates, bathrooms, bedrooms, number_of_reviews | 13320.2 | 115.4 |
Let's take this to the extreme and use all of the potential features. We should expect the error scores to decrease since so far adding more features has helped do so.
<br>
<div class="alert alert-info">
<b>Exercise Start.</b>
</div>
**Description**:
1. Use all of the columns, except for the <span style="background-color: #F9EBEA; color:##C0392B">price</span> column, to train a k-nearest neighbors model using the same parameters for the <span style="background-color: #F9EBEA; color:##C0392B">KNeighborsRegressor</span> class as the ones from the last few screens.
2. Use the model to make predictions on the test set and assign the resulting NumPy array of predictions to <span style="background-color: #F9EBEA; color:##C0392B">all_features_predictions</span>.
3. Calculate the MSE and RMSE values and assign to <span style="background-color: #F9EBEA; color:##C0392B">all_features_mse</span> and <span style="background-color: #F9EBEA; color:##C0392B">all_features_rmse</span> accordingly.
4. Use the **print** function to display both error scores.
```
features = ['accommodates', 'bedrooms', 'bathrooms', 'number_of_reviews', 'minimum_nights','maximum_nights','beds']
knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')
knn.fit(train_df[features], train_df['price'])
all_features_predictions = knn.predict(test_df[features])
all_features_mse = mean_squared_error(test_df['price'], all_features_predictions)
all_features_rmse = all_features_mse ** (1/2)
print('MSE four features:',all_features_mse, '\nRMSE four features:', all_features_rmse)
```
11. Next steps
==
Interestingly enough, the RMSE value actually increased to **125.1** when we used all of the features available to us. This means that selecting the right features is important and that using more features doesn't automatically improve prediction accuracy. We should re-phrase the lever we mentioned earlier from:
- increase the number of attributes the model uses to calculate similarity when ranking the closest neighbors
to:
- select the relevant attributes the model uses to calculate similarity when ranking the closest neighbors
The process of selecting features to use in a model is known as **feature selection**.
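One simple way to perform feature selection with k-nearest neighbors is to score candidate feature subsets by held-out RMSE and keep the best one. Below is a hedged sketch on synthetic data — the column names merely mimic the listings data and are not the mission's dataset:

```python
import numpy as np
import pandas as pd
from itertools import combinations
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the listings data: 'noise' is deliberately irrelevant.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    'accommodates': rng.integers(1, 9, n),
    'bedrooms': rng.integers(1, 5, n),
    'noise': rng.normal(size=n),
})
df['price'] = 40 * df['accommodates'] + 25 * df['bedrooms'] + rng.normal(0, 10, n)

train, test = df.iloc[:300], df.iloc[300:]
candidates = ['accommodates', 'bedrooms', 'noise']

# Brute-force search: score every non-empty feature subset by test RMSE.
scores = {}
for k in range(1, len(candidates) + 1):
    for subset in combinations(candidates, k):
        knn = KNeighborsRegressor(n_neighbors=5, algorithm='brute')
        knn.fit(train[list(subset)], train['price'])
        preds = knn.predict(test[list(subset)])
        scores[subset] = mean_squared_error(test['price'], preds) ** 0.5

best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

On this toy data the informative columns should win; with real data you would score subsets on a validation set rather than the final test set.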
In this mission, we prepared the data to be able to use more features, trained a few models using multiple features, and evaluated the different performance tradeoffs. We explored how using more features doesn't always improve the accuracy of a k-nearest neighbors model. In the next mission, we'll explore another knob for tuning k-nearest neighbor models - the k value.
| github_jupyter |
```
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
tf.debugging.set_log_device_placement(True)
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[1], 'GPU')
tf.config.experimental.set_virtual_device_configuration(
gpus[1],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=3072),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=3072)])
print(len(gpus))
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(logical_gpus))
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_valid_scaled = scaler.transform(
x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_test_scaled = scaler.transform(
x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
def make_dataset(images, labels, epochs, batch_size, shuffle=True):
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
if shuffle:
dataset = dataset.shuffle(10000)
dataset = dataset.repeat(epochs).batch(batch_size).prefetch(50)
return dataset
batch_size = 128
epochs = 100
train_dataset = make_dataset(x_train_scaled, y_train, epochs, batch_size)
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
padding='same',
activation='relu',
input_shape=(28, 28, 1)))
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
padding='same',
activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
padding='same',
activation='relu'))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
padding='same',
activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
padding='same',
activation='relu'))
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
padding='same',
activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer = "sgd",
metrics = ["accuracy"])
model.summary()
history = model.fit(train_dataset,
steps_per_epoch = x_train_scaled.shape[0] // batch_size,
epochs=10)
```
| github_jupyter |
# Problem set 5
### Copied database from /blue/bsc4452/share/Class_Files
```
# Import only the modules needed from sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy import MetaData
from sqlalchemy import Table, Column
from sqlalchemy import Integer, String
from sqlalchemy import sql, select, join, desc
from sqlalchemy import func
# Create a Engine object which is our handle into the database.
engine = create_engine('sqlite:///world.sqlite')
# Connect to the database
conn = engine.connect()
# Read the metadata from the existing database.
# Since the database already exists and has tables defined, we can create Python objects based on these automatically.
DBInfo=MetaData(engine)
```
## Question 1 (5 points):
### What is the country with the latest year (most recent) of independence?
```
# Auto-create the country object based on the metadata read into the DBInfo.
country=Table('country', DBInfo, autoload=True)
# SELECT Name, IndepYear FROM country ORDER BY IndepYear DESC LIMIT 1;
query=select([country.c.Name, country.c.IndepYear])\
.order_by(desc(country.c.IndepYear))\
.limit(1)
result = conn.execute(query)
# Print results
for row in result:
print(row)
```
## Question 2 (5 points):
### There are several countries that have become independent since the country in your answer to question 1, add one to the database.
```
### Question 2 (5 points):
## Make sure country code is available
# query=select([country.c.Code, country.c.Name]).where(country.c.Code.like('SE%'))
# result = conn.execute(query)
# for row in result:
# print(row)
## Check to see what options there are for filling in data
# print(country.insert())
## Add insert for Serbia
my_insert_serbia=country.insert().values(Code='SER', Name='Serbia', Continent='Europe', Region='Eastern Europe', IndepYear='2006', Population='6963764')
#print(my_insert_serbia)
result = conn.execute(my_insert_serbia)
## Check to make sure it was added
query=select([country.c.Code, country.c.Name, country.c.Continent, country.c.Region, country.c.IndepYear, country.c.Population]).where(country.c.Name.like('Se%'))
result = conn.execute(query)
for row in result:
print(row)
```
## Question 3 (5 points):
#### For the country added in question 2, find 2 cities to add to the cities table of the database.
```
# Auto-create the city object based on the metadata read into the DBInfo.
city=Table('city', DBInfo, autoload=True)
## Check to see what options there are for filling in data
#print(city.insert())
## Add insert for Belgrade, Serbia (largest population)
my_insert_belgrade=city.insert().values(Name='Belgrade', CountryCode='SER', Population='1659440')
#print(my_insert_belgrade)
result = conn.execute(my_insert_belgrade)
## Add insert for Crna Trava, Serbia (smallest population)
my_insert_crnatrva=city.insert().values(Name='Crna Trava', CountryCode='SER', Population='1663')
#print(my_insert_crnatrva)
result = conn.execute(my_insert_crnatrva)
## Check to make sure the inserts were added
query=select([city.c.Name, city.c.CountryCode, city.c.Population]).where(city.c.CountryCode.like('SER'))
result = conn.execute(query)
for row in result:
print(row)
```
## Question 4 (5 points):
### Using the LifeExpectancy data in the country table on the y-axis, plot this data against some other value.
```
# Import what is needed
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Get data for dataframe
query=select([country.c.Name, country.c.Continent, country.c.IndepYear, country.c.LifeExpectancy, country.c.Population, country.c.GNP])
result = conn.execute(query)
## Make pandas dataframe
df=pd.read_sql(query, conn, index_col='Name')
print(df)
## Plot
x = df['Continent']
y = df['LifeExpectancy']
plt.bar(x, y)
plt.xlabel("Continent", labelpad=20)
plt.ylabel("LifeExpectancy", labelpad=20)
plt.title("LifeExpectancy vs Continent", y=1.015, fontsize=22);
plt.show()
```
## Grad student extra credit (5 points, sorry undergrads, only question 4 counts as extra credit):
### Plot LifeExpectancy vs the ratio of the total population of all the cities in the country divided by the total population of the country. This is an approximation of the % urban population in the country.
```
## New query for all the cities, country code, and pop
query=select([country.c.Code, country.c.Name, country.c.Population,country.c.LifeExpectancy, city.c.CountryCode, func.sum(city.c.Population)]).group_by(city.c.CountryCode).select_from(country.join(city, city.c.CountryCode == country.c.Code))
result = conn.execute(query)
## Make pandas dataframe
df=pd.read_sql(query, conn, index_col='Name')
## Make new column to get % urban population
UP = df["sum_1"] / df["Population"]
df["UrbanPopulation"] = UP
print(df)
## Plot
x = df['UrbanPopulation']
y = df['LifeExpectancy']
plt.scatter(x, y)
plt.xlabel("%UrbanPopulation", labelpad=20)
plt.ylabel("LifeExpectancy", labelpad=20)
plt.title("LifeExpectancy vs %UrbanPopulation", y=1.015, fontsize=22);
plt.show()
```
| github_jupyter |
```
#convert
```
# babilim.model.layers.roi_ops
> Operations for region of interest extraction.
```
#export
from babilim.core.annotations import RunOnlyOnce
from babilim.core.module_native import ModuleNative
#export
def _convert_boxes_to_roi_format(boxes):
"""
Convert rois into the torchvision format.
:param boxes: The roi boxes as a native tensor[B, K, 4].
:return: The roi boxes in the format that roi pooling and roi align in torchvision require. Native tensor[B*K, 5].
"""
import torch
concat_boxes = boxes.view((-1, 4))
ids = torch.full_like(boxes[:, :, :1], 0)
for i in range(boxes.shape[0]):
ids[i, :, :] = i
ids = ids.view((-1, 1))
rois = torch.cat([ids, concat_boxes], dim=1)
return rois
#export
class RoiPool(ModuleNative):
def __init__(self, output_size, spatial_scale=1.0):
"""
Performs Region of Interest (RoI) Pool operator described in Fast R-CNN.
Creates a callable object, when calling you can use these Arguments:
* **features**: (Tensor[N, C, H, W]) input tensor
* **rois**: (Tensor[N, K, 4]) the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from.
* **return**: (Tensor[N, K, C, output_size[0], output_size[1]]) The feature maps crops corresponding to the input rois.
Parameters to RoiPool constructor:
:param output_size: (Tuple[int, int]) the size of the output after the cropping is performed, as (height, width)
:param spatial_scale: (float) a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
"""
super().__init__()
self.output_size = output_size
self.spatial_scale = spatial_scale
@RunOnlyOnce
def _build_pytorch(self, features, rois):
pass
def _call_pytorch(self, features, rois):
from torchvision.ops import roi_pool as _roi_pool
torchvision_rois = _convert_boxes_to_roi_format(rois)
result = _roi_pool(features, torchvision_rois, self.output_size, self.spatial_scale)
# Fix output shape
N, C, _, _ = features.shape
result = result.view((N, -1, C, self.output_size[0], self.output_size[1]))
return result
@RunOnlyOnce
def _build_tf(self, features, rois):
# TODO implement
raise NotImplementedError()
def _call_tf(self, features, rois):
# TODO implement
raise NotImplementedError()
from babilim.core.tensor import Tensor
import numpy as np
roi = RoiPool(output_size=(7, 4))
tensor = Tensor(data=np.zeros((2,3,24,24), dtype=np.float32), trainable=False)
rois = Tensor(data=np.array([[[0,0,12,12],[4,7,6,23]], [[0,0,12,12], [4,7,6,23]]], dtype=np.float32), trainable=False)
print(rois.shape)
print(tensor.shape)
result = roi(tensor, rois)
print(result.shape)
#export
class RoiAlign(ModuleNative):
def __init__(self, output_size, spatial_scale=1.0):
"""
Performs Region of Interest (RoI) Align operator described in Mask R-CNN.
Creates a callable object, when calling you can use these Arguments:
* **features**: (Tensor[N, C, H, W]) input tensor
* **rois**: (Tensor[N, K, 4]) the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from.
* **return**: (Tensor[N, K, C, output_size[0], output_size[1]]) The feature maps crops corresponding to the input rois.
Parameters to RoiAlign constructor:
:param output_size: (Tuple[int, int]) the size of the output after the cropping is performed, as (height, width)
:param spatial_scale: (float) a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
"""
super().__init__()
self.output_size = output_size
self.spatial_scale = spatial_scale
@RunOnlyOnce
def _build_pytorch(self, features, rois):
pass
def _call_pytorch(self, features, rois):
from torchvision.ops import roi_align as _roi_align
torchvision_rois = _convert_boxes_to_roi_format(rois)
# :param aligned: (bool) If False, use the legacy implementation.
# If True, pixel shift it by -0.5 for align more perfectly about two neighboring pixel indices.
# This version in Detectron2
result = _roi_align(features, torchvision_rois, self.output_size, self.spatial_scale, aligned=True)
# Fix output shape
N, C, _, _ = features.shape
result = result.view((N, -1, C, self.output_size[0], self.output_size[1]))
return result
@RunOnlyOnce
def _build_tf(self, features, rois):
# TODO implement
raise NotImplementedError()
def _call_tf(self, features, rois):
# TODO implement
raise NotImplementedError()
from babilim.core.tensor import Tensor
import numpy as np
roi = RoiAlign(output_size=(7, 4))
tensor = Tensor(data=np.zeros((2,3,24,24), dtype=np.float32), trainable=False)
rois = Tensor(data=np.array([[[0,0,12,12],[4,7,6,23]], [[0,0,12,12], [4,7,6,23]]], dtype=np.float32), trainable=False)
print(rois.shape)
print(tensor.shape)
result = roi(tensor, rois)
print(result.shape)
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import sys
import pathlib
sys.path.append(str(pathlib.Path().cwd().parent))
from typing import Tuple
from load_dataset import Dataset
from plotting import plot_ts
dataset = Dataset('../data/dataset/')
```
### What are the drawbacks of fully connected networks?
* they cannot capture temporal patterns in the context of previous points (an architectural limitation)
* fixed input size
* fixed output size
### Where are recurrent networks applicable in time series analysis?
* a large number of exogenous features with a complex nonlinear dependence on the target series
* a very complex temporal structure with overlapping seasonal and cyclic patterns
* series with a frequently changing pattern, or with many anomalies
* when non-fixed input and output lengths are needed (for example, multivariate series where we want to provide a different number of lags for each component)
### Data preparation notes - the data must be normalized, otherwise the network will converge poorly and train slowly.
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
data = np.array(range(0, 100, 10)).reshape(-1, 1)
scaler = MinMaxScaler((0, 1))
scaler.fit(data)
transformed = scaler.transform(data)
transformed
inverse = scaler.inverse_transform(transformed)
inverse
```
### Data preparation notes - handling sequences of different lengths.
```
from keras.preprocessing.sequence import pad_sequences
sequences = [
[1, 2, 3, 4],
[3, 4, 5],
[5, 6],
[3]
]
pad_sequences(sequences, padding='pre')
pad_sequences(sequences, padding='post')
pad_sequences(sequences, maxlen=2)
pad_sequences(sequences, maxlen=2, truncating='post')
```
### Which LSTM architectures are of interest for time series?
* one-to-one - predicting the next point from the previous one - no
* one-to-many - predicting the next N points from the previous one - no
* many-to-one - one-step-ahead prediction - to some extent
* many-to-many - predicting a vector of the next m points from the previous n points - of greatest interest
### A simple LSTM network
```
from keras.models import Sequential
from keras.layers import LSTM, Dense
ts = dataset['daily-min-temperatures.csv']
ts.plot(figsize=(15, 5))
def transform_into_matrix(ts: pd.Series, num_lags: int) -> Tuple[np.array]:
"""
Transforms time series into lags matrix to allow
applying supervised learning algorithms
Parameters
------------
ts
Time series to transform
num_lags
Number of lags to use
Returns
--------
train, test: np.arrays of shapes (ts-num_lags, num_lags), (num_lags,)
"""
ts_values = ts.values
data = {}
for i in range(num_lags + 1):
data[f'lag_{num_lags - i}'] = np.roll(ts_values, -i)
lags_matrix = pd.DataFrame(data)[:-num_lags]
lags_matrix.index = ts.index[num_lags:]
return lags_matrix.drop('lag_0', axis=1).values, lags_matrix['lag_0'].values
NUM_LAGS = 14
X, y = transform_into_matrix(ts, NUM_LAGS)
X[0]
X = X.reshape((X.shape[0], X.shape[1], 1))
X[0]
split_idx = int(len(X)*0.8)
X_train, X_test = X[:split_idx], X[split_idx:]
y_train, y_test = y[:split_idx], y[split_idx:]
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(NUM_LAGS, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=100)
y_pred = model.predict(X_test)
pd.Series(y_test.flatten())[-50:].plot()
pd.Series(y_pred.flatten())[-50:].plot()
### this result is actually not much better than a naive forecast
from sklearn.metrics import mean_squared_error as mse
mse(y_test.flatten(), y_pred.flatten())
```
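For comparison, a naive (persistence) forecast simply reuses the previous observation as the prediction; a one-step-ahead model should be judged against this baseline. A minimal sketch on a toy series (not the temperatures series from this notebook):

```python
import numpy as np
from sklearn.metrics import mean_squared_error as mse

# Toy series standing in for a real time series.
toy = np.sin(np.linspace(0, 20, 200)) + np.random.default_rng(1).normal(0, 0.1, 200)

y_true = toy[1:]    # targets: each point from the second onward
y_naive = toy[:-1]  # persistence forecast: the previous point

print(mse(y_true, y_naive))
```

If the LSTM's test MSE is close to this number, the extra model complexity is not paying off.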
### Stacked LSTM
#### Add extra hidden layers to the network (use return_sequences=True) and compare the quality
```
model = Sequential()
# your code here
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=100, verbose=0)
y_pred = model.predict(X_test)
pd.Series(y_test.flatten())[-50:].plot()
pd.Series(y_pred.flatten())[-50:].plot()
```
### Bidirectional LSTM
#### Make the LSTM layer bidirectional using the Bidirectional wrapper layer and compare the quality
```
from keras.layers import Bidirectional
model = Sequential()
# your code here
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10, verbose=0)
y_pred = model.predict(X_test)
```
### Seq2Seq LSTM - when we need to predict several points ahead
#### Prepare the data
```
from typing import Tuple
def transform_ts_into_matrix(ts: pd.Series, num_lags_in: int, num_lags_out: int) -> Tuple[np.array, np.array]:
"""
This function should slide a window over the time series and, for each
num_lags_in points taken as features, collect the next num_lags_out points as the target.
Returns two np.array arrays: X_train and y_train, respectively
"""
sequence = ts.values
X, y = list(), list()
i = 0
outer_idx = num_lags_out
while outer_idx < len(sequence):
inner_idx = i + num_lags_in
outer_idx = inner_idx + num_lags_out
X_, y_ = sequence[i:inner_idx], sequence[inner_idx:outer_idx]
X.append(X_)
y.append(y_)
i += 1
return np.array(X), np.array(y)
# get X and y using the previous function and split into train and test
NUM_LAGS_IN = 28
NUM_LAGS_OUT = 7
X, y = transform_ts_into_matrix(ts, NUM_LAGS_IN, NUM_LAGS_OUT)
X = X.reshape((X.shape[0], X.shape[1], 1))
split_idx = int(len(X)*0.8)
X_train, X_test = X[:split_idx], X[split_idx:]
y_train, y_test = y[:split_idx], y[split_idx:]
# define the encoder
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(NUM_LAGS_IN, 1)))
# add an intermediate layer transforming the encoder output into the decoder input
from keras.layers import RepeatVector
model.add(RepeatVector(NUM_LAGS_OUT))
# define the decoder
model.add(LSTM(50, activation='relu', return_sequences=True))
# define the output layer - the output dimensionality is produced with an extra TimeDistributed layer
from keras.layers import TimeDistributed
model.add(TimeDistributed(Dense(1)))
```
#### Train the model and get predictions on the test set
```
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10, verbose=0)
y_pred = model.predict(X_test)
```
### An example with a multivariate series.
```
ts_multi = pd.read_csv('../data/stability_index.csv', index_col='timestamp', parse_dates=True)
ts_multi.fillna(ts_multi.mean(), axis=0, inplace=True)
def transform_multi_ts_into_matrix(ts: pd.DataFrame, num_lags: int):
"""
This function should slide a window over the time series
and collect as features X an np.array of shape (len(ts)-num_lags, num_lags, n_dims),
and as y an np.array of shape (len(ts)-num_lags, n_dims),
where n_dims is the dimensionality of the multivariate series.
That is, for every component of the time series we take the num_lags previous points of each component
as features, and all components of the current point as the target
"""
sequence = ts.values
X, y = list(), list()
i = 0
end_i = num_lags
while end_i < len(sequence):
seq_x, seq_y = sequence[i:end_i], sequence[end_i]
X.append(seq_x)
y.append(seq_y)
i += 1
end_i = i + num_lags
return np.array(X), np.array(y)
NUM_LAGS = 14
N_DIMS = ts_multi.shape[1]
X, y = transform_multi_ts_into_matrix(ts_multi, NUM_LAGS)
X[0].shape
# define the encoder
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(NUM_LAGS, N_DIMS)))
# add an intermediate layer transforming the encoder output into the decoder input
from keras.layers import RepeatVector
model.add(RepeatVector(N_DIMS))
# define the decoder
model.add(LSTM(50, activation='relu', return_sequences=True))
# define the output layer - the output dimensionality is produced with an extra TimeDistributed layer
from keras.layers import TimeDistributed
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=50)
```
| github_jupyter |
# Milestone2 Document
## Feedback
- Introduction: A nice introduction!
- Background -0.5: It would be hard for users to understand automatic differentiation, computational graph, and evaluation trace if you don't give the corresponding illustrations in the Background section
**Revision: provided a concrete example of evaluation trace and computational graph**
- How to use -0.5: didn't show how the users can get the package from online. Is AutodiffCST the name of a python file or the package? Please give different names to avoid confusion.
**Revision: added instructions for installation, and change the python file name to AD.py**
- Implementation: Using a tree as the core data structure sounds new. It would be better if you could explain it with more details.
**Revision: Changed core data structure to AD object, and updated the implementation part accordingly.**
## Section 1: Introduction
This package autodiffCST implements automatic differentiation. It can be used to automatically differentiate functions via forward mode and reverse mode, depending on the user's choice. It also provides an option of performing second order differentiation.
Differentiation, namely the process of finding the derivatives of functions, is very prevalent in many areas of science and engineering. It can often be used to find the extrema of functions with single or multiple variables. With the advance of technology, more complicated functions and larger datasets are developed. The difficulty of performing differentiation has greatly increased, and we are more dependent on computers to take derivatives. Nowadays, we have three major ways of performing differentiation: symbolic, numerical and automatic (algorithmic) differentiation. We will focus on automatic differentiation for the rest of this document.
## Section 2: Background
### 2.1 An Overview of Auto Differentiation
Automatic differentiation (AD) uses algorithms to efficiently and accurately evaluate the derivatives of numeric functions. It has the advantage of avoiding symbolic manipulation of functions while reaching an accuracy close to machine precision. Applications of automatic differentiation include, but are not limited to, astronomy, dynamic systems, numerical analysis research, and optimization in finance and engineering.
The idea behind AD is to break a function down into a sequence of elementary operations and functions whose derivatives are easily attained, and then sequentially apply the chain rule to evaluate the derivatives of these operations and thus compute the derivative of the whole function.
The two main methods of performing automatic differentiation are forward mode and reverse mode. Some other AD algorithms implement a combination of forward mode and reverse mode, but this package implements them separately.
To better understand automatic differentiation, it is necessary to get familiar with some key concepts that are used in the algorithms of AD. We will use the rest of this section to briefly introduce them.
### 2.2 Elementary operations and functions
The algorithm of automatic differentiation breaks functions down into elementary arithmetic operations and elementary functions. Elementary arithmetic operations include addition, subtraction, multiplication, division and raising to a power (we can also treat taking roots of a number as raising it to a power less than $1$). Elementary functions include the exponential, logarithmic, and trigonometric functions. All of the operations and functions mentioned here have derivatives that are easy to compute, so we use them as elementary steps in the evaluation trace of AD.
### 2.3 The Chain Rule
The chain rule can be used to calculate the derivative of nested functions, such as those of the form $u(v(t))$. For this function, the derivative of $u$ with respect to $t$ is $$\dfrac{\partial u}{\partial t} = \dfrac{\partial u}{\partial v}\dfrac{\partial v}{\partial t}.$$
A more general form of chain rule applies when a function $h$ has several arguments, or when its argument is a vector. Suppose we have $h = h(y(t))$ where $y \in R^n$ and $t \in R^m $. Here, $h$ is the combination of $n$ functions, each of which has $m$ variables. Using the chain rule, the derivative of $h$ with respect to $t$, now called the gradient of $h$, is
$$ \nabla_{t}h = \sum_{i=1}^{n}{\frac{\partial h}{\partial y_{i}}\nabla y_{i}\left(t\right)}.$$
The chain rule enables us to break complicated and nested functions down into layers and operations. Our automatic differentiation algorithm sequentially uses the chain rule to compute the derivatives of functions.
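As a minimal illustration of how the chain rule is applied sequentially in forward mode, here is a toy dual-number sketch in Python. It is purely illustrative and is not AutodiffCST's actual implementation:

```python
import math

# A dual number carries a value and a derivative; arithmetic on duals
# propagates derivatives by the elementary rules.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def sin(x):
    # chain rule: d/dt sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

x = Dual(2.0, 1.0)   # seed the derivative: dx/dx = 1
f = sin(x) * x + x   # f(x) = x*sin(x) + x
print(f.val, f.der)  # f'(x) = sin(x) + x*cos(x) + 1, evaluated at x = 2
```

Each elementary operation knows only its own derivative rule; the chain rule composes them automatically as the expression is evaluated.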
### 2.4 Evaluation Trace and Computational Graph
These two concepts are the core of our automatic differentiation algorithm. Since they are so important and can be created at the same time, creating them would be the first thing to do when a function is inputted into the algorithm.
The evaluation trace tracks each layer of operations while evaluating the input function and its derivative. At each step the evaluation trace holds the traces, elementary operations, numerical values, elementary derivatives and partial derivatives.
The computational graph is a graphical visualization of the evaluation trace. It holds the traces and elementary operations of the steps, connecting them via arrows pointing from input to output for each step. The computational graph helps us to better understand the structure of the function and its evaluation trace. Forward mode performs the operations from the start to the end of the graph or evaluation trace. Reverse mode performs the operations backwards, applying the chain rule each time it determines the derivative of a trace.
Here, we provide an example of an evaluation trace and a computational graph for the function $f(x,y)=\exp(-(\sin(x)-\cos(y))^2)$, with derivatives evaluated at $(\pi/2,\pi/3)$.
Evaluation trace:
|Trace|Elementary Function| Current Value |Elementary Function Derivative| $\nabla_x$ | $\nabla_y$ |
| :---: | :-----------: | :-------: | :-------------: | :----------: | :-----------: |
| $x_{1}$ | $x_{1}$ | $\frac{\pi}{2}$ | $\dot{x}_{1}$ | $1$ | $0$ |
| $y_{1}$ | $y_{1}$ | $\frac{\pi}{3}$ | $\dot{y}_{1}$ | $0$ | $1$ |
| $v_{1}$ | $sin(x_{1})$ | $1$ | $cos(x_{1})\dot{x}_{1}$ | $0$ | $0$ |
| $v_{2}$ | $cos(y_{1})$ | $0.5$ | $-sin(y_{1})\dot{y}_{1}$| $0$ | $-0.866$ |
| $v_{3}$ | $v_{1}-v_{2}$ | $0.5$ | $\dot{v}_{1}-\dot{v}_{2}$| $0$ | $0.866$ |
| $v_{4}$ | $v_{3}^2$ | $0.25$ | $2v_{3}\dot{v}_{3}$ | $0$ | $0.866$ |
| $v_{5}$ | $-v_{4}$ | $-0.25$| $-\dot{v}_{4}$ | $0$ | $-0.866$ |
| $v_{6}$ | $exp(v_{5})$ | $0.779$| $exp(v_{5})\dot{v}_{5}$ | $0$ | $-0.6746$ |
| $f$ | $v_{6}$ | $0.779$| $\dot{v}_{6}$ | $0$ | $-0.6746$ |
Computational graph:

## Section 3: How to Use AutodiffCST
**Installation**
Our package is for Python 3 only. To install AutodiffCST, you need to have pip3 installed first. If you don't, please install pip3 following these instructions https://pip.pypa.io/en/stable/installing/.
Then, you could install this package by running
```pip3 install AutodiffCST``` from the command line.
An alternative is to clone our repository by running ```git clone https://github.com/auto-differentiaters-in-CST/cs107-FinalProject.git``` from the command line and then ```cd <AD directory>```(directory name will be determined later), ```pip install -r requirements.txt```.
**User Guide**
After installation, users could import this package with ```from AutodiffCST import AD``` and ```from autodiffcst import admath```. These two modules allow users to differentiate functions built from most common mathematical operations.
Then, they could simply initialize an AD object by giving the point at which they wish to differentiate. They could also try other supplementary features, as in the code demo provided below.
``` python
# import modules
import numpy as np
from AutodiffCST import AD as ad
from autodiffcst import admath as admath
# base case: initialize AD object with scalar values
x = ad(5, tag = "x") # initialize AD object called "x" with the value 5
y = ad(3, tag = "y") # initialize AD object called "y" with the value 3
f = x*y + 1 # build a function with AD objects, the function will also be an AD object
print(f) # print 16.0
dfdx = f.diff(direction = "x") # returns the derivative with respect to x
print(dfdx) # print 3
jacobian = ad.jacobian(f) # returns the gradient vector of f
print(jacobian) # print [3, 5]
f2 = x + admath.sin(y) # build a function with AD objects
print(f2) # print AD(value: 5.141120008059867, derivatives: {'x': 1, 'y': -0.9899924966004454})
dfdy = f2.diff(direction = "y") # returns the derivative with respect to y
print(dfdy) # print -0.9899924966004454
jacobian2 = ad.jacobian(f2) # returns the gradient vector of f2
print(jacobian2) # print [1, -0.9899924966004454]
# These are the most important features for our forward AD. Would add more later ...
```
## Section 4: Software Organization
The home directory of our software package would be structured as follows.
- LICENSE
- README.md
- requirements.txt
- docs/
* quickstart_tutotial.md
* model_documentation.md
* testing_guidelines.md
* concepts_explanation.md
* references.md
- setup.py
- autodiffcst/
* \_\_init\_\_.py
* AD.py
* admath.py
- tests/
* test_core.py
* test_extension.py
- TravisCI.yml
- CodeCov.yml
Specifically, the README file would contain a general package description and the necessary information for users to navigate the subdirectories. Besides, we would place our documentation, testing guidelines, a simple tutorial and relevant references in the docs directory. Moreover, to package our model with PyPI, we need to include setup.py and the autodiffcst directory, which stores the source code of our model. Furthermore, we would put a collection of test cases in the tests directory. Last but not least, we would include TravisCI.yml and CodeCov.yml in our home directory for integrated testing.
In this package, we plan to use the following public modules.
- Modules for mathematical calculation:
* Numpy: we would use it for matrix operations and basic math functions and values, such as sin, cos, $\pi$, $e$, etc.
- Modules for testing:
* pydoc
* doctest
* Pytest
- Other modules:
* sys
* setuptools: we would use it for publishing our model with PyPI.
To distribute our package, we would use PyPI so that users could easily install the package with *pip install*.
After installing the package, users can use ```from autodiffcst import AD``` and ```from autodiffcst import admath``` to import the package. These two modules are where the core of this package resides:
* AD: defines the AD object class that we use to perform automatic differentiation and overloads the basic math-operation dunder methods for AD. It also provides two core functions on AD objects: diff() and jacobian().
* admath: defines functions that perform elementary math operations on AD objects, including those that cannot be implemented by overloading dunder methods, such as logarithms and trigonometric functions.
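As a hedged sketch of the admath idea above — not the package's actual implementation — an elementary function such as sin can update an AD object's value and apply the chain rule to each stored partial derivative. The minimal AD class here is a stand-in invented for the example:

```python
import math

# Hypothetical minimal stand-in for the AD class described above
# (not the package's actual implementation).
class AD:
    def __init__(self, val, tag, der=1):
        self.val = val
        self.ders = {tag: der}

# An admath-style elementary function: apply sin to the value and the
# chain rule (multiply by cos) to each stored partial derivative.
def sin(x):
    result = AD(math.sin(x.val), None)
    result.ders = {tag: math.cos(x.val) * d for tag, d in x.ders.items()}
    return result

y = AD(math.pi / 2, "y")
s = sin(y)
print(round(s.val, 6))        # 1.0
print(round(s.ders["y"], 6))  # 0.0  (cos(pi/2) is ~0)
```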
To better organize our software, we plan to use PyScaffold and Sphinx. The former could help us set up the project, while the latter would polish our documentation.
## Section 5: Implementation
Our main data structure is the AD object, which has a value, a derivative, and a tag as attributes. Our main class is the AD class, and we would probably add several subclasses for our extensions.
In the AD class, we would have the following methods:
- a constructor
``` python
def __init__(self, val, tags, ders=1, mode="forward"):
    self.val = val
    if isinstance(tags, list) and isinstance(ders, dict):
        self.tags = tags
        self.ders = ders
    else:
        self.tags = [tags]
        self.ders = {tags: ders}
    self.mode = mode
```
- overloaded dunder methods as follows:
``` python
__add__
__sub__
__pow__
__mul__
__mod__
__div__
__iadd__
```
  and more basic operations according to https://www.python-course.eu/python3_magic_methods.php
- a diff method, which takes in a direction, and returns the derivative of the function.
``` python
def diff(self, direction=None):
    if direction in self.ders:
        return self.ders[direction]
    else:
        return 0
```
- a gradient method, which takes in a vector of directions, and returns a vector of the partial derivatives at each direction.
- a jacobian method, which takes in a vector of AD functions and a vector of directions, and returns the jacobian matrix.
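A minimal sketch of how such a jacobian helper might assemble the gradient from the stored partial derivatives; the `ders` dictionary and its shape are assumptions based on the constructor above, and the FakeAD class exists only to make the snippet self-contained:

```python
# Hypothetical sketch: assemble a gradient vector from an AD object's
# stored partial derivatives, in the order of the requested directions.
def jacobian(func, directions):
    # func.ders maps a variable tag to its partial derivative;
    # directions the function does not depend on contribute 0.
    return [func.ders.get(d, 0) for d in directions]

class FakeAD:  # stand-in exposing only the assumed `ders` attribute
    def __init__(self, ders):
        self.ders = ders

f = FakeAD({"x": 5, "y": 3})
print(jacobian(f, ["x", "y", "z"]))  # [5, 3, 0]
```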
In our implementation, we would use external dependencies such as NumPy and math. To handle elementary functions, we would allow users to enter functions that Python can parse, factor an input function into a series of basic operations/functions (such as sin, sqrt, log, and exp), and use if-statements to check functions and return their symbolic derivatives. These operations are handled in admath.py. The functions in admath take an AD object as input and perform the corresponding operations by updating the object's values and derivatives.
# Future Features
1. Differentiate a list of functions. Our package can currently handle one function with multiple variables. In the future, we plan to take a list of functions as input and output its Jacobian accordingly. Using a NumPy array as the data structure for the Jacobian would be ideal, so we will need to change the implementation of our current jacobian method.
2. Higher-order derivatives. A starting point would be allowing second-order derivatives to be taken on our AD objects and returning the correct Jacobian matrix accordingly. Note that this cannot be achieved by simply applying diff() to an AD object twice, since both the Jacobian matrix and the datatype would differ. We would need to store the values of the second derivatives of our AD object at each elementary step in the evaluation trace. Then we would need another function (possibly named second_diff()) that works like diff() but returns the second derivatives of the AD object. The jacobian() function would be modified accordingly: it would take an optional argument (possibly second_order=False by default, and second_order=True for second derivatives) signaling that it should return the Jacobian containing the second-order derivatives of the AD object.
Backup extensions:
3. Backward mode. Right now our mode for automatic differentiation defaults to forward mode, because we have not implemented backward mode yet. We would need new functions that use the AD object class to implement backward mode. To keep track of the traces, we would create a trace table, possibly using a NumPy array, in the function that runs backward mode.
4. Newton's method. We would like to use our AD package to solve meaningful problems. One way to achieve this is to use it in an implementation of Newton's method. This will be a script that imports our AD package to calculate the derivatives in Newton's method.
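As an illustration of how such a script could work, here is a hedged sketch of Newton's method with the derivative supplied as a plain callable; the finished package would instead obtain the derivative from diff():

```python
# Sketch of Newton's method for root finding; the derivative is passed
# as a plain callable here, where the finished package would obtain it
# from the AD object's diff() instead.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Find sqrt(2) as the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(round(root, 6))  # 1.414214
```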
# Building Timeline
- Nov.4: Finish M2A and M2B
- Nov.7: Finish basic dunder methods for one variable
- Nov.14: Finish Test Suite
- Nov.19: Submit M2
```
import pandas as pd
from joblib import dump, load
import os
#set up directory
#os.chdir()
#Drug dic
#open file
df_drugs=pd.read_csv(r"C:\Users\mese4\Documents\The Data incubator\project\Drugmap\drugbank vocabulary.csv", encoding='ISO-8859-1')
synonyms = []
drug_names = df_drugs['Common_name'].tolist()
drug_names = [item.lower() for item in drug_names]
#get synonyms into a list
for row in df_drugs['Synonyms']:
row=str(row).lower()
words = row.split(' | ')
synonyms.append(words)
#add names to synonyms
for x, y in zip(synonyms, drug_names):
x.append(y)
#make tuple list
drug_lists= list(zip(drug_names, synonyms))
#make dict
drug_dic = dict(drug_lists)
#remove 'nan'
drug_dic = {k:[elem for elem in v if elem != 'nan' ] for k,v in drug_dic.items()}
#search engine
keys = [key for key, value in drug_dic.items() if 'cetuximab' in value]
drug_dic
#Save/open
dump(drug_dic, 'drug_dic.joblib')
drug_dic = load('drug_dic.joblib')
#Gene dic
df_genes=pd.read_csv(r"C:\Users\mese4\Documents\The Data incubator\project\genes_dataset\G-SynMiner_miner-geneHUGO.tsv",sep='\t')
gene_tag = df_genes['symbol'].tolist()
gene_tag = [item.lower() for item in gene_tag]
gene_name = df_genes['name'].tolist()
gene_name = [item.lower() for item in gene_name]
#split synonyms into a list
synonyms_gene = []
for row in df_genes['alias_symbol']:
row=str(row).lower()
words = row.split('|')
synonyms_gene.append(words)
#split alias_name into a list
synonyms_alias_name = []
for row in df_genes['alias_name']:
row=str(row).lower()
words = row.split('|')
synonyms_alias_name.append(words)
#split prev_symbol into a list
synonyms_prev_symbol = []
for row in df_genes['prev_symbol']:
row=str(row).lower()
words = row.split('|')
synonyms_prev_symbol.append(words)
#all_combined = list(zip(gene_tag, gene_name, synonyms_gene,synonyms_alias_name,synonyms_prev_symbol ))
#add tags
for x, y in zip(synonyms_gene, gene_tag):
x.append(y)
#add name
for x, y in zip(synonyms_gene, gene_name):
x.append(y)
#add alias_name
for x, y in zip(synonyms_gene, synonyms_alias_name):
x.append(y[0])
#add synonyms_prev_symbol
for x, y in zip(synonyms_gene, synonyms_prev_symbol):
x.append(y[0])
#make tuple list
gene_lists= list(zip(gene_tag, synonyms_gene))
#make dict
gene_dic = dict(gene_lists)
#remove 'nan'
gene_dic = {k:[elem for elem in v if elem != 'nan' ] for k,v in gene_dic.items()}
#search engine
keys = [key for key, value in gene_dic.items() if 'lorsdh' in value]
#save open
dump(gene_dic, 'gene_dic.joblib')
gene_dic = load('gene_dic.joblib')
[key for key, value in gene_dic.items() if 'nrf2' in value]
```
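The `#search engine` lookups above scan every dictionary entry on each query. For repeated queries, a reverse index (synonym → canonical key) built once would make each lookup constant-time. A minimal sketch, assuming the same dictionary shape as drug_dic and gene_dic (the demo entries here are made up for illustration):

```python
# Build a reverse index mapping each synonym to its canonical key(s).
# The dictionary shape matches drug_dic / gene_dic above; the demo
# entries are invented for illustration.
def build_reverse_index(dic):
    index = {}
    for key, synonyms in dic.items():
        for syn in synonyms:
            index.setdefault(syn, []).append(key)
    return index

demo_dic = {"cetuximab": ["erbitux", "cetuximab"], "aspirin": ["asa", "aspirin"]}
rev = build_reverse_index(demo_dic)
print(rev.get("erbitux"))  # ['cetuximab']
```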
# Introduction to reproducibility and power issues
## Some Definitions
* $H_0$ : null hypothesis: the hypothesis that the effect we are testing for is null
* $H_A$ : alternative hypothesis : Not $H_0$, so there is some signal
* $T$ : The random variable that takes value "significant" or "not significant"
* $T_S$ : Value of $T$ when the test is significant (i.e. $T = T_S$)
* $T_N$ : Value of $T$ when the test is not significant (i.e. $T = T_N$)
* $\alpha$ : false positive rate - the probability of rejecting $H_0$ when $H_0$ is true (and therefore $H_A$ is false)
* $\beta$ : false negative rate - the probability of accepting $H_0$ when $H_A$ is true (i.e. $H_0$ is false)
power = $1-\beta$
where $\beta$ is the risk of a *false negative*
So, to compute power, *we need to know the risk of a false negative*, i.e. the risk of failing to show a significant effect when there is some signal (the null is false).
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import scipy.stats as sst
from sympy import symbols, Eq, solve, simplify, lambdify, init_printing, latex
init_printing(use_latex=True, order='old')
from IPython.display import HTML
# Code to make HTML for a probability table
def association_table(assocs, title):
latexed = {'title': title}
for key, value in assocs.items():
latexed[key] = latex(value)
latexed['s_total'] = latex(assocs['t_s'] + assocs['f_s'])
latexed['ns_total'] = latex(assocs['t_ns'] + assocs['f_ns'])
return """<h3>{title}</h3>
<TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$
<TR><TH>$H_A$<TD>${t_s}$<TD>${t_ns}$
<TR><TH>$H_0$<TD>${f_s}$<TD>${f_ns}$
<TR><TH>Total<TD>${s_total}$<TD>${ns_total}$
</TABLE>""".format(**latexed)
from sympy.abc import alpha, beta # get alpha, beta symbolic variables
assoc = dict(t_s = 1 - beta, # H_A true, test significant = true positives
t_ns = beta, # true, not significant = false negatives
f_s = alpha, # false, significant = false positives
f_ns = 1 - alpha) # false, not sigificant = true negatives
HTML(association_table(assoc, 'Not considering prior'))
```
## How do we compute power ?
### What is the effect ?
$$\hspace{3cm}\mu = \mu_1 - \mu_2$$
### What is the standardized effect ? (eg Cohen's d)
$$\hspace{3cm}d = \frac{\mu_1 - \mu_2}{\sigma} = \frac{\mu}{\sigma}$$
### "Z" : Effect accounting for the sample size
$$\hspace{3cm}Z = \frac{\mu}{\sigma / \sqrt{n}}$$
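As a quick numeric check of the two formulas above (the values are chosen for illustration only):

```python
import numpy as np

# Illustrative values only: two group means, a common sigma, and n.
mu1, mu2, sigma, n = 1.0, 0.5, 1.0, 16
mu = mu1 - mu2                  # raw (non-standardized) effect
d = mu / sigma                  # Cohen's d
Z = mu / (sigma / np.sqrt(n))   # effect accounting for sample size
print(d, Z)  # 0.5 2.0
```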
### Cohen's d value:
```
# print some cohen values
# %pylab inline
muse = (.05, .1,.2,.3,.4,.5);
sigmas = np.linspace(1.,.5,len(muse))
cohenstr = ["For sigma = %3.2f and m = %3.2f Cohen d = %3.2f" %(sig,mu,coh)
for (sig,mu,coh) in zip(sigmas,muse, np.asarray(muse)/sigmas)]
for s in cohenstr:
print(s)
```
We have to estimate the effect $\mu$, say under some normal noise. Our statistic will be:
$$
t = \frac{\hat{\mu}}{\hat{\sigma_{\mu}}} = \frac{\hat{\mu}}{\hat{{SE}_{\mu}}}
$$
Power is the probability that the observed t is greater than $t_{.05}$, computing $t_{.05}$ by assuming that we are under the null.
So, we compute $t_{.05}$, and want to compute $P(t > t_{.05})$.
To compute this, __we need the distribution of our measured t - therefore we need to know the signal / effect size !__
Let's assume we know this and call it $t_{nc}$, and $F_{nc}$ for the cumulative distribution (more on this in the appendix).
$\mbox{Power} = 1 - \beta = P(t > t_{.05}) = 1 - F_{nc}(t_{.05})$
__This power will depend on 4 parameters :__
$$ \mbox{The non standardized effect : } \mu$$
$$\mbox{The standard deviation of the data : } \sigma$$
$$\mbox{The number of subjects : } n$$
$$\mbox{The type I risk of error : } \alpha$$
And on the distribution of the statistic under the alternative hypothesis. Here, we assume our original data are normals, and the $t = \frac{\hat{\mu}}{\hat{{SE}_{\mu}}}$ statistics follows a non central t distribution with non centrality parameter
$$\theta = \mu \sqrt{n}/\sigma$$
and $n-1$ degrees of freedom.
```
import scipy.stats as sst
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division
# plot power as a function of n : define a little function that
# takes n, mu, sigma, alpha, and report n.
# Optionally plot power as a function of n
from matplotlib.patches import Polygon
def stat_power(n=16, mu=1., sigma=1., alpha=0.05, plot=False, xlen=500):
"""
This function computes the statistical power of an analysis assuming a normal
distribution of the data with a one sample t-test
Parameters:
-----------
n: int,
The number of sample in the experiment
mu: float
The mean of the alternative
sigma: float
The standard deviation of the alternative
plot: bool
Plot something
alpha: float
The risk of error (type I)
xlen: int
Number of points for the display
Returns:
--------
float
The statistical power for this number of sample, mu, sigma, alpha
"""
df = n-1
theta = np.sqrt(n)*mu/sigma
t_alph_null = sst.t.isf(alpha, df)
ncrv = sst.nct(df, theta)
spow = 1 - ncrv.cdf(t_alph_null)
if plot:
# define the domain of the plot
norv = sst.norm(0, 1.)
bornesnc = ncrv.isf([0.001, .999])
bornesn = norv.isf([0.001, .999])
# because the nc t will have higher max borne, and the H0 normal will be on the left
x = np.linspace(np.min(bornesn), np.max(bornesnc), xlen)
t_line = np.zeros_like(x)
# define the line
x_t_line = np.argmin((x-t_alph_null)**2)
y_t_line = np.max(np.hstack((ncrv.pdf(x), norv.pdf(x))))
t_line[x_t_line] = y_t_line
fig, ax = plt.subplots()
plt.plot(x, ncrv.pdf(x), 'g', x, norv.pdf(x), 'b', x, t_line, 'r')
# Make the shaded region
# http://matplotlib.org/xkcd/examples/showcase/integral_demo.html
a = x[x_t_line]; b = np.max(bornesnc);
ix = np.linspace(a,b)
iy = ncrv.pdf(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
ax.set_xlabel("t-value - H1 centred on " + r"$\theta $" + " = %4.2f; " %theta
+ r"$\mu$" + " = %4.2f" %mu);
ax.set_ylabel("Probability(t)");
ax.set_title('H0 and H1 sampling densities, '
+ 'power = %3.2f' %spow + ' n = %d' %n)
plt.show()
return spow
n = 30
mu = .5
sigma = 1.
pwr = stat_power(n, mu, sigma, plot=True, alpha=0.05, xlen=500)
print ("Power = ", pwr, " Z effect (Non centrality parameter) = ", mu*np.sqrt(n)/sigma)
n = 12
mu = .5
sigma = 1.
pwr = stat_power(n, mu, sigma, plot=True, alpha=0.05, xlen=500)
print("Power = ", pwr, " Z effect (Non centrality parameter): ", mu*np.sqrt(n)/sigma)
```
### Plot power as a function of the number of subject in the study
```
def pwr_funcofsubj(muse, nses, alpha=.05, sigma=1):
"""
muse: array of mu
nses: array of number of subjects
alpha: float, type I risk
sigma: float, data sigma
"""
mstr = [ 'd='+str(m) for m in np.asarray(muse)/sigma]
lines=[]
for mu in (muse):
pw = [stat_power(n, mu, sigma, alpha=alpha, plot=False) for n in nses]
(pl,) = plt.plot(nses, pw)
lines.append(pl)
plt.legend( lines, mstr, loc='upper right', shadow=True)
plt.xlabel(" Number of subjects ")
plt.ylabel(" Power ");
return None
mus = (.05, .1,.2,.3,.4,.5, .6);
#nse = range(70, 770, 20)
nse = range(7, 77, 2)
alph = 1.e-3
pwr_funcofsubj(mus, nse, alph)
```
### Here - play with n
```
mus = (.05,.1,.2,.3,.4,.5,.6);
nse = range(10, 330, 20)
#nse = range(7, 77, 2)
alph = 0.001
pwr_funcofsubj(mus, nse, alph)
```
### Here - play with $\alpha$
```
mus = (.05, .1,.2,.3,.4,.5, .6);
nse = range(10, 770, 20)
#nse = range(7, 77, 2)
alph = 0.05/30000
pwr_funcofsubj(mus, nse, alph)
```
### What is the effect size of APOE on the hippocampal volume ?
The authors report a p-value of 6.63e-10 from a sample of 733 subjects.
```
n01 = sst.norm(0,1.)
z = n01.isf(6.6311e-10)
d = n01.isf(6.6311e-10)/np.sqrt(733)
print("z = %4.3f d = %4.3f " %(z,d))
```
```
import os
import torch
import numpy as np
import pickle
import matplotlib.pyplot as plt
from torch.optim.lr_scheduler import LambdaLR, StepLR
#@title
import gzip
import html
import os
from functools import lru_cache
import ftfy
import regex as re
@lru_cache()
def bytes_to_unicode():
"""
Returns list of utf-8 byte and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings.
This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
And avoids mapping to whitespace/control characters the bpe code barfs on.
"""
bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
cs = bs[:]
n = 0
for b in range(2**8):
if b not in bs:
bs.append(b)
cs.append(2**8+n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
"""Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path: str = "clip/bpe_simple_vocab_16e6.txt.gz"):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
merges = merges[1:49152-256-2+1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v+'</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + ( token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token+'</w>'
while True:
bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word)-1 and word[i+1] == second:
new_word.append(first+second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens):
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
# from clip import clip
# from clip import model
# model, preprocess = clip.load("ViT-B/32", device='cuda', jit=False)
model = torch.jit.load("../checkpoints/model.pt").cuda().eval()
input_resolution = model.input_resolution.item()
context_length = model.context_length.item()
vocab_size = model.vocab_size.item()
print("Model parameters:", f"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}")
print("Input resolution:", input_resolution)
print("Context length:", context_length)
print("Vocab size:", vocab_size)
```
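As a quick illustration of `get_pairs` from the tokenizer code above, restated here so the snippet is self-contained:

```python
def get_pairs(word):
    """Return set of adjacent symbol pairs in a word (same logic as above)."""
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

print(sorted(get_pairs(("l", "o", "w", "</w>"))))
# [('l', 'o'), ('o', 'w'), ('w', '</w>')]
```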
# Step 1: Load LAD Dataset
## Option 1 Load Dataset from Scratch
```
file_root = '/media/hxd/82231ee6-d2b3-4b78-b3b4-69033720d8a8/MyDatasets/LAD'
data_root = file_root + '/LAD_annotations/'
img_root = file_root + '/LAD_images/'
# load attributes list
attributes_list_path = data_root + 'attribute_list.txt'
fsplit = open(attributes_list_path, 'r', encoding='UTF-8')
lines_attribute = fsplit.readlines()
fsplit.close()
list_attribute = list()
list_attribute_value = list()
for each in lines_attribute:
tokens = each.split(', ')
list_attribute.append(tokens[0])
list_attribute_value.append(tokens[1])
# load label list
label_list_path = data_root + 'label_list.txt'
fsplit = open(label_list_path, 'r', encoding='UTF-8')
lines_label = fsplit.readlines()
fsplit.close()
list_label = dict()
list_label_value = list()
for each in lines_label:
tokens = each.split(', ')
list_label[tokens[0]]=tokens[1]
list_label_value.append(tokens[1])
from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
from PIL import Image
preprocess = Compose([
Resize((224, 224), interpolation=Image.BICUBIC),
CenterCrop((224, 224)),
ToTensor()
])
image_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073]).cuda()
image_std = torch.tensor([0.26862954, 0.26130258, 0.27577711]).cuda()
# load all the labels, attributes, images data from the LAD dataset
attributes_per_class_path = data_root + 'attributes.txt'
fattr = open(attributes_per_class_path, 'r', encoding='UTF-8')
lines_attr = fattr.readlines()
fattr.close()
images = list()
attr = list()
labels = list()
for each in lines_attr:
tokens = each.split(', ')
labels.append(list_label[tokens[0]])
img_path = tokens[1]
image = preprocess(Image.open(os.path.join(img_root, img_path)).convert("RGB"))
images.append(image)
attr_r = list(map(int, tokens[2].split()[1:-1]))
attr.append([val for i,val in enumerate(list_attribute_value) if attr_r[i] == 1])
# Dump processed image and text to local
with open('../checkpoints/data_img_raw.pkl', 'wb') as file:
pickle.dump(images, file)
with open('../checkpoints/data_txt_raw.pkl', 'wb') as file:
pickle.dump({'label': labels, 'att': attr}, file)
```
## Option 2 Load LAD Dataset from Saved Files
```
with open('../checkpoints/data_img_raw.pkl', 'rb') as file:
images = pickle.load(file)
with open('../checkpoints/data_txt_raw.pkl', 'rb') as file:
b = pickle.load(file)
labels = b['label']
attr = b['att']
```
# Step 2: Obtain the Image and Text Features
## Option 1 Load CLIP to obtain features
```
# normalize images
image_input = torch.tensor(np.stack(images)).cuda()
image_input -= image_mean[:, None, None]
image_input /= image_std[:, None, None]
# Convert labels to tokens
tokenizer_label = SimpleTokenizer()
text_tokens = [tokenizer_label.encode(desc) for desc in labels]
sot_token = tokenizer_label.encoder['<|startoftext|>']
eot_token = tokenizer_label.encoder['<|endoftext|>']
text_inputs_label = torch.zeros(len(text_tokens), model.context_length, dtype=torch.long)
for i, tokens in enumerate(text_tokens):
tokens = [sot_token] + tokens + [eot_token]
text_inputs_label[i, :len(tokens)] = torch.tensor(tokens)
text_inputs_label = text_inputs_label.cuda()
# Convert attributes to tokens
tokenizer_att = SimpleTokenizer()
text_tokens = [[tokenizer_att.encode(desc) for desc in att] for att in attr]
sot_token = tokenizer_att.encoder['<|startoftext|>']
eot_token = tokenizer_att.encoder['<|endoftext|>']
text_inputs_att = list()
for j, tokens_img in enumerate(text_tokens):
text_input = torch.zeros(len(tokens_img), model.context_length, dtype=torch.long)
for i, tokens in enumerate(tokens_img):
tokens = [sot_token] + tokens + [eot_token]
text_input[i, :len(tokens)] = torch.tensor(tokens)
text_inputs_att.append(text_input.cuda())
# Load CLIP model
with torch.no_grad():
image_features = model.encode_image(image_input).float()
with torch.no_grad():
label_fea = model.encode_text(text_inputs_label.cuda()).float()
with torch.no_grad():
text_feature = list()
for txt in text_inputs_att:
if len(txt) == 0:
text_feature.append(torch.empty(0, 512).cuda())
else:
text_feature.append(model.encode_text(txt).float())
image_features /= image_features.norm(dim=-1, keepdim=True)
label_fea /= label_fea.norm(dim=-1, keepdim=True)
text_feature = torch.stack([torch.mean(item,0) for item in text_feature])
text_feature /= text_feature.norm(dim=-1, keepdim=True)
# Save image and text features
with open('../checkpoints/data_txt_feature.pkl', 'wb') as file:
pickle.dump({'label': label_fea, 'att': text_feature}, file)
with open('../checkpoints/data_img_feature.pkl', 'wb') as file:
pickle.dump(image_features, file)
```
# Option 2 Load saved image and text features
```
with open('../checkpoints/data_txt_feature.pkl', 'rb') as file:
b = pickle.load(file)
label_fea = b['label']
text_feature = b['att']
with open('../checkpoints/data_img_feature.pkl', 'rb') as file:
image_features = pickle.load(file)
```
# Construct the dataloader for classification
```
from torch.utils.data import Dataset
from sklearn import preprocessing
class Dataset(Dataset):
def __init__(self, image_features, text_feature, labels, data_indx):
self.image_features = image_features
self.text_feature = text_feature
self.labels = labels
self.data_indx = data_indx
# self.imgs = image_input
# self.attr = attr
def __len__(self):
return len(self.image_features)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
sample = {'image': self.image_features[idx],
'attribute': self.text_feature[idx],
'label': self.labels[idx],
'data_indx': self.data_indx[idx]
# 'imgs': self.imgs[idx],
# 'attr': self.attr[idx]
}
return sample
le = preprocessing.LabelEncoder()
le.fit(labels)
class_list = list(le.classes_)
labels_list = torch.tensor(le.transform(labels)).cuda()
attr_ = [';'.join(item) for item in attr]
data_indx = list(range(4600))
# dataset = Dataset(image_features, text_feature, labels_list, torch.tensor(np.stack(images)).cuda(), attr_)
dataset = Dataset(image_features, text_feature, labels_list, data_indx)
train_set, test_set = torch.utils.data.random_split(dataset,[4600-500,500])
trainloader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=True)
import torch.nn as nn
from torch.utils.data import DataLoader
# defining the model architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.linear_layers = nn.Sequential(
nn.Linear(1024, 512),
nn.Linear(512, 230)
)
# Defining the forward pass
def forward(self, x, t):
con = torch.cat((x, t), 1)
out = self.linear_layers(con)
return out
model = Net().cuda()
error = nn.CrossEntropyLoss().cuda()
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)
num_epochs = 30
# Lists for visualization of loss and accuracy
epoch_list = []
train_accuracy_list = []
train_loss_list = []
valid_accuracy_list = []
valid_loss_list = []
PATH = "../checkpoints/cnn.pth"
for epoch in range(num_epochs):
correct = 0
running_loss = 0
model.train()
for data in trainloader:
# Transfering images and labels to GPU if available
# image_batch, text_batch, label_batch, im_batch, att_batch = data['image'], data['attribute'], data['label'], data['imgs'], data['attr']
image_batch, text_batch, label_batch, idx_batch = data['image'], data['attribute'], data['label'], data['data_indx']
# Forward pass
outputs = model(image_batch, text_batch)
#CrossEntropyLoss expects floating point inputs and long labels.
loss = error(outputs, label_batch)
# Initializing a gradient as 0 so there is no mixing of gradient among the batches
optimizer.zero_grad()
#Propagating the error backward
loss.backward()
# Optimizing the parameters
optimizer.step()
predictions = torch.max(outputs, 1)[1].cuda()
correct += (predictions == label_batch).sum()
running_loss += loss.item()
train_loss_list.append(float(running_loss) / float(len(trainloader.dataset)))
train_accuracy_list.append(float(correct) / float(len(trainloader.dataset)))
# test on validation set
correct = 0
running_loss = 0
with torch.no_grad():
for data in testloader:
image_batch, text_batch, label_batch, idx_batch = data['image'], data['attribute'], data['label'], data['data_indx']
outputs = model(image_batch, text_batch)
predictions = torch.max(outputs, 1)[1].cuda()
correct += (predictions == label_batch).sum()
loss = error(outputs, label_batch)
running_loss += loss.item()
valid_loss_list.append(float(running_loss) / float(len(testloader.dataset)))
valid_accuracy_list.append(float(correct) / float(len(testloader.dataset)))
print("Epoch: {}, train_loss: {}, train_accuracy: {}%, test_loss: {}, test_accuracy: {}%".format(epoch,
train_loss_list[-1],
train_accuracy_list[-1],
valid_loss_list[-1],
valid_accuracy_list[-1]))
epoch_list.append(epoch)
scheduler.step()
if (epoch % 10) == 0:
torch.save({
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, PATH)
m = nn.Softmax(dim=-1)
data = next(iter(testloader))
image_batch, text_batch, label_batch, idx_batch = data['image'], data['attribute'], data['label'], data['data_indx']
outputs = model(image_batch, text_batch)
for id in range(64):
plt.imshow(images[idx_batch[id]].cpu().detach().permute(1, 2, 0))
plt.show()
print(m(outputs[id]).cpu().topk(3, dim=-1))
top3 = m(outputs[id]).cpu().topk(3, dim=-1).indices
print([class_list[i] for i in top3])
print(attr[idx_batch[id]])
```
## Libraries
```
import pandas as pd
import numpy as np
import scipy.stats as stat
from math import sqrt
from mlgear.utils import show, display_columns
from surveyweights import normalize_weights, run_weighting_iteration
def margin_of_error(n=None, sd=None, p=None, type='proportion', interval_size=0.95):
z_lookup = {0.8: 1.28, 0.85: 1.44, 0.9: 1.65, 0.95: 1.96, 0.99: 2.58}
if interval_size not in z_lookup.keys():
raise ValueError('{} not a valid `interval_size` - must be {}'.format(interval_size,
', '.join(list(z_lookup.keys()))))
if type == 'proportion':
se = sqrt(p * (1 - p)) / sqrt(n)
elif type == 'continuous':
se = sd / sqrt(n)
else:
raise ValueError('{} not a valid `type` - must be proportion or continuous'.format(type))
z = z_lookup[interval_size]
return se * z
def print_pct(pct, digits=0):
pct = pct * 100
pct = np.round(pct, digits)
if pct >= 100:
if digits == 0:
val = '>99.0%'
else:
val = '>99.'
for d in range(digits - 1):
val += '9'
val += '9%'
elif pct <= 0:
if digits == 0:
val = '<0.1%'
else:
val = '<0.'
for d in range(digits - 1):
val += '0'
val += '1%'
else:
val = '{}%'.format(pct)
return val
def calc_result(biden_vote, trump_vote, n, interval=0.8):
GENERAL_POLLING_ERROR = 5.0
N_SIMS = 100000
biden_moe = margin_of_error(n=n, p=biden_vote/100, interval_size=interval)
trump_moe = margin_of_error(n=n, p=trump_vote/100, interval_size=interval)
undecided = (100 - biden_vote - trump_vote) / 2
biden_mean = biden_vote + undecided * 0.25
biden_raw_moe = biden_moe * 100
biden_allocate_undecided = undecided * 0.4
biden_margin = biden_raw_moe + biden_allocate_undecided + GENERAL_POLLING_ERROR
trump_mean = trump_vote + undecided * 0.25
trump_raw_moe = trump_moe * 100
trump_allocate_undecided = undecided * 0.4
trump_margin = trump_raw_moe + trump_allocate_undecided + GENERAL_POLLING_ERROR
cdf_value = 0.5 + 0.5 * interval
normed_sigma = stat.norm.ppf(cdf_value)
biden_sigma = biden_margin / 100 / normed_sigma
biden_sims = np.random.normal(biden_mean / 100, biden_sigma, N_SIMS)
trump_sigma = trump_margin / 100 / normed_sigma
trump_sims = np.random.normal(trump_mean / 100, trump_sigma, N_SIMS)
chance_pass = np.sum([sim[0] > sim[1] for sim in zip(biden_sims, trump_sims)]) / N_SIMS
low, high = np.percentile(biden_sims - trump_sims, [20, 80]) * 100
return {'mean': biden_mean - trump_mean, 'high': high, 'low': low, 'n': n,
'raw_moe': biden_raw_moe + trump_raw_moe,
'margin': (biden_margin + trump_margin) / 2,
'sigma': (biden_sigma + trump_sigma) / 2,
'chance_pass': chance_pass}
def print_result(mean, high, low, n, raw_moe, margin, sigma, chance_pass):
mean = np.round(mean, 1)
first = np.round(high, 1)
second = np.round(low, 1)
sigma = np.round(sigma * 100, 1)
raw_moe = np.round(raw_moe, 1)
margin = np.round(margin, 1)
chance_pass = print_pct(chance_pass, 1)
if second < first:
_ = first
first = second
second = _
if second > 100:
second = 100
if first < -100:
first = -100
print(('Result Biden {} (80% CI: {} to {}) (Weighted N={}) (raw_moe={}pts, margin={}pts, '
'sigma={}pts) (Biden {} likely to win)').format(mean,
first,
second,
n,
raw_moe,
margin,
sigma,
chance_pass))
print(('Biden {} (80% CI: {} to {}) ({} Biden)').format(mean,
first,
second,
chance_pass))
print('-')
def calc_result_sen(dem_vote, rep_vote, n, interval=0.8):
GENERAL_POLLING_ERROR = 5.0
N_SIMS = 100000
dem_moe = margin_of_error(n=n, p=dem_vote/100, interval_size=interval)
rep_moe = margin_of_error(n=n, p=rep_vote/100, interval_size=interval)
undecided = 100 - dem_vote - rep_vote
dem_mean = dem_vote + undecided * 0.25
dem_raw_moe = dem_moe * 100
dem_allocate_undecided = undecided * 0.4
dem_margin = dem_raw_moe + dem_allocate_undecided + GENERAL_POLLING_ERROR
rep_mean = rep_vote + undecided * 0.25
rep_raw_moe = rep_moe * 100
rep_allocate_undecided = undecided * 0.4
rep_margin = rep_raw_moe + rep_allocate_undecided + GENERAL_POLLING_ERROR
cdf_value = 0.5 + 0.5 * interval
normed_sigma = stat.norm.ppf(cdf_value)
dem_sigma = dem_margin / 100 / normed_sigma
dem_sims = np.random.normal(dem_mean / 100, dem_sigma, N_SIMS)
rep_sigma = rep_margin / 100 / normed_sigma
rep_sims = np.random.normal(rep_mean / 100, rep_sigma, N_SIMS)
chance_pass = np.mean(dem_sims > rep_sims)
low, high = np.percentile(dem_sims - rep_sims, [20, 80]) * 100
return {'mean': dem_mean - rep_mean, 'high': high, 'low': low, 'n': n,
'raw_moe': dem_raw_moe + rep_raw_moe,
'margin': (dem_margin + rep_margin) / 2,
'sigma': (dem_sigma + rep_sigma) / 2,
'chance_pass': chance_pass}
def print_result_sen(mean, high, low, n, raw_moe, margin, sigma, chance_pass):
mean = np.round(mean, 1)
first = np.round(high, 1)
second = np.round(low, 1)
sigma = np.round(sigma * 100, 1)
raw_moe = np.round(raw_moe, 1)
margin = np.round(margin, 1)
chance_pass = print_pct(chance_pass, 1)
if second < first:
first, second = second, first
if second > 100:
second = 100
if first < -100:
first = -100
print(('Result Dem Sen {} (80% CI: {} to {}) (Weighted N={}) (raw_moe={}pts, margin={}pts, '
'sigma={}pts) (Dem Sen {} likely to win)').format(mean,
first,
second,
n,
raw_moe,
margin,
sigma,
chance_pass))
print(('Dem {} (80% CI: {} to {}) ({} Dem)').format(mean,
first,
second,
chance_pass))
print('-')
```
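The `margin_of_error` helper called in these functions is defined earlier in the notebook and not shown in this excerpt. A minimal sketch consistent with how it is called here — a normal-approximation binomial margin at the given two-sided interval size, returned as a fraction (the callers multiply by 100) — might look like:

```python
import numpy as np
import scipy.stats as stat

def margin_of_error(n, p, interval_size=0.8):
    """Half-width of a normal-approximation confidence interval for a
    proportion p estimated from n responses, returned as a fraction."""
    z = stat.norm.ppf(0.5 + 0.5 * interval_size)  # two-sided z-score
    return z * np.sqrt(p * (1 - p) / n)
```

The z-score line mirrors the `normed_sigma = stat.norm.ppf(cdf_value)` computation used above, so the margin and the simulation sigma stay on the same scale.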
## Load Processed Data
```
survey = pd.read_csv('responses_processed_national_weighted.csv').fillna('Not presented')
```
## State Presidential Models
```
POTUS_CENSUS = {'Alabama': {'Hillary Clinton': 0.3436, 'Donald Trump': 0.6208},
'Alaska': {'Hillary Clinton': 0.3655, 'Donald Trump': 0.5128},
'Arizona': {'Hillary Clinton': 0.4513, 'Donald Trump': 0.4867},
'Arkansas': {'Hillary Clinton': 0.3365, 'Donald Trump': 0.6057},
'California': {'Hillary Clinton': 0.6173, 'Donald Trump': 0.3162},
'Colorado': {'Hillary Clinton': 0.4816, 'Donald Trump': 0.4325},
'Connecticut': {'Hillary Clinton': 0.5457, 'Donald Trump': 0.4093},
'Delaware': {'Hillary Clinton': 0.531, 'Donald Trump': 0.417},
'Washington DC': {'Hillary Clinton': 0.905, 'Donald Trump': 0.016},
'Florida': {'Hillary Clinton': 0.478, 'Donald Trump': 0.490},
'Georgia': {'Hillary Clinton': 0.456, 'Donald Trump': 0.508},
'Hawaii': {'Hillary Clinton': 0.622, 'Donald Trump': 0.300},
'Idaho': {'Hillary Clinton': 0.275, 'Donald Trump': 0.593},
'Illinois': {'Hillary Clinton': 0.558, 'Donald Trump': 0.379},
'Indiana': {'Hillary Clinton': 0.379, 'Donald Trump': 0.511},
'Iowa': {'Hillary Clinton': 0.417, 'Donald Trump': 0.512},
'Kansas': {'Hillary Clinton': 0.361, 'Donald Trump': 0.567},
'Kentucky': {'Hillary Clinton': 0.327, 'Donald Trump': 0.625},
'Louisiana': {'Hillary Clinton': 0.385, 'Donald Trump': 0.581},
'Maine': {'Hillary Clinton': 0.478, 'Donald Trump': 0.449},
'Maryland': {'Hillary Clinton': 0.603, 'Donald Trump': 0.339},
'Massachusetts': {'Hillary Clinton': 0.600, 'Donald Trump': 0.328},
'Michigan': {'Hillary Clinton': 0.473, 'Donald Trump': 0.475},
'Minnesota': {'Hillary Clinton': 0.464, 'Donald Trump': 0.449},
'Mississippi': {'Hillary Clinton': 0.401, 'Donald Trump': 0.579},
'Missouri': {'Hillary Clinton': 0.401, 'Donald Trump': 0.579},
'Montana': {'Hillary Clinton': 0.381, 'Donald Trump': 0.562},
'Nebraska': {'Hillary Clinton': 0.337, 'Donald Trump': 0.588},
'Nevada': {'Hillary Clinton': 0.479, 'Donald Trump': 0.455},
'New Hampshire': {'Hillary Clinton': 0.470, 'Donald Trump': 0.466},
'New Jersey': {'Hillary Clinton': 0.555, 'Donald Trump': 0.414},
'New Mexico': {'Hillary Clinton': 0.483, 'Donald Trump': 0.404},
'New York': {'Hillary Clinton': 0.590, 'Donald Trump': 0.365},
'North Carolina': {'Hillary Clinton': 0.462, 'Donald Trump': 0.498},
'North Dakota': {'Hillary Clinton': 0.272, 'Donald Trump': 0.630},
'Ohio': {'Hillary Clinton': 0.436, 'Donald Trump': 0.517},
'Oklahoma': {'Hillary Clinton': 0.289, 'Donald Trump': 0.653},
'Oregon': {'Hillary Clinton': 0.501, 'Donald Trump': 0.391},
'Pennsylvania': {'Hillary Clinton': 0.475, 'Donald Trump': 0.481},
'Rhode Island': {'Hillary Clinton': 0.544, 'Donald Trump': 0.389},
'South Carolina': {'Hillary Clinton': 0.407, 'Donald Trump': 0.549},
'South Dakota': {'Hillary Clinton': 0.317, 'Donald Trump': 0.615},
'Tennessee': {'Hillary Clinton': 0.347, 'Donald Trump': 0.607},
'Texas': {'Hillary Clinton': 0.432, 'Donald Trump': 0.522},
'Utah': {'Hillary Clinton': 0.275, 'Donald Trump': 0.454},
'Vermont': {'Hillary Clinton': 0.567, 'Donald Trump': 0.303},
'Virginia': {'Hillary Clinton': 0.497, 'Donald Trump': 0.444},
'Washington': {'Hillary Clinton': 0.525, 'Donald Trump': 0.368},
'West Virginia': {'Hillary Clinton': 0.264, 'Donald Trump': 0.685},
'Wisconsin': {'Hillary Clinton': 0.465, 'Donald Trump': 0.472},
'Wyoming': {'Hillary Clinton': 0.216, 'Donald Trump': 0.674 }}
for state in POTUS_CENSUS.keys():
print('## {} ##'.format(state.upper()))
state_survey = survey.copy()
potus_census = {'vote2016': POTUS_CENSUS[state].copy()}
potus_census['vote2016']['Other'] = 1 - potus_census['vote2016']['Hillary Clinton'] - potus_census['vote2016']['Donald Trump']
output = run_weighting_iteration(state_survey, census=potus_census, weigh_on=['vote2016'], verbose=0)
potus_weights = output['weights']['vote2016']
potus_weights = state_survey['vote2016'].astype(str).replace(potus_weights)
state_survey['weight'] = normalize_weights(state_survey['weight'] * potus_weights)
state_survey['lv_weight'] = normalize_weights(state_survey['weight'] * state_survey['lv_index'])
options = ['Donald Trump', 'Hillary Clinton', 'Other']
survey_ = state_survey.loc[state_survey['vote2016'].isin(options)].copy()
survey_['weight'] = normalize_weights(survey_['weight'])
survey_['rv_weight'] = normalize_weights(survey_['rv_weight'])
survey_['lv_weight'] = normalize_weights(survey_['lv_weight'])
lv_weighted_n = int(np.round(survey_['lv_weight'].apply(lambda w: 1 if w > 1 else w).sum()))
votes = survey_['vote2016'].value_counts(normalize=True) * survey_.groupby('vote2016')['lv_weight'].mean() * 100
votes = votes[options] * (100 / votes[options].sum())
raw_result = potus_census['vote2016']['Hillary Clinton'] - potus_census['vote2016']['Donald Trump']
print('Raw result: {}'.format(np.round(raw_result * 100, 1)))
print(votes)
options = ['Joe Biden, the Democrat', 'Donald Trump, the Republican', 'Another candidate', 'Not decided']
survey_ = state_survey.loc[state_survey['vote_trump_biden'].isin(options)].copy()
survey_['weight'] = normalize_weights(survey_['weight'])
survey_['lv_weight'] = normalize_weights(survey_['lv_weight'])
votes = survey_['vote_trump_biden'].value_counts(normalize=True) * survey_.groupby('vote_trump_biden')['lv_weight'].mean() * 100
votes = votes[options] * (100 / votes[options].sum())
print(votes)
print('-')
print_result(**calc_result(biden_vote=votes['Joe Biden, the Democrat'],
trump_vote=votes['Donald Trump, the Republican'],
n=lv_weighted_n))
print('-')
```
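The helpers `run_weighting_iteration` and `normalize_weights` used in these loops come from earlier in the notebook and are not shown in this excerpt. A minimal sketch of the normalization convention the loops appear to rely on — rescaling weights to mean 1, so the weighted N stays comparable to the raw N — could be:

```python
import pandas as pd

def normalize_weights(weights):
    """Rescale survey weights so they average to 1, preserving their ratios."""
    weights = pd.Series(weights, dtype=float)
    return weights / weights.mean()
```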
## State Models (Alt Weights, Post-Hoc)
```
for state in POTUS_CENSUS.keys():
print('## {} ##'.format(state.upper()))
state_survey = survey.copy()
potus_census = {'vote2016': POTUS_CENSUS[state].copy()}
potus_census['vote2016']['Other'] = 1 - potus_census['vote2016']['Hillary Clinton'] - potus_census['vote2016']['Donald Trump']
output = run_weighting_iteration(state_survey, census=potus_census, weigh_on=['vote2016'], verbose=0)
potus_weights = output['weights']['vote2016']
potus_weights = state_survey['vote2016'].astype(str).replace(potus_weights)
state_survey['weight'] = normalize_weights(state_survey['weight'] * potus_weights)
state_survey['lv_weight'] = normalize_weights(state_survey['weight'] * state_survey['lv_index'])
state_survey['lv_weight_alt'] = state_survey['lv_weight']
state_survey.loc[(~state_survey['voted2016']) & (state_survey['vote_trump_biden'] == 'Donald Trump, the Republican'), 'lv_weight_alt'] *= 1.662
state_survey['lv_weight_alt'] = normalize_weights(state_survey['lv_weight_alt'])
options = ['Joe Biden, the Democrat', 'Donald Trump, the Republican', 'Another candidate', 'Not decided']
survey_ = state_survey.loc[state_survey['vote_trump_biden'].isin(options)].copy()
survey_['lv_weight_alt'] = normalize_weights(survey_['lv_weight_alt'])
votes = survey_['vote_trump_biden'].value_counts(normalize=True) * survey_.groupby('vote_trump_biden')['lv_weight_alt'].mean() * 100
votes = votes[options] * (100 / votes[options].sum())
print(votes)
print('-')
print_result(**calc_result(biden_vote=votes['Joe Biden, the Democrat'],
trump_vote=votes['Donald Trump, the Republican'],
n=lv_weighted_n))
print('-')
```
## Senate Models
```
SENATE_STATES = ['Alabama', 'Alaska', 'Arizona', 'Arkansas', 'Colorado', 'Delaware', 'Georgia',
'Idaho', 'Illinois', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Maine',
'Massachusetts', 'Michigan', 'Minnesota', 'Mississippi', 'Montana', 'Nebraska',
'New Hampshire', 'New Jersey', 'New Mexico', 'North Carolina', 'Oklahoma',
'Oregon', 'Rhode Island', 'South Carolina', 'South Dakota', 'Tennessee',
'Texas', 'Virginia', 'West Virginia', 'Wyoming']
for state in SENATE_STATES:
print('## {} ##'.format(state.upper()))
state_survey = survey.copy()
potus_census = {'vote2016': POTUS_CENSUS[state].copy()}
potus_census['vote2016']['Other'] = 1 - potus_census['vote2016']['Hillary Clinton'] - potus_census['vote2016']['Donald Trump']
output = run_weighting_iteration(state_survey, census=potus_census, weigh_on=['vote2016'], verbose=0)
potus_weights = output['weights']['vote2016']
potus_weights = state_survey['vote2016'].astype(str).replace(potus_weights)
state_survey['weight'] = normalize_weights(state_survey['weight'] * potus_weights)
state_survey['lv_weight'] = normalize_weights(state_survey['weight'] * state_survey['lv_index'])
options = ['A Democratic candidate', 'A Republican candidate', 'Another candidate', 'Not decided']
survey_ = state_survey.loc[state_survey['vote_senate'].isin(options)].copy()
survey_['weight'] = normalize_weights(survey_['weight'])
survey_['lv_weight'] = normalize_weights(survey_['lv_weight'])
votes = survey_['vote_senate'].value_counts(normalize=True) * survey_.groupby('vote_senate')['lv_weight'].mean() * 100
votes = votes[options] * (100 / votes[options].sum())
print(votes)
print('-')
print_result_sen(**calc_result_sen(dem_vote=votes['A Democratic candidate'],
rep_vote=votes['A Republican candidate'],
n=lv_weighted_n))
print('-')
```
# Live Twitter Sentiments for Cryptocurrencies
Plot the evolution over time of tweet sentiment for a cryptocurrency. We will use *tweepy*'s streaming API to watch the live evolution of Twitter sentiment for cryptocurrencies.
* *Inputs*: currency keywords to search for on Twitter, number of tweets to analyse for sentiment, plot update interval in seconds (default = 1.0 seconds).
* *Output*: plot of the sentiment analysis and its running mean over time for a specific cryptocurrency.
* *Note*: the free Twitter plan lets you download *100 tweets per search*, and you can search tweets from the previous seven days. *Please check the limits on retrieving tweets per day or month before using this script!*
### Requirements
* *Language*: Python 3.*
* *Dependencies*: tweepy = retrieve tweets using APIs; json = handling the API results; textblob = text operations and sentiment analysis; re = text processing; matplotlib = plots; numpy = numerical calculations; IPython = interactive plots in notebooks
* *Other tools*: TextBlob corpora for text processing: *python -m textblob.download_corpora*
## How to use
Complete your Twitter API credentials, your crypto keywords, and the number of tweets, then run the entire notebook.
## Step 1: Import the python dependencies
```
import time, json, re
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
from textblob import TextBlob
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import clear_output
%matplotlib inline
```
## Step 2: Define your data
You need to define the keywords, the number of tweets, the update interval, and your Twitter API keys. You can define the keys here or read them from a JSON file.
```
# YOUR preference (to complete)
keywords = ["Bitcoin", 'BTC'] # a set of keywords for a crypto
noTweets = 10 # number of tweets/connections
secUpdate = 1.0 # update interval in seconds
# YOUR Twitter API information (to complete)
# if you have a local file with your info, omit these lines
CONSUMER_KEY = 'YOUR DATA'
CONSUMER_SECRET = 'YOUR DATA'
ACCESS_TOKEN = 'YOUR DATA'
ACCESS_SECRET = 'YOUR DATA'
# Setting a JSON of your credentials (to complete)
creds = {"CONSUMER_KEY": CONSUMER_KEY, "CONSUMER_SECRET": CONSUMER_SECRET,
"ACCESS_TOKEN": ACCESS_TOKEN, "ACCESS_SECRET": ACCESS_SECRET}
# If you didn't define them above, load credentials from a JSON file
# (overwrite creds with data from the file if available)
try:
print('-> Reading Twitter API credentials from file ... ')
with open("twitter_credentials.json", "r") as file:
creds = json.load(file)
print('Done!')
except:
print('! There is no twitter API credential file! Using the information you defined above!')
```
## Step 3: Define a custom class for Twitter streaming
We will use some variables as globals in order to pass parameters in from the main code (currency keywords to search on Twitter, number of tweets to analyse for sentiment, plot refresh time) and to fill lists with tweet sentiments, the times at which they were analysed, and the running means of the sentiments. These lists will be used to interactively plot the evolution of the sentiment and its mean.
```
class listener(StreamListener):
def on_data(self,data):
global initime # to calculate the time of analysis
global inidatetime # to print the initial datetime
global count # counting the tweets
global t # list with the time of sentiment analysis
global sent # list with sentiments at moments t
global sentMeans # list of sentiment means at different time
global keywords # external - list with keywords for a crypto
global noTweets # external - number of tweets to get with your twitter API
global secUpdate # external - number of seconds to update the plot
# update the list for analysis time
currTime = int(time.time()-initime)
t.append(currTime)
# get the tweet data
all_data=json.loads(data)
# encode to unicode for different types of characters
tweet=all_data["text"].encode("utf-8")
# remove URLs from tweets
tweet = re.sub(r"http\S+", "", str(tweet))
# remove strange characters from the tweet
tweet=" ".join(re.findall("[a-zA-Z]+", str(tweet)))
# strip the spaces from the tweet
blob=TextBlob(tweet.strip())
# count the tweets
count=count+1
# update the list for sentiments and the means at different time
sent.append(blob.sentiment.polarity)
sentMeans.append(np.mean(sent))
# Plotting sentiment analysis in time for a cryptocurrency
# clear the plot
clear_output(wait=True)
# set axis, labels
plt.xlabel('Time')
plt.ylabel('Twitter sentiment')
# set grid
plt.grid()
# print the current mean of sentiments
print('Live Twitter sentiment analysis for cryptocurrencies')
print('**********************************************************************')
print('From: '+str(inidatetime)+' To: '+str(time.ctime()))
print('Sentiment Mean for '+str(keywords)+': '+str(np.mean(sent)))
# plot sentiments and means in time
plt.plot(t,sent, t,sentMeans)
# add legend
plt.legend(['Sentiment', 'Sentiment Mean'],loc='center left', bbox_to_anchor=(1, 0.5))
# plotting
plt.show()
# wait for update
plt.pause(secUpdate) # wait 1 sec!
# if we have the number of tweets, end the script
if count==noTweets:
return False
else:
return True
def on_error(self,status):
print(status)
```
## Step 4: Run the Twitter stream for sentiment analysis
Initialize all the variables and use the tweets stream for sentiment analysis plotting:
```
# Define external variables to be used inside the streaming class
t = [0] # list with time
sent = [0] # list with tweets sentiment in time
sentMeans = [0] # list with means of sentiment in time
count=0 # current number of tweets
initime=time.time() # to calculate the time
inidatetime = time.ctime() # initial date time in readable format
# set up the twitter streaming
auth=OAuthHandler(creds['CONSUMER_KEY'],creds['CONSUMER_SECRET'])
auth.set_access_token(creds['ACCESS_TOKEN'],creds['ACCESS_SECRET'])
# start the stream with tweets matching your keywords
twitterStream = Stream(auth, listener())
twitterStream.filter(track=keywords)
```
### Hint
You can use this notebook for any twitter search, not limited to the cryptocurrencies!
Hf!
2018@muntisa
# CNN MNIST
```
#importing functions from python3 to python2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
#importing numpy and tensorflow
import numpy as np
import tensorflow as tf
#ignore all the warnings and don't show them in the notebook
import warnings
warnings.filterwarnings('ignore')
#treshold on what messages are to be logged
tf.logging.set_verbosity(tf.logging.INFO)
#importing debug library
from tensorflow.python import debug as tf_debug
```
## Debugger
### Uncomment the below line and execute the code to run the debugger.
### Go to the link once you start execution http://localhost:6006/
```
#Uncomment the below line to run the debugger
#Add monitor=[hook] as a parameter to the estimators below
hook = tf_debug.TensorBoardDebugHook("localhost:6064")
#RPCs have a max payload size of 4194304 bytes (by default).
#Your program raises an exception when the debugger logic (within TensorBoardDebugHook) tries to send the
#graph of the model to TensorBoard, perhaps because the graph is large. For now, it is OK to prevent the
#debugger from sending the graph (and Python tracebacks) to TensorBoard by setting send_traceback_and_source_code to False.
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer
# Reshape X to 4-D tensor: [batch_size, width, height, channels]
# MNIST images are 28x28 pixels, and have one color channel
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
# Convolutional Layer #1
# Computes 32 features using a 5x5 filter with ReLU activation.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 28, 28, 1]
# Output Tensor Shape: [batch_size, 28, 28, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1
# First max pooling layer with a 2x2 filter and stride of 2
# Input Tensor Shape: [batch_size, 28, 28, 32]
# Output Tensor Shape: [batch_size, 14, 14, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2
# Computes 64 features using a 5x5 filter.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 14, 14, 32]
# Output Tensor Shape: [batch_size, 14, 14, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2
# Second max pooling layer with a 2x2 filter and stride of 2
# Input Tensor Shape: [batch_size, 14, 14, 64]
# Output Tensor Shape: [batch_size, 7, 7, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor into a batch of vectors
# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 7 * 7 * 64]
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
# Dense Layer
# Densely connected layer with 1024 neurons
# Input Tensor Shape: [batch_size, 7 * 7 * 64]
# Output Tensor Shape: [batch_size, 1024]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
# Logits layer
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 10]
logits = tf.layers.dense(inputs=dropout, units=10)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
#if predict mode, run the estimator and return the values
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Calculate Loss (for both TRAIN and EVAL modes)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode)
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
def main(unused_argv):
# Load training data
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
#load the the train data labels
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
#load the test data
eval_data = mnist.test.images # Returns np.array
#load the test data labels
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
# Create the Estimator
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
# Set up logging for predictions
# Log the values in the "Softmax" tensor with label "probabilities"
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
# Train the model
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=None,
shuffle=True)
mnist_classifier.train(
input_fn=train_input_fn,
steps=200,
hooks=[hook]) #[logging_hook]
# Evaluate the model and print results
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)
#Run the model, steps set to 200 instead of 20000 as the execution time was large
#Changing steps back to 20000 in model training results in an accuracy of 97%
if __name__ == "__main__":
tf.app.run()
```
### AD470 - Module 7 Introduction to Deep Learning Programming Assignment
#### Andrew Boyer
#### Brandan Owens
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.io
from sklearn.preprocessing import StandardScaler
import tensorflow
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
```
#### Q.1(a) Use pandas to read in the dataset “Churn_Modelling.csv”
```
churn_df = pd.read_csv("../dataFiles/Churn_Modelling.csv")
churn_df.columns
```
#### (b) Create the following bar plots.
```
sns.countplot(data = churn_df, x = 'Exited' )
sns.countplot(data = churn_df , x = 'Geography', hue = 'Exited')
sns.barplot(data=churn_df , x= 'Geography', y= 'Balance')
```
#### (c) From the dataframe, find the percentage of people who exited, and the percentage of people who did not exit.
```
churn_df['Exited'].value_counts()/churn_df['Exited'].count()*100
```
#### (d) Check for any missing values in the dataframe.
```
churn_df.isnull().values.any()
```
#### (e) Define X and y
```
X = churn_df.drop(['RowNumber', 'CustomerId', 'Surname', 'Exited'], axis=1)
y = churn_df['Exited']
```
#### (f) Get dummies for all categorical variables of X, remember to set drop_first = True.
```
X = pd.get_dummies(X, drop_first = True)
X
```
#### (g) Split the dataset into training set and test set. test_size=0.2, random_state=0
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```
#### (h) Use the following codes to do the feature scaling on the training and test sets. (Standardize all numerical variables by subtracting the means and dividing each variable by its standard deviation.)
```
sc_x = StandardScaler()
X_train = pd.DataFrame(sc_x.fit_transform(X_train), columns=X.columns.values)
X_test = pd.DataFrame(sc_x.transform(X_test), columns=X.columns.values)
```
#### (i) Build a 4-layer neural network.
```
#model = keras.Sequential([
# layers.Dense(6, activation='relu', input_shape=[11]),
# layers.Dense(12, activation='relu'),
# layers.Dense(24, activation='relu'),
# layers.Dense(1, activation='sigmoid'),
#])
model = Sequential()
model.add(Dense(6, input_shape=(11,), activation='relu'))
model.add(Dense(12, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
#### (j) Compile the neural network.
```
model.compile(optimizer='adam',
loss = 'binary_crossentropy',
metrics=['accuracy'])
#model.summary()
#x_partial_train = X_train[:100]
#y_partial_train = y_train[:100]
#x_val = X_train[100:]
#y_val = y_train[100:]
```
#### (k) Fit the model on training set. Set the batch_size =10, run for 100 epochs.
```
history = model.fit(
X_train, y_train,
validation_data=(X_test,y_test),
epochs=100,
batch_size =10,
)
```
#### (l) Evaluate the model on test set.
```
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training Accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
#### (m) Finally, predict the probability of y = Exited on the test set.
```
prediction = model.predict(X_test)
print(prediction)
new_pred = (prediction > 0.6)
true_count = np.count_nonzero(new_pred)
print(true_count/new_pred.size)
print("% of customers that have a 60% or greater chance of leaving the bank")
```
#### Q.2 (a) Download the file 'natural_images.zip', and extra the files.
```
import zipfile
local_zip = "../dataFiles/natural_images.zip"
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('natural_images')
```
#### (b) Use os.listdir to create a list of labels.
```
import os
os.listdir("natural_images")
```
#### (c) Display the first 5 images of each class.
```
from IPython.display import Image, display
display(Image(filename=image_file)) # image_file: path to an image; loop over the first 5 images of each class
```
#### (d) Create the following barplot.
#### (e) Use cv2.imread() to convert the images into a numpy array (X). Then, use cv2.resize() so that each image has size (32, 32). Create an array which contains the label of each image (Y).
#### (f) Print the shape of images (X) and shape of labels (Y).
#### (g) Standardize X by dividing X by 255.
#### (h) Use LabelEncoder() to encode Y. Use to_categorical() covert Y into categorical numpy array.
#### (i) Split the data into training set and test set. test_size = 0.33, random_state = 46.
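Parts (e)–(i) are left without code above; one possible sketch, assuming the extracted layout `natural_images/<class>/<image file>` (the folder layout and helper names here are illustrative, not from the original assignment):

```python
import os
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

def load_natural_images(root="natural_images", size=(32, 32)):
    """(e) Read every image under root/<class>/, resize it, and keep its label."""
    import cv2  # OpenCV is only needed for this loading step
    X, Y = [], []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        for fname in sorted(os.listdir(class_dir)):
            img = cv2.imread(os.path.join(class_dir, fname))
            X.append(cv2.resize(img, size))
            Y.append(label)
    return np.array(X), np.array(Y)

def encode_and_split(X, Y, test_size=0.33, random_state=46):
    X = X / 255.0                                        # (g) scale pixels to [0, 1]
    Y = to_categorical(LabelEncoder().fit_transform(Y))  # (h) one-hot encode labels
    return train_test_split(X, Y, test_size=test_size, random_state=random_state)  # (i)

# (f) X, Y = load_natural_images(); print(X.shape, Y.shape)
```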
#### (j) Build a CNN model
- first layer is Conv2D, filters = 32, kernel_size = (5,5), activation = relu.
- second layer is MaxPool2D, pool_size = (2,2)
- third layer is Conv2D, filters = 64, kernel_size = (3,3), activation = relu.
- fourth layer is MaxPool2D, pool_size = (2,2)
- fifth layer to flatten the tensors.
- sixth layer is Dense, output shape = 256, activation = relu.
- seventh layer is Dense, output shape = 8, activation = softmax.
#### (k) Compile the model: loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']
#### (l) fit the model, epochs = 25, validation_split = 0.2
#### (m) Plot the change in loss score on the training set and validation set over epochs.
#### (n) Plot the change in accuracy on the training set and validation set over epochs.
#### (o) Retrain the model using the entire training set and set epochs = 5. Evaluate the model on the test set.
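The model described in (j)–(l) can be sketched as follows (assuming the preprocessed `(32, 32, 3)` arrays from the earlier steps; this is one reading of the layer list, not the official solution):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

def build_cnn(input_shape=(32, 32, 3), n_classes=8):
    # (j) the seven layers in the order listed in the assignment
    model = Sequential([
        Conv2D(32, kernel_size=(5, 5), activation='relu', input_shape=input_shape),
        MaxPool2D(pool_size=(2, 2)),
        Conv2D(64, kernel_size=(3, 3), activation='relu'),
        MaxPool2D(pool_size=(2, 2)),
        Flatten(),
        Dense(256, activation='relu'),
        Dense(n_classes, activation='softmax'),
    ])
    # (k) compile with the specified loss, optimizer, and metric
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

# (l) history = build_cnn().fit(X_train, y_train, epochs=25, validation_split=0.2)
```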
# OptNet/qpth Example Sudoku Notebook
*By [Brandon Amos](https://bamos.github.io) and [J. Zico Kolter](http://zicokolter.com/).*
---
This notebook is released along with our paper
[OptNet: Differentiable Optimization as a Layer in Neural Networks](https://arxiv.org/abs/1703.00443).
This notebook shows an example of constructing an
OptNet layer in PyTorch with our [qpth library](https://github.com/locuslab/qpth)
to solve [the game Sudoku](https://en.wikipedia.org/wiki/Sudoku)
as a prediction problem from data.
See [our qpth documentation page](https://locuslab.github.io/qpth/)
for more details on how to use `qpth`.
The experiments for our paper that use this library are in
[this repo](https://github.com/locuslab/optnet).
Specifically [here](https://github.com/locuslab/optnet/tree/master/sudoku)
is the full source code for the published version of Sudoku.
## Setup and Dependencies
+ Python/numpy/[PyTorch](https://pytorch.org)
+ [qpth](https://github.com/locuslab/qpth):
*Our fast QP solver for PyTorch released in conjunction with this paper.*
+ [bamos/block](https://github.com/bamos/block):
*Our intelligent block matrix library for numpy, PyTorch, and beyond.*
+ Optional: [bamos/setGPU](https://github.com/bamos/setGPU):
A small library to set `CUDA_VISIBLE_DEVICES` on multi-GPU systems.
```
import os
import sys
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Function, Variable
from torch.nn.parameter import Parameter
import torch.nn.functional as F
from qpth.qp import QPFunction
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('bmh')
%matplotlib inline
```
# Setup: Download the data and pretrained model
+ The pre-trained model is for later.
The following command should download everything to a tmp directory for you
if you have the `wget` and `tar` commands installed.
+ (Sorry for the bad form here)
```
tmpDir = "/tmp/optnet.sudoku"
cmd = ('mkdir {}; cd {} &&'
'wget "http://joule.isr.cs.cmu.edu:11235/optnet/arxiv.v1.sudoku.tgz" && '
'tar xf arxiv.v1.sudoku.tgz').format(*[tmpDir]*2)
dataDir = os.path.join(tmpDir, 'arxiv.v1.sudoku')
assert os.system(cmd) == 0
sys.path.append(tmpDir+'/arxiv.v1.sudoku')
import models # From /tmp/optnet.sudoku/arxiv.v1.sudoku/models.py
testPct = 0.1
with open('{}/2/features.pt'.format(dataDir), 'rb') as f:
X = torch.load(f)
with open('{}/2/labels.pt'.format(dataDir), 'rb') as f:
Y = torch.load(f)
N, nFeatures = X.size(0), int(np.prod(X.size()[1:]))
nTrain = int(N*(1.-testPct))
nTest = N-nTrain
trainX = X[:nTrain]
trainY = Y[:nTrain]
testX = X[nTrain:]
testY = Y[nTrain:]
```
## What the data for the Sudoku task looks like
The inputs are incomplete boards and the outputs
are the completed boards. Here's what the first
input and output in the test set looks like.
```
def decode_onehot(encoded_board):
"""Take the unique argmax of the one-hot encoded board."""
v,I = torch.max(encoded_board, 0)
return ((v>0).long()*(I+1)).squeeze()
print("First testing example input (unsolved Sudoku board): ", decode_onehot(testX[0]))
print("First testing example output (solved Sudoku board): ", decode_onehot(testY[0]))
```
You may have noticed that we had to decode those examples.
That's because they're actually *one-hot encoded* for how
we're going to model the task.
That means that instead of representing the values as
something between 1 and 4, they're represented
as a 4-dimensional vector with a 1 in the index of the value.
Here's what the same first example from the test set
actually looks like:
```
print("First test example input one-hot encoded (unsolved Sudoku board): ", testX[0])
print("First test example output one-hot encoded (solved Sudoku board): ", testY[0])
```
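As a cross-check of the encoding described above, here is a minimal pure-Python sketch of the per-cell one-hot scheme for the 4x4 mini-Sudoku (`encode_onehot` and `decode_onehot_list` are hypothetical helpers for illustration, not functions from this notebook):

```python
# One-hot scheme for a 4x4 mini-Sudoku cell: values in {1, ..., 4},
# 0 marks an empty cell (encoded as the all-zeros vector).
def encode_onehot(value, n=4):
    """Return an n-dimensional one-hot list for `value`, or all zeros if empty (0)."""
    vec = [0] * n
    if value > 0:
        vec[value - 1] = 1
    return vec

def decode_onehot_list(vec):
    """Inverse of encode_onehot: index of the 1 plus one, or 0 if all zeros."""
    return vec.index(1) + 1 if 1 in vec else 0

print(encode_onehot(3))                   # [0, 0, 1, 0]
print(decode_onehot_list([0, 0, 1, 0]))   # 3
print(decode_onehot_list([0, 0, 0, 0]))   # 0 (empty cell)
```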
# Defining a model for this task
We've now turned (mini-)Sudoku into a machine learning task that
you can apply any model and learning algorithm to.
In this notebook, we'll just show how to initialize and train
an OptNet model for this task.
However you can play around and swap this out for any
model you want!
Check out [our baseline models](https://github.com/locuslab/optnet/blob/master/sudoku/models.py)
if you're interested.
Sudoku is actually an integer programming problem, but
we can relax it to an LP (or, with a small ridge term added,
a QP, which is what we'll actually use) that can be expressed as:
```
y* = argmin_y 0.5 eps y^T y - p^T y
     s.t. Ay = b
          y >= 0
```
To quickly explain this, the quadratic term `0.5 eps y^T y`
is a small ridge term so we can use `qpth`,
`p` is the (flattened) one-hot encoded input,
the `-p^T y` term pushes the solution to contain
the same pieces as the unsolved board,
and the linear equality constraints `Ay = b`
encode the constraints of Sudoku (the row, columns,
and sub-blocks must contain all of the digits).
If you want to check your understanding of this:
1. What do some example constraints `a_i^T y = b_i` look like?
2. What happens if we remove the linear equality constraint?
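For question 1, here is a sketch of what one such constraint row could look like for the 4x4 case; the flattening convention and the helper name are illustrative assumptions, not the notebook's actual `A`. "Row `r` contains digit `v` exactly once" becomes a row of coefficients that sums the matching one-hot entries:

```python
# Flatten y as y[r][c][v] -> index n*n*r + n*c + v (an assumed convention).
# "Row r contains digit v+1 exactly once": sum over columns c of y[r][c][v] = 1.
def constraint_row_digit_in_row(r, v, n=4):
    a = [0.0] * (n * n * n)
    for c in range(n):
        a[n * n * r + n * c + v] = 1.0
    return a  # paired with b_i = 1.0

a0 = constraint_row_digit_in_row(r=0, v=0)
# A valid completed board satisfies a0 . y = 1; e.g. digit 1 at row 0, column 2:
y = [0.0] * 64
y[0 * 16 + 2 * 4 + 0] = 1.0
print(sum(ai * yi for ai, yi in zip(a0, y)))  # 1.0
```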
Implementing this model is just a few lines of PyTorch with our qpth library.
Note that in this notebook we'll just execute this on the CPU,
but for performance reasons you should use a GPU for serious
experiments:
```
class OptNet(nn.Module):
    def __init__(self, n, Qpenalty):
        super().__init__()
        nx = (n**2)**3
        self.Q = Variable(Qpenalty*torch.eye(nx).double())
        self.G = Variable(-torch.eye(nx).double())
        self.h = Variable(torch.zeros(nx).double())
        A_shape = (40, 64)  # Somewhat magic, it's from the true solution.
        self.A = Parameter(torch.rand(A_shape).double())
        self.b = Variable(torch.ones(A_shape[0]).double())

    def forward(self, puzzles):
        nBatch = puzzles.size(0)
        p = -puzzles.view(nBatch, -1)
        return QPFunction(verbose=-1)(
            self.Q, p.double(), self.G, self.h, self.A, self.b
        ).float().view_as(puzzles)
```
That's it! Let's randomly initialize this model and see what it does on the first test set example. What do you expect?
```
model = OptNet(2, 0.1)
pred = model(Variable(testX[0].unsqueeze(0))).squeeze().data
print("First test example input (unsolved Sudoku board): ", decode_onehot(testX[0]))
print("First test example output (TRUE solved Sudoku board): ", decode_onehot(testY[0]))
print("First test example prediction: ", decode_onehot(pred))
```
Wow, that prediction is way off! That's expected since the model was randomly initialized. Note that at this point, some of the constraints actually make it impossible to match the unsolved board (like the `4` at the top right corner).
Let's look at a random nonsense constraint that the model happens to satisfy. Here are the coefficients in the first row, `a_1` and `b_1`. The last line here
shows that the constraint is actually satisfied (up to machine precision).
```
np.set_printoptions(precision=2)
a0 = model.A[0].data.numpy()
b0 = model.b.data[0]
z = pred.numpy().ravel()
print('First row of A:\n', a0)
print('-'*30)
print('First entry of b: ', b0)
print('-'*30)
print('a0^T z - b: ', np.dot(a0, z) - b0)
```
# Training the model
Let's start training this model by comparing the predictions
to the true solutions and taking gradient steps.
This takes a while to run (overnight on a GPU), so here
we'll just take 10 steps through the first 10 training examples
to illustrate what the full training would look like.
```
loss_fn = torch.nn.MSELoss()

# Initialize the optimizer.
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(10):
    x_batch = Variable(trainX[t].unsqueeze(0))
    y_batch = Variable(trainY[t].unsqueeze(0))

    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x_batch)

    # Compute and print loss. (.item() replaces the long-deprecated loss.data[0].)
    loss = loss_fn(y_pred, y_batch)
    print('Iteration {}, loss = {:.2f}'.format(t, loss.item()))

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable weights
    # of the model).
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters.
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters.
    optimizer.step()
```
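The `zero_grad` / `backward` / `step` pattern above is the standard gradient-descent loop. As a dependency-free illustration of the same loop (a toy quadratic with a hand-computed gradient, not the Sudoku model):

```python
# Minimize f(w) = (w - 3)^2 by gradient descent; the two lines in the loop mirror
# backward (computing the gradient) and the optimizer step (the parameter update).
w = 0.0
learning_rate = 0.1
for t in range(100):
    grad = 2 * (w - 3)            # "backward": d f / d w
    w = w - learning_rate * grad  # "step": move against the gradient
print(round(w, 4))  # 3.0, the minimizer of f
```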
# Looking at a pre-trained model
Imagine you kept that running for a while.
Let's load the pre-trained model we downloaded earlier and
see the predictions on the first test example again:
```
A_file = os.path.join(tmpDir, 'arxiv.v1.sudoku', 'pretrained-optnet-A.pth')
trainedA = torch.load(A_file)
trainedModel = OptNet(2, 0.2)
trainedModel.A.data = trainedA
pred = trainedModel(Variable(testX[0].unsqueeze(0))).data.squeeze()
print("First test example input (unsolved Sudoku board): ", decode_onehot(testX[0]))
print("First test example output (TRUE solved Sudoku board): ", decode_onehot(testY[0]))
print("First test example prediction: ", decode_onehot(pred))
```
We did it! With just a few lines of code we've trained
an intuitive model that solves Sudoku.
As a closing note, what does the trained `A` matrix look like?
With this formulation, we don't expect it to be the nice,
sparse coefficient matrix encoding the rules we typically
associate with Sudoku, since any row-transformed
version of this matrix is an equally valid solution:
```
trainedA
```
```
# Import all libraries
# pandas for data handling
import pandas as pd
# PostgreSQL access
from sqlalchemy import create_engine
import psycopg2
# charting
from matplotlib import pyplot as plt
from matplotlib import style
# base path handling
import os
import io
# PDF output
from fpdf import FPDF
# base64 encoding of the charts
import base64
# Excel output
import xlsxwriter
# Upload the CSV data into PostgreSQL
def uploadToPSQL(columns, table, filePath, engine):
    # Read the CSV
    df = pd.read_csv(
        os.path.abspath(filePath),
        names=columns,
        keep_default_na=False
    )
    # Replace missing fields with empty strings
    # (fillna returns a new frame, so the result must be assigned back)
    df = df.fillna('')
    # Drop the columns that are not used
    del df['kategori']
    del df['jenis']
    del df['pengiriman']
    del df['satuan']
    # Copy the data from the CSV into PostgreSQL
    df.to_sql(
        table,
        engine,
        if_exists='replace'
    )
    # Report whether any rows were uploaded: True on success, False otherwise
    if len(df) == 0:
        return False
    else:
        return True
# Build the charts from data pulled out of the database,
# ordered by date and limited; this function also calls makeExcel and makePDF
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    # Test the database connection
    try:
        # Connect to the database
        connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
        cursor = connection.cursor()
        # Fetch the rows from the table defined below, ordered by date;
        # a LIMIT keeps the query from fetching too much data
        postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
        cursor.execute(postgreSQL_select_Query)
        mobile_records = cursor.fetchall()
        uid = []
        lengthx = []
        lengthy = []
        # Loop over the fetched rows and collect them into the lists above
        for row in mobile_records:
            uid.append(row[0])
            lengthx.append(row[1])
            if row[2] == "":
                lengthy.append(float(0))
            else:
                lengthy.append(float(row[2]))
        # Build the charts
        # Bar chart
        style.use('ggplot')
        fig, ax = plt.subplots()
        # Plot the row ids from the database against the totals
        ax.bar(uid, lengthy, align='center')
        # Chart title
        ax.set_title(judul)
        ax.set_ylabel('Total')
        ax.set_xlabel('Tanggal')
        ax.set_xticks(uid)
        # The dates fetched from the database become the tick labels
        ax.set_xticklabels((lengthx))
        b = io.BytesIO()
        # Save the chart as PNG
        plt.savefig(b, format='png', bbox_inches="tight")
        # Convert the PNG chart to base64
        barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        # Show the chart
        plt.show()
        # Line chart
        # Plot the data from the database
        plt.plot(lengthx, lengthy)
        plt.xlabel('Tanggal')
        plt.ylabel('Total')
        # Chart title
        plt.title(judul)
        plt.grid(True)
        l = io.BytesIO()
        # Save the chart as PNG
        plt.savefig(l, format='png', bbox_inches="tight")
        # Convert the PNG chart to base64
        lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        # Show the chart
        plt.show()
        # Pie chart
        # Chart title
        plt.title(judul)
        # Plot the data from the database
        plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                shadow=True, startangle=180)
        plt.axis('equal')
        p = io.BytesIO()
        # Save the chart as PNG
        plt.savefig(p, format='png', bbox_inches="tight")
        # Convert the PNG chart to base64
        pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        # Show the chart
        plt.show()
        # Read the CSV again to get the header fields used for the Excel and PDF tables
        header = pd.read_csv(
            os.path.abspath(filePath),
            names=columns,
            keep_default_na=False
        )
        # Replace missing fields and drop the columns that are not used
        header = header.fillna('')
        del header['tanggal']
        del header['total']
        # Build the Excel file
        makeExcel(mobile_records, header, name, limit, basePath)
        # Build the PDF file
        makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    # If the database connection fails, print the error here
    except (Exception, psycopg2.Error) as error:
        print(error)
    # Close the connection
    finally:
        if(connection):
            cursor.close()
            connection.close()
# makeExcel turns the database rows into an Excel "table F2" report,
# using the xlsxwriter plugin
def makeExcel(datarow, dataheader, name, limit, basePath):
    # Create the Excel file
    workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorHargaInflasi/excel/'+name+'.xlsx')
    # Add a worksheet to it
    worksheet = workbook.add_worksheet('sheet1')
    # Cell formats: borders everywhere, bold for the header row
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    # Turn the data into plain lists
    data=list(datarow)
    isihead=list(dataheader.values)
    header = []
    body = []
    # Loop over the rows and collect the header and body cells
    for rowhead in dataheader:
        header.append(str(rowhead))
    for rowhead2 in datarow:
        header.append(str(rowhead2[1]))
    for rowbody in isihead[1]:
        body.append(str(rowbody))
    for rowbody2 in data:
        body.append(str(rowbody2[2]))
    # Write the collected cells into the worksheet rows and columns
    for col_num, data in enumerate(header):
        worksheet.write(0, col_num, data, row1)
    for col_num, data in enumerate(body):
        worksheet.write(1, col_num, data, row2)
    # Close the Excel file
    workbook.close()
# makePDF turns the database rows into a PDF "table F2" report,
# using the fpdf plugin
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    # Page setup: A4 paper in landscape orientation
    pdf = FPDF('L', 'mm', [210,297])
    # Add a page to the PDF
    pdf.add_page()
    # Font size and padding for the title
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    # Write the title into the PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    # Font size and padding for the subtitle
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    # Write the subtitle into the PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    # Draw a line under the subtitle
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    # Font size and padding for the metadata table
    pdf.set_font('Times','',10.0)
    # Take the PDF header data prepared by the caller
    datahead=list(dataheader.values)
    pdf.set_font('Times','B',12.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    # Build the metadata table in the PDF from the values passed in
    pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
    pdf.ln(2*th1)
    # Padding
    pdf.set_xy(17.0, 75.0)
    # Font size and padding for the data table
    pdf.set_font('Times','B',11.0)
    data=list(datarow)
    epw = pdf.w - 2*pdf.l_margin
    col_width = epw/(lengthPDF+1)
    # Padding
    pdf.ln(0.5)
    th = pdf.font_size
    # Write the header row (the dates) into the PDF
    pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    # Write the data row into the PDF
    pdf.set_font('Times','B',10.0)
    pdf.set_font('Arial','',9)
    # note: `negara` is read from module scope; it is not a parameter of this function
    pdf.cell(50, 2*th, negara, border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
    pdf.ln(2*th)
    # Decode the charts from base64 and save them as PNGs in the directories below
    # Bar chart
    bardata = base64.b64decode(bar)
    barname = basePath+'jupyter/BLOOMBERG/SektorHargaInflasi/img/'+name+'-bar.png'
    with open(barname, 'wb') as f:
        f.write(bardata)
    # Line chart
    linedata = base64.b64decode(line)
    linename = basePath+'jupyter/BLOOMBERG/SektorHargaInflasi/img/'+name+'-line.png'
    with open(linename, 'wb') as f:
        f.write(linedata)
    # Pie chart
    piedata = base64.b64decode(pie)
    piename = basePath+'jupyter/BLOOMBERG/SektorHargaInflasi/img/'+name+'-pie.png'
    with open(piename, 'wb') as f:
        f.write(piedata)
    # Font size and padding for the chart images
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    widthcol = col/3
    # Place the saved chart images into the PDF
    pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)
    pdf.ln(2*th)
    # Write the PDF file
    pdf.output(basePath+'jupyter/BLOOMBERG/SektorHargaInflasi/pdf/'+name+'.pdf', 'F')
# Define the variables below before they are passed to the functions above:
# uploadToPSQL runs first; if it succeeds, makeChart is called,
# and makeChart in turn calls makeExcel and makePDF
# Column names matching the CSV fields
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
# Output file name
name = "SektorHargaInflasi3_3"
# Database connection settings
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_SektorHargaInflasi"
table = name.lower()
# Title and subtitle for the PDF and Excel reports
judul = "Data Sektor Harga Inflasi"
subjudul = "Badan Perencanaan Pembangunan Nasional"
# LIMIT for the database SELECT
limitdata = int(8)
# Country name shown in the Excel and PDF reports
negara = "Indonesia"
# Base directory
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# CSV file path
filePath = basePath + 'data mentah/BLOOMBERG/SektorHargaInflasi/' + name + '.csv'
# Database connection
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
# Upload the CSV into PostgreSQL
checkUpload = uploadToPSQL(columns, table, filePath, engine)
# If the upload succeeded, build the charts; otherwise print an error message
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
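One note on the SELECT above: the `LIMIT` value is spliced into the SQL string by concatenation. Database drivers support bound parameters instead, which avoids quoting bugs and SQL injection. A minimal sketch with the stdlib `sqlite3` driver (psycopg2 follows the same idea with `%s` placeholders; the table and values here are made up for illustration):

```python
import sqlite3

# In-memory toy database standing in for the PostgreSQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (tanggal TEXT, total REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [("2020-01-01", 1.5), ("2020-01-02", 2.5), ("2020-01-03", 3.5)])
# The LIMIT travels as a bound parameter, not via string concatenation.
rows = conn.execute(
    "SELECT * FROM prices ORDER BY tanggal ASC LIMIT ?", (2,)
).fetchall()
print(rows)  # [('2020-01-01', 1.5), ('2020-01-02', 2.5)]
conn.close()
```

Table and column names cannot be bound as parameters; for those, psycopg2 offers `psycopg2.sql.Identifier` for safe composition.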
# Creating, querying, modifying, extending, deleting, and using Index objects
To use pandas well, you must understand one of its core objects: the **index**.
- An index is similar to a tuple: its values cannot be reassigned;
- during whole-table operations it drives automatic alignment, a major feature that sets pandas apart from other data-processing libraries;
- a multi-level index helps reshape tables, e.g. into pivot tables.
So this chapter deserves careful study.
```
import numpy as np
import pandas as pd
```
# 1. Single-level index
## 1.1 Creation
##### `pd.Index(data, dtype=Object, name=None)`
- data: a one-dimensional list
- dtype: the element type of the index, object by default
- name: the name of the index, similar to a column name
```
data = ['a','b','c']
index = pd.Index(data, name = 'name1')
index
```
**As the return value shows, an index is made up of three parts, each of which can be inspected separately.**
```
index.name
index.values
index.dtype
```
## 1.2 Querying
- Querying works exactly like `.iloc[]` on a one-dimensional ndarray or Series.
```
index[0] # scalar: returns a value
index[0:2] # slice: returns an Index
index[[0, 2]] # list: returns an Index
mask = [True, False, True] # mask: returns an Index
index[mask]
```
## 1.3 Renaming the index
Although the values of an index cannot be modified, its name can be.
### 1.3.1 Assign directly
```
index.name = 'new_name'
index
index.set_names('new_name') # returns a new Index; the original is unchanged
```
## 1.4 Extending
### 1.4.1 Insert one element by position
##### `Index.insert(loc, value)`
- loc: position
- value: the value to insert
```
index
index.insert(1,'d')
```
### 1.4.2 Append multiple elements at the end
##### `Index.append(other)`
- other: another Index object
```
index1 = index.copy()
index1
index1.append(index)
```
### 1.4.3 Union
##### `Index.union(other)`
```
index2 = pd.Index(['b','c','d'])
index2
index1.union(index2)
```
## 1.5 Deleting
### 1.5.1 Delete one element by position
##### `Index.delete(loc)`
- loc: position
```
index1.delete(1)
```
### 1.5.2 Drop multiple elements by label
##### `Index.drop(labels)`
- labels: a list of labels
```
index1.drop(['a','b'])
```
### 1.5.3 Intersection
##### `Index.intersection(other)`
```
index1.intersection(index2)
```
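Putting the union and intersection methods above together with the closely related `difference` (a small self-contained sketch):

```python
import pandas as pd

i1 = pd.Index(['a', 'b', 'c'])
i2 = pd.Index(['b', 'c', 'd'])
print(list(i1.union(i2)))         # ['a', 'b', 'c', 'd']
print(list(i1.intersection(i2)))  # ['b', 'c']
print(list(i1.difference(i2)))    # ['a'] -- elements of i1 not in i2
```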
# 2. Multi-level index
## 2.1 Creation
##### `pd.MultiIndex.from_tuples(labels, names=None)`
- labels: a list of tuples or lists;
- names: a list of level names.
```
# data = [['a','one'],['a','two'],['b','one']]
data = [('a','one'),('a','two'),('b','one')]
index = pd.MultiIndex.from_tuples(data, names=['name1','name2'])
index
s = pd.Series([1,2,3], index = index)
s
```
## 2.2 Querying
- Querying works exactly the same as for a single-level index.
```
index[0] # scalar: returns a value
index[0:2] # slice: returns a MultiIndex
index[[0,2]] # list: returns a MultiIndex
mask = [True, False, True] # mask: returns a MultiIndex
index[mask]
```
##### Get the labels of one level: `MultiIndex.get_level_values(level)`
- level: int, the level to select
```
index.get_level_values(0)
index.get_level_values(1)
```
## 2.3 Modifying
### 2.3.1 Rename the index
##### `MultiIndex.set_names(names, level=None, inplace=False)`
- names: the name(s) to set; may be a list of names;
- level: for a multi-level index, the level(s) to modify; may be a list matching names;
- inplace: whether to modify in place.
```
index.set_names('new_name_1',level=0)
```
### 2.3.2 Reorder the index levels
##### `MultiIndex.swaplevel(i=-2, j=-1)`
- swaps the order of level i and level j
```
index.swaplevel()
```
##### `Series.swaplevel(i=-2, j=-1)`
##### `DataFrame.swaplevel(i=-2, j=-1, axis=1)`
- axis: 0 for the row index, 1 for the column index.
These two methods are the more practical ones.
```
s.swaplevel()
columns = index.copy()
columns.set_names( names = ['name3','name4'], level = [0,1], inplace = True) # the column index copies the row index; only the names differ
df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], index= index, columns = columns)
df
df.swaplevel(axis=1) # swap the order of the column index levels
```
# 3. Working with a multi-level index
When inspecting the values, the levels of a multi-level index can be used separately.
```
df1 = df.copy()
df1
```
**An empty spot in a displayed multi-level index does not mean a missing value; it is display shorthand meaning the label is the same as the one above.**
## 3.1 The outer level
**Remember:**
- For both Series and DataFrame, the outer level can be used directly, as if it were the only level;
- **the usage is exactly the same as the query/modify/extend/delete methods covered in part two**.
### 3.1.1 []
Shortcut operation; the same four usages as before.
```
df1
df1['b'] # column, outer level
df1[['a','b']] # columns, outer level
df1[0:2] # rows, outer level
mask = [True, False, True] # rows, outer level
df1[mask]
```
### 3.1.2 .loc[]
Rows and columns are both selected in label form.
**The examples below all use the first axis; the second axis works analogously.**
```
df1.loc['a','b'] # single row label 'a'
df1.loc['a':'b', 'b'] # range 'a':'b'
df1.loc[['a','b'], 'b'] # list
mask = [True, False, True] # mask
df1.loc[mask,'b']
```
### 3.1.3 .iloc[]
This one is simple: ignore the labels entirely and select rows and columns by position.
```
df1.iloc[0,0:2]
df1.iloc[0:2, 0:2]
df1.iloc[[0,1],0:2]
mask = [True, False, True]
df1.iloc[mask,0:2]
```
## 3.2 The inner level
- **The inner level cannot be used directly: you must select on the outer level first, then the inner; using it directly raises an error;**
- the inner level only supports the single-label form; other forms raise errors.
### 3.2.1 [ , ]
Shortcut operation with a single usage: selecting one column.
```
df1
df1['a','one'] # select one column: outer single column label first, then inner single column label; all other forms raise errors
```
### 3.2.2 .loc[ , ]
```
df1.loc['a','one'] # select one row: outer single row label first, then inner single label; all other forms raise errors
```
### 3.2.3 .iloc[ , ]
This method is unaffected: `.iloc[]` ignores labels and works purely by position, so it behaves exactly as in section 3.1.3 for the outer level.
## 3.3 Direct selection with xs
Suitable for selecting within a single level; it cannot operate on rows and columns at the same time.
##### `Series.xs(key, level=None, drop_level=True)`
##### `DataFrame.xs(key, axis=0, level=None, drop_level=True)`
- key: the index value(s) to select;
- axis: 0 for the row index, 1 for the column index;
- level: the index level;
- drop_level: True or False; whether to drop the level used for selection from the result (dropped by default).
```
df1 = df.copy()
df1
df1.xs('one', axis=0, level=1) # level 1 of the row index: two rows match
df1.xs('two', axis=1, level=1) # level 1 of the column index: one column matches
```
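Besides `xs`, `.loc` also accepts an `(outer, inner)` tuple for a multi-level index; a small sketch on the same shape of data as above:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('a', 'one'), ('a', 'two'), ('b', 'one')], names=['name1', 'name2'])
s = pd.Series([1, 2, 3], index=idx)
# xs selects on the inner level directly...
print(s.xs('one', level=1).tolist())  # [1, 3]: both rows with inner label 'one'
# ...while .loc with a full (outer, inner) tuple selects a single element:
print(s.loc[('a', 'two')])            # 2
```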
```
%matplotlib inline
import numpy as np
import pandas as pd
import os
import sys
sys.path.append('..')
import geopandas as gpd
from shapely.geometry import Point
from shapely.geometry import LineString
from shapely.geometry import MultiPoint
GAS_STATIONS_PATH = os.path.join('..', 'data', 'raw', 'input_data', 'Eingabedaten', 'Tankstellen.csv')
gas_stations_df = pd.read_csv(GAS_STATIONS_PATH, sep=';', names=['id', 'Name', 'Company', 'Street', 'House_Number', 'Postalcode', 'City', 'Lat', 'Long'],index_col='id')
gas_stations_df['Position'] = gas_stations_df.apply(lambda row: Point(row.Long, row.Lat), axis=1)
gas_stations_df.head(3)
positions = gpd.GeoSeries(gas_stations_df['Position'])
```
### Bounding Box Approach
```
def get_gas_stations_in_area(bounding_box):
    """bounding box is a (minx, miny, maxx, maxy) tuple"""  # x = long, y = lat
    min_long, min_lat, max_long, max_lat = bounding_box
    assert min_long < max_long
    assert min_lat < max_lat
    return set(positions.cx[min_long:max_long,min_lat:max_lat].index)

def get_gas_stations_in_boxes(bounding_boxes):
    ids = [get_gas_stations_in_area(box) for box in bounding_boxes]
    return list(set.union(*ids))
boxes_potsdam_berlin = [((52.34416775186111, 13.092272842330203), (52.35864093016666, 13.187254280776756)),((52.35864093016666, 13.044782123106984), (52.37311410847222, 13.187254280776756)),((52.37311410847222, 13.021036763495317), (52.38758728677778, 13.210999640388309)),((52.38758728677778, 13.021036763495317), (52.41653364338889, 13.234744999999975)),((52.41653364338889, 13.139763561553536), (52.431006821694446, 13.234744999999975)),((52.431006821694446, 13.16350892116509), (52.44548, 13.258490359611642)),((52.44548, 13.16350892116509), (52.459953178305554, 13.282235719223195)),((52.459953178305554, 13.16350892116509), (52.474426356611126, 13.305981078834861)),((52.474426356611126, 13.187254280776756), (52.48889953491667, 13.305981078834861)),((52.48889953491667, 13.210999640388309), (52.503372713222234, 13.424707876892967)),((52.503372713222234, 13.234744999999975), (52.53231906983335, 13.448453236504633)),((52.53231906983335, 13.35347179805808), (52.5467922481389, 13.448453236504633))]
boxes_small_potsdam_berlin = [((52.380350697625, 13.044782123106984), (52.40206046508334, 13.068527482718537)),((52.37311410847222, 13.068527482718537), (52.40206046508334, 13.080400162524484)),((52.36587751931945, 13.080400162524484), (52.40206046508334, 13.104145522136037)),((52.35864093016666, 13.104145522136037), (52.394823875930555, 13.11601820194187)),((52.35864093016666, 13.11601820194187), (52.38758728677778, 13.127890881747703)),((52.35864093016666, 13.127890881747703), (52.394823875930555, 13.139763561553536)),((52.35864093016666, 13.139763561553536), (52.40206046508334, 13.16350892116509)),((52.35864093016666, 13.16350892116509), (52.41653364338889, 13.175381600970923)),((52.380350697625, 13.175381600970923), (52.45271658915278, 13.187254280776756)),((52.380350697625, 13.187254280776756), (52.459953178305554, 13.19912696058259)),((52.394823875930555, 13.19912696058259), (52.467189767458336, 13.210999640388309)),((52.431006821694446, 13.210999640388309), (52.4816629457639, 13.222872320194142)),((52.43824341084722, 13.222872320194142), (52.48889953491667, 13.234744999999975)),((52.44548, 13.234744999999975), (52.49613612406946, 13.246617679805809)),((52.459953178305554, 13.246617679805809), (52.51060930237501, 13.258490359611642)),((52.467189767458336, 13.258490359611642), (52.517845891527784, 13.270363039417362)),((52.474426356611126, 13.270363039417362), (52.52508248068057, 13.282235719223195)),((52.48889953491667, 13.282235719223195), (52.52508248068057, 13.294108399029028)),((52.49613612406946, 13.294108399029028), (52.52508248068057, 13.305981078834861)),((52.503372713222234, 13.305981078834861), (52.52508248068057, 13.377217157669747)),((52.503372713222234, 13.377217157669747), (52.53231906983335, 13.412835197087134)),((52.51060930237501, 13.412835197087134), (52.53231906983335, 13.424707876892967))]
def js_box_2_python_box(js_boxes):
    return [(min_long, min_lat, max_long, max_lat) for ((min_lat,min_long),(max_lat,max_long)) in js_boxes]
boxes_potsdam_berlin_nice = js_box_2_python_box(boxes_potsdam_berlin)
res = get_gas_stations_in_boxes(boxes_potsdam_berlin_nice)
gpd.GeoSeries(gas_stations_df.loc[res]['Position']).plot()
boxes_potsdam_berlin_nice = js_box_2_python_box(boxes_small_potsdam_berlin)
res = get_gas_stations_in_boxes(boxes_potsdam_berlin_nice)
gpd.GeoSeries(gas_stations_df.loc[res]['Position']).plot();
```
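For reference, the `.cx` selection above amounts to a plain bounding-box filter; a dependency-free sketch over hypothetical `(longitude, latitude)` pairs keyed by station id (the station data here is made up for illustration):

```python
def stations_in_box(stations, bounding_box):
    """stations: dict id -> (long, lat); bounding_box: (minx, miny, maxx, maxy)."""
    min_long, min_lat, max_long, max_lat = bounding_box
    return {sid for sid, (lon, lat) in stations.items()
            if min_long <= lon <= max_long and min_lat <= lat <= max_lat}

# Hypothetical stations: 1 and 2 near Berlin/Potsdam, 3 far outside the box.
stations = {1: (13.06, 52.39), 2: (13.30, 52.51), 3: (12.00, 50.00)}
print(sorted(stations_in_box(stations, (13.0, 52.3, 13.4, 52.6))))  # [1, 2]
```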
### Buffer Approach
```
path_potsdam_berlin = [(52.390530000000005, 13.064540000000001),(52.39041, 13.065890000000001),(52.39025, 13.06723),(52.39002000000001, 13.068810000000001),(52.389970000000005, 13.069350000000002),(52.38998, 13.06948),(52.389860000000006, 13.07028),(52.38973000000001, 13.07103),(52.38935000000001, 13.07352),(52.3892, 13.07463),(52.38918, 13.075120000000002),(52.389210000000006, 13.07553),(52.389300000000006, 13.0759),(52.3894, 13.076130000000001),(52.389520000000005, 13.07624),(52.38965, 13.07638),(52.389880000000005, 13.0767),(52.390100000000004, 13.077110000000001),(52.390330000000006, 13.077770000000001),(52.390440000000005, 13.078660000000001),(52.39052, 13.079400000000001),(52.390570000000004, 13.08004),(52.39056000000001, 13.08037),(52.390550000000005, 13.0806),(52.390530000000005, 13.080990000000002),(52.390420000000006, 13.083100000000002),(52.390440000000005, 13.083400000000001),(52.39038000000001, 13.083430000000002),(52.39011000000001, 13.0836),(52.38853, 13.084660000000001),(52.38801, 13.0851),(52.38774, 13.085410000000001),(52.38754, 13.085730000000002),(52.38729000000001, 13.086300000000001),(52.38689, 13.087610000000002),(52.386500000000005, 13.088960000000002),(52.38611, 13.09026),(52.38602, 13.090700000000002),(52.3858, 13.09121),(52.385290000000005, 13.092300000000002),(52.38477, 13.09331),(52.384040000000006, 13.094650000000001),(52.383500000000005, 13.095670000000002),(52.38302, 13.096580000000001),(52.37538000000001, 13.110970000000002),(52.37485, 13.112020000000001),(52.37471000000001, 13.112340000000001),(52.37436, 13.113220000000002),(52.373990000000006, 13.114300000000002),(52.37379000000001, 13.11494),(52.373580000000004, 13.11578),(52.37304, 13.11809),(52.37266, 13.119740000000002),(52.37252, 13.120540000000002),(52.37238000000001, 13.121540000000001),(52.37227000000001, 13.122710000000001),(52.37225, 13.12311),(52.372220000000006, 13.12376),(52.372220000000006, 13.124830000000001),(52.372260000000004, 
13.128100000000002),(52.37229000000001, 13.131340000000002),(52.37234, 13.1369),(52.37232, 13.13785),(52.37228, 13.13859),(52.37220000000001, 13.13958),(52.37216, 13.140500000000001),(52.372150000000005, 13.141950000000001),(52.37218000000001, 13.14399),(52.37228, 13.147120000000001),(52.3723, 13.14906),(52.37232, 13.151140000000002),(52.37228, 13.15149),(52.37225, 13.151850000000001),(52.37219, 13.152070000000002),(52.372130000000006, 13.152210000000002),(52.372040000000005, 13.152360000000002),(52.371930000000006, 13.15248),(52.37181, 13.152560000000001),(52.37167, 13.152600000000001),(52.37153000000001, 13.152600000000001),(52.3714, 13.152550000000002),(52.371300000000005, 13.15248),(52.3712, 13.152370000000001),(52.37106000000001, 13.152130000000001),(52.37098, 13.151840000000002),(52.37095000000001, 13.151560000000002),(52.370960000000004, 13.15136),(52.371, 13.151090000000002),(52.37109, 13.150830000000001),(52.3712, 13.15066),(52.37129, 13.15056),(52.371460000000006, 13.15046),(52.37163, 13.150430000000002),(52.37181, 13.150400000000001),(52.37322, 13.150360000000001),(52.373670000000004, 13.150350000000001),(52.37375, 13.15032),(52.37451, 13.150310000000001),(52.375710000000005, 13.15028),(52.37670000000001, 13.150250000000002),(52.376960000000004, 13.150250000000002),(52.37715000000001, 13.150220000000001),(52.37742, 13.150160000000001),(52.377720000000004, 13.15013),(52.378040000000006, 13.150120000000001),(52.37812, 13.15009),(52.37825, 13.15004),(52.378800000000005, 13.15004),(52.379270000000005, 13.15009),(52.37962, 13.150150000000002),(52.380010000000006, 13.150240000000002),(52.380370000000006, 13.150360000000001),(52.380990000000004, 13.150620000000002),(52.38165000000001, 13.15098),(52.383500000000005, 13.152170000000002),(52.38440000000001, 13.15277),(52.3858, 13.153670000000002),(52.387080000000005, 13.1545),(52.38745, 13.154760000000001),(52.38768, 13.15496),(52.38794000000001, 13.155190000000001),(52.388380000000005, 
13.155660000000001),(52.38891, 13.156350000000002),(52.38927, 13.156920000000001),(52.38965, 13.15755),(52.38984000000001, 13.15792),(52.39011000000001, 13.158520000000001),(52.390460000000004, 13.15943),(52.39074, 13.160380000000002),(52.392900000000004, 13.169300000000002),(52.39408, 13.1742),(52.39439, 13.175370000000001),(52.394830000000006, 13.176800000000002),(52.395320000000005, 13.17805),(52.39578, 13.179070000000001),(52.39621, 13.17993),(52.39678000000001, 13.18092),(52.39714000000001, 13.18148),(52.3975, 13.181970000000002),(52.398340000000005, 13.183000000000002),(52.39922000000001, 13.184000000000001),(52.399530000000006, 13.18438),(52.40012, 13.18504),(52.400940000000006, 13.185910000000002),(52.40171, 13.186750000000002),(52.402260000000005, 13.187420000000001),(52.403830000000006, 13.18917),(52.407830000000004, 13.193690000000002),(52.40982, 13.19593),(52.410230000000006, 13.19631),(52.41085, 13.19678),(52.411280000000005, 13.197030000000002),(52.41158000000001, 13.197180000000001),(52.41223, 13.197420000000001),(52.412620000000004, 13.197510000000001),(52.413030000000006, 13.19757),(52.413880000000006, 13.19757),(52.41407, 13.197560000000001),(52.41452, 13.197470000000001),(52.41536000000001, 13.19729),(52.41561, 13.197210000000002),(52.416720000000005, 13.19697),(52.417570000000005, 13.196760000000001),(52.41827000000001, 13.196610000000002),(52.42042000000001, 13.196130000000002),(52.4217, 13.195850000000002),(52.422740000000005, 13.19561),(52.423030000000004, 13.195500000000001),(52.42322000000001, 13.195390000000002),(52.423410000000004, 13.195260000000001),(52.42360000000001, 13.195120000000001),(52.42381, 13.194930000000001),(52.42409000000001, 13.194640000000001),(52.42443, 13.194170000000002),(52.424820000000004, 13.1935),(52.425160000000005, 13.19293),(52.42549, 13.192450000000001),(52.425720000000005, 13.192160000000001),(52.42607, 13.191820000000002),(52.426300000000005, 13.191640000000001),(52.42649, 13.19152),(52.42685, 
13.191350000000002),(52.427310000000006, 13.191230000000001),(52.427530000000004, 13.191210000000002),(52.427890000000005, 13.191230000000001),(52.42887, 13.191460000000001),(52.43121000000001, 13.19204),(52.43244000000001, 13.192340000000002),(52.43292, 13.19246),(52.433400000000006, 13.1926),(52.43365000000001, 13.19269),(52.43403000000001, 13.192870000000001),(52.434470000000005, 13.193150000000001),(52.43478, 13.19339),(52.43506000000001, 13.193650000000002),(52.435340000000004, 13.19396),(52.43573000000001, 13.194440000000002),(52.43797000000001, 13.197270000000001),(52.438610000000004, 13.198080000000001),(52.44021000000001, 13.2001),(52.44169, 13.20198),(52.44489, 13.206010000000001),(52.446180000000005, 13.207640000000001),(52.45031, 13.212860000000001),(52.47092000000001, 13.238930000000002),(52.472350000000006, 13.240730000000001),(52.47289000000001, 13.24136),(52.474680000000006, 13.243440000000001),(52.47838, 13.247610000000002),(52.48109, 13.250670000000001),(52.48225000000001, 13.25201),(52.482800000000005, 13.2527),(52.48602, 13.25679),(52.48906, 13.260610000000002),(52.491670000000006, 13.26392),(52.49271, 13.26524),(52.49497, 13.268040000000001),(52.495160000000006, 13.268360000000001),(52.495760000000004, 13.26917),(52.496280000000006, 13.26984),(52.497170000000004, 13.27105),(52.497840000000004, 13.27194),(52.49857, 13.272870000000001),(52.49895000000001, 13.273460000000002),(52.49916, 13.273930000000002),(52.49929, 13.27434),(52.499390000000005, 13.274840000000001),(52.499460000000006, 13.275440000000001),(52.49949, 13.275970000000001),(52.49956, 13.277550000000002),(52.49963, 13.27838),(52.49969, 13.278830000000001),(52.499770000000005, 13.27918),(52.499900000000004, 13.279630000000001),(52.500060000000005, 13.28002),(52.500220000000006, 13.280330000000001),(52.50027000000001, 13.28035),(52.500370000000004, 13.28049),(52.50054, 13.280690000000002),(52.5007, 13.28082),(52.50085000000001, 13.280880000000002),(52.501020000000004, 
13.2809),(52.50117, 13.280880000000002),(52.50155, 13.280740000000002),(52.50173, 13.280690000000002),(52.501960000000004, 13.28068),(52.502210000000005, 13.280780000000002),(52.502390000000005, 13.28086),(52.503310000000006, 13.28194),(52.50368, 13.282330000000002),(52.503930000000004, 13.282520000000002),(52.50423000000001, 13.28269),(52.504560000000005, 13.28279),(52.50522, 13.282820000000001),(52.50553000000001, 13.28284),(52.50583, 13.282890000000002),(52.50598, 13.282940000000002),(52.506350000000005, 13.283100000000001),(52.506620000000005, 13.28326),(52.508250000000004, 13.284370000000001),(52.509620000000005, 13.28527),(52.51070000000001, 13.28592),(52.511100000000006, 13.286100000000001),(52.511210000000005, 13.286150000000001),(52.51158, 13.286230000000002),(52.511700000000005, 13.286380000000001),(52.511810000000004, 13.286420000000001),(52.51239, 13.28658),(52.512570000000004, 13.28668),(52.512800000000006, 13.28687),(52.5129, 13.286890000000001),(52.51297, 13.286890000000001),(52.51299, 13.28706),(52.51301, 13.28738),(52.51308, 13.28842),(52.51274, 13.288520000000002),(52.51194, 13.288760000000002),(52.511300000000006, 13.288960000000001),(52.510560000000005, 13.289200000000001),(52.510380000000005, 13.289240000000001),(52.51043000000001, 13.289950000000001),(52.510510000000004, 13.291240000000002),(52.51066, 13.293750000000001),(52.51122, 13.30202),(52.51147, 13.30563),(52.51184000000001, 13.31169),(52.512080000000005, 13.315150000000001),(52.51239, 13.320010000000002),(52.51241, 13.320640000000001),(52.51234, 13.32089),(52.512280000000004, 13.320950000000002),(52.51218, 13.321090000000002),(52.51207, 13.32136),(52.51203, 13.3215),(52.51202000000001, 13.321800000000001),(52.51203, 13.322030000000002),(52.512060000000005, 13.322260000000002),(52.512150000000005, 13.322560000000001),(52.512280000000004, 13.32277),(52.512350000000005, 13.322840000000001),(52.51240000000001, 13.322880000000001),(52.51249000000001, 13.323070000000001),(52.512530000000005, 
13.32314),(52.512550000000005, 13.32319),(52.512600000000006, 13.32333),(52.51263, 13.32342),(52.51265000000001, 13.323550000000001),(52.512950000000004, 13.32801),(52.513180000000006, 13.33182),(52.513470000000005, 13.33604),(52.5142, 13.346560000000002),(52.51433, 13.348690000000001),(52.51429, 13.34889),(52.51415, 13.349290000000002),(52.51404, 13.349480000000002),(52.513960000000004, 13.349680000000001),(52.51393, 13.349810000000002),(52.51391, 13.350100000000001),(52.51393, 13.35035),(52.513980000000004, 13.350570000000001),(52.514050000000005, 13.350740000000002),(52.514190000000006, 13.350950000000001),(52.51424, 13.350990000000001),(52.51444000000001, 13.351400000000002),(52.51453000000001, 13.351650000000001),(52.5146, 13.352200000000002),(52.51512, 13.36029),(52.51549000000001, 13.36617),(52.51567000000001, 13.369250000000001),(52.515950000000004, 13.37339),(52.51612, 13.376000000000001),(52.51615, 13.376740000000002),(52.51603000000001, 13.37682),(52.51596000000001, 13.376920000000002),(52.51585000000001, 13.37719),(52.51578000000001, 13.37733),(52.515710000000006, 13.37742),(52.515600000000006, 13.37747),(52.515480000000004, 13.37747),(52.51491000000001, 13.37738),(52.51458, 13.377360000000001),(52.514630000000004, 13.378250000000001),(52.514680000000006, 13.379040000000002),(52.51485, 13.379980000000002),(52.515150000000006, 13.381620000000002),(52.51521, 13.3823),(52.515350000000005, 13.38447),(52.515460000000004, 13.386030000000002),(52.51586, 13.38597),(52.51628, 13.385900000000001),(52.51668, 13.385860000000001),(52.51675, 13.38733),(52.51682, 13.388470000000002),(52.51688000000001, 13.3892),(52.51690000000001, 13.389650000000001),(52.51699000000001, 13.39024),(52.517010000000006, 13.3907),(52.51711, 13.392230000000001),(52.51717000000001, 13.392970000000002),(52.51724, 13.39333),(52.51731, 13.39413),(52.517340000000004, 13.394860000000001),(52.517430000000004, 13.39628),(52.517500000000005, 13.397430000000002),(52.51762, 
13.398850000000001),(52.517720000000004, 13.39943),(52.517790000000005, 13.39971),(52.517900000000004, 13.400020000000001),(52.51796, 13.400260000000001),(52.51803, 13.400490000000001),(52.518640000000005, 13.4021),(52.51887000000001, 13.40262),(52.519000000000005, 13.40295),(52.51939, 13.4037),(52.519890000000004, 13.404660000000002),(52.520010000000006, 13.404950000000001)]
pb = LineString([(x,y) for y,x in path_potsdam_berlin])
# 1 degree is about 111 km => a distance of 1 km ≈ 0.01 degrees
pb.buffer(.02)
m = MultiPoint(list(zip(gas_stations_df['Long'],gas_stations_df['Lat'])))
pb.buffer(.02).intersection(m)
```
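The comment in the cell above relies on the rough conversion 1 degree ≈ 111 km. A tiny helper (a sketch, not part of the original notebook) makes the buffer radius explicit:

```python
# Rough conversion: one degree of latitude is about 111 km,
# so 1 km is roughly 0.009 degrees (the notebook rounds this to 0.01).
def km_to_deg(km, km_per_deg=111.0):
    return km / km_per_deg

radius = km_to_deg(2)  # ~0.018 degrees, close to the .02 buffer used above
```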
Keep a data set that is indexed by position
```
def hash_pos(lat, long):
    return str(lat) + ':' + str(long)
gas_station_pos_index = gas_stations_df.copy()
gas_station_pos_index['str_pos'] = gas_station_pos_index.apply(lambda row: hash_pos(row.Lat,row.Long), axis=1)
gas_station_pos_index = gas_station_pos_index.reset_index().set_index('str_pos')
gas_stations_near_path = [hash_pos(point.y,point.x) for point in pb.buffer(.02).intersection(m) ]
gas_station_pos_index.loc[gas_stations_near_path]['id']
```
### Find the point on the path closest to a gas station
```
gas_stations = pb.buffer(.02).intersection(m)
gas_stations[0].union(pb)
def closest_point_on_path(path, point):
    return path.interpolate(path.project(point))

def length_on_line(path, point):
    return path.project(point, normalized=True)
closest_point_on_path(pb,gas_stations[0])
length_on_line(pb,gas_stations[0])
gas_stations[-1].union(pb)
MultiPoint([closest_point_on_path(pb,p) for p in gas_stations])
pb.length * 111
[length_on_line(pb,p) for p in gas_stations]
```
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
import os
!pip install wget
!apt-get install sox
!git clone https://github.com/NVIDIA/NeMo.git
os.chdir('NeMo')
!bash reinstall.sh
!pip install unidecode
```
# **SPEAKER RECOGNITION**
Speaker Recognition (SR) is a broad research area which solves two major tasks: speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). In this work, we focus on far-field, text-independent speaker recognition, where the identity of the speaker is based on how speech is spoken, not necessarily on what is being said. Typically such SR systems operate on unconstrained speech utterances,
which are converted into vectors of fixed length, called speaker embeddings. Speaker embeddings are also used in automatic speech recognition (ASR) and speech synthesis.
As the goal of most speaker-related systems is to get good speaker-level embeddings that help distinguish one speaker from others, we shall first train these embeddings in an end-to-end manner, optimizing the [QuartzNet](https://arxiv.org/abs/1910.10261) based encoder model on cross-entropy loss. We modify the original QuartzNet decoder to get these fixed-size embeddings irrespective of the length of the input audio. We employ a mean- and variance-based statistics pooling method to obtain these embeddings.
In this tutorial we shall first train these embeddings on speaker-related datasets and then get speaker embeddings from a pretrained network for a new dataset. Since Google Colab has very slow read-write speeds, please run this locally for training on [hi-mia](https://arxiv.org/abs/1912.01231).
We use the [get_hi-mia-data.py](https://github.com/NVIDIA/NeMo/blob/master/scripts/get_hi-mia_data.py) script to download the necessary files, extract them, and re-sample to 16 kHz any samples that are not already at 16 kHz. At the end, we also provide scripts to score these embeddings for a speaker-verification task like the hi-mia dataset.
```
data_dir = 'scripts/data/'
!mkdir $data_dir
# Download and process dataset. This will take a few moments...
!python scripts/get_hi-mia_data.py --data_root=$data_dir
```
After download and conversion, your `data` folder should contain directories with manifest files as:
* `data/<set>/train.json`
* `data/<set>/dev.json`
* `data/<set>/<set>_all.json`
For each set we also create utt2spk files; these files will later be used in PLDA training.
Each line in the manifest file describes a training sample - `audio_filepath` contains the path to the wav file, `duration` its duration in seconds, and `label` the speaker class label:
`{"audio_filepath": "<absolute path to dataset>/data/train/SPEECHDATA/wav/SV0184/SV0184_6_04_N3430.wav", "duration": 1.22, "label": "SV0184"}`
`{"audio_filepath": "<absolute path to dataset>/data/train/SPEECHDATA/wav/SV0184/SV0184_5_03_F2037.wav", "duration": 1.375, "label": "SV0184"}`
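Each manifest entry is one JSON object per line. A minimal sketch of writing and re-reading an entry (the path and duration below are placeholder values):

```python
import json

# Hypothetical helper: serialize one manifest entry as a JSON line.
def manifest_line(audio_filepath, duration, label):
    return json.dumps({"audio_filepath": audio_filepath,
                       "duration": duration,
                       "label": label})

line = manifest_line("/data/train/wav/SV0184/SV0184_6_04_N3430.wav", 1.22, "SV0184")
entry = json.loads(line)  # round-trips when the manifest is read back
```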
Import necessary packages
```
from ruamel.yaml import YAML
import nemo
import nemo.collections.asr as nemo_asr
import copy
from functools import partial
```
# Building Training and Evaluation DAGs with NeMo
Building a model using NeMo consists of
1. Instantiating the neural modules we need
2. specifying the DAG by linking them together.
In NeMo, the training and inference pipelines are managed by a NeuralModuleFactory, which takes care of checkpointing, callbacks, and logs, along with other details in training and inference. We set its log_dir argument to specify where our model logs and outputs will be written, and can set other training and inference settings in its constructor. For instance, if we were resuming training from a checkpoint, we would set the argument checkpoint_dir=`<path_to_checkpoint>`.
Along with logs in NeMo, you can optionally view the tensorboard logs with the create_tb_writer=True argument to the NeuralModuleFactory. By default all the tensorboard log files will be stored in {log_dir}/tensorboard, but you can change this with the tensorboard_dir argument. One can load tensorboard logs through tensorboard by running tensorboard --logdir=`<path_to_tensorboard dir>` in the terminal.
```
exp_name = 'quartznet3x2_hi-mia'
work_dir = './myExps/'
neural_factory = nemo.core.NeuralModuleFactory(
log_dir=work_dir+"/hi-mia_logdir/",
checkpoint_dir="./myExps/checkpoints/" + exp_name,
create_tb_writer=True,
random_seed=42,
tensorboard_dir=work_dir+'/tensorboard/',
)
```
Now that we have our neural module factory, we can specify our **neural modules and instantiate them**. Here, we load the parameters for each module from the configuration file.
```
from nemo.utils import logging
yaml = YAML(typ="safe")
with open('examples/speaker_recognition/configs/quartznet_spkr_3x2x512_xvector.yaml') as f:
    spkr_params = yaml.load(f)
sample_rate = spkr_params["sample_rate"]
time_length = spkr_params.get("time_length", 8)
logging.info("max time length considered for each file is {} sec".format(time_length))
```
Instantiate the train data_layer using config arguments. `labels = None` automatically creates output labels from the manifest files; if you would like to pass speaker names yourself, use the labels option. When instantiating the eval data_layer, we can pass labels to the class in order to match the same speaker output labels as in the training data layer. This comes in handy while training on multiple datasets with more than one manifest file.
```
train_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
train_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["train"])
del train_dl_params["train"]
del train_dl_params["eval"]
batch_size=64
data_layer_train = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=data_dir+'/train/train.json',
labels=None,
batch_size=batch_size,
time_length=time_length,
**train_dl_params,
)
eval_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
eval_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["eval"])
del eval_dl_params["train"]
del eval_dl_params["eval"]
data_layer_eval = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=data_dir+'/train/dev.json',
labels=data_layer_train.labels,
batch_size=batch_size,
time_length=time_length,
**eval_dl_params,
)
data_preprocessor = nemo_asr.AudioToMelSpectrogramPreprocessor(
sample_rate=sample_rate, **spkr_params["AudioToMelSpectrogramPreprocessor"],
)
encoder = nemo_asr.JasperEncoder(**spkr_params["JasperEncoder"],)
decoder = nemo_asr.JasperDecoderForSpkrClass(
feat_in=spkr_params["JasperEncoder"]["jasper"][-1]["filters"],
num_classes=data_layer_train.num_classes,
pool_mode=spkr_params["JasperDecoderForSpkrClass"]['pool_mode'],
emb_sizes=spkr_params["JasperDecoderForSpkrClass"]["emb_sizes"].split(","),
)
xent_loss = nemo_asr.CrossEntropyLossNM(weight=None)
```
The next step is to assemble our training DAG by specifying the inputs to each neural module.
```
audio_signal, audio_signal_len, label, label_len = data_layer_train()
processed_signal, processed_signal_len = data_preprocessor(input_signal=audio_signal, length=audio_signal_len)
encoded, encoded_len = encoder(audio_signal=processed_signal, length=processed_signal_len)
logits, _ = decoder(encoder_output=encoded)
loss = xent_loss(logits=logits, labels=label)
```
We would like to be able to evaluate our model on the dev set, as well, so let's set up the evaluation DAG.
Our evaluation DAG will reuse most of the parts of the training DAG with the exception of the data layer, since we are loading the evaluation data from a different file but evaluating on the same model. Note that if we were using data augmentation in training, we would also leave that out in the evaluation DAG.
```
audio_signal_test, audio_len_test, label_test, _ = data_layer_eval()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test
)
encoded_test, encoded_len_test = encoder(audio_signal=processed_signal_test, length=processed_len_test)
logits_test, _ = decoder(encoder_output=encoded_test)
loss_test = xent_loss(logits=logits_test, labels=label_test)
```
# Creating CallBacks
We would like to be able to monitor our model while it's training, so we use callbacks. In general, callbacks are functions that are called at specific intervals over the course of training or inference, such as at the start or end of every n iterations, epochs, etc. The callbacks we'll be using for this are the SimpleLossLoggerCallback, which reports the training loss (or another metric of your choosing, such as % accuracy for speaker recognition tasks), and the EvaluatorCallback, which regularly evaluates the model on the dev set. Both of these callbacks require you to pass in the tensors to be evaluated; these would be the final outputs of the training and eval DAGs above.
Another useful callback is the CheckpointCallback, for saving checkpoints at set intervals. We create one here just to demonstrate how it works.
```
from nemo.collections.asr.helpers import (
monitor_classification_training_progress,
process_classification_evaluation_batch,
process_classification_evaluation_epoch,
)
from nemo.utils.lr_policies import CosineAnnealing
train_callback = nemo.core.SimpleLossLoggerCallback(
tensors=[loss, logits, label],
print_func=partial(monitor_classification_training_progress, eval_metric=[1]),
step_freq=1000,
get_tb_values=lambda x: [("train_loss", x[0])],
tb_writer=neural_factory.tb_writer,
)
callbacks = [train_callback]
chpt_callback = nemo.core.CheckpointCallback(
folder="./myExps/checkpoints/" + exp_name,
load_from_folder="./myExps/checkpoints/" + exp_name,
step_freq=1000,
)
callbacks.append(chpt_callback)
tagname = "hi-mia_dev"
eval_callback = nemo.core.EvaluatorCallback(
eval_tensors=[loss_test, logits_test, label_test],
user_iter_callback=partial(process_classification_evaluation_batch, top_k=1),
user_epochs_done_callback=partial(process_classification_evaluation_epoch, tag=tagname),
eval_step=1000, # How often we evaluate the model on the test set
tb_writer=neural_factory.tb_writer,
)
callbacks.append(eval_callback)
```
Now that we have our model and callbacks set up, how do we run it?
Once we create our neural factory and the callbacks for the information that we want to see, we can start training by simply calling the train function on the tensors we want to optimize, along with our callbacks! Since this notebook is just for getting started and the dataset is small, it will quickly reach high accuracies; for better models, use bigger datasets.
```
# train model
num_epochs=25
N = len(data_layer_train)
steps_per_epoch = N // batch_size
logging.info("Number of steps per epoch {}".format(steps_per_epoch))
neural_factory.train(
tensors_to_optimize=[loss],
callbacks=callbacks,
lr_policy=CosineAnnealing(
num_epochs * steps_per_epoch, warmup_steps=0.1 * num_epochs * steps_per_epoch,
),
optimizer="novograd",
optimization_params={
"num_epochs": num_epochs,
"lr": 0.02,
"betas": (0.95, 0.5),
"weight_decay": 0.001,
"grad_norm_clip": None,
}
)
```
Now that we trained our embeddings, we shall extract these embeddings using our pretrained checkpoint present at `checkpoint_dir`. As we can see from the neural architecture, we extract the embeddings after the `emb1` layer.

Now use the test manifest to get the embeddings. As we saw before, let's create a new `data_layer` for test. Use the previously instantiated modules and attach the DAGs.
```
eval_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
eval_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["eval"])
del eval_dl_params["train"]
del eval_dl_params["eval"]
eval_dl_params['shuffle'] = False # To grab the file names without changing data_layer
test_dataset = data_dir+'/test/test_all.json'
data_layer_test = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=test_dataset,
labels=None,
batch_size=batch_size,
**eval_dl_params,
)
audio_signal_test, audio_len_test, label_test, _ = data_layer_test()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test)
encoded_test, _ = encoder(audio_signal=processed_signal_test, length=processed_len_test)
_, embeddings = decoder(encoder_output=encoded_test)
```
Now get the embeddings using the neural_factory infer command, which just does a forward pass of all our modules, and save the embeddings in `<work_dir>/embeddings`.
```
import numpy as np
import json
eval_tensors = neural_factory.infer(tensors=[embeddings, label_test], checkpoint_dir="./myExps/checkpoints/" + exp_name)
inf_emb, inf_label = eval_tensors
whole_embs = []
whole_labels = []
manifest = open(test_dataset, 'r').readlines()
for line in manifest:
    line = line.strip()
    dic = json.loads(line)
    filename = dic['audio_filepath'].split('/')[-1]
    whole_labels.append(filename)
for idx in range(len(inf_label)):
    whole_embs.extend(inf_emb[idx].numpy())
embedding_dir = './myExps/embeddings/'
if not os.path.exists(embedding_dir):
    os.mkdir(embedding_dir)
filename = os.path.basename(test_dataset).split('.')[0]
name = embedding_dir + filename
np.save(name + '.npy', np.asarray(whole_embs))
np.save(name + '_labels.npy', np.asarray(whole_labels))
logging.info("Saved embedding files to {}".format(embedding_dir))
!ls $embedding_dir
```
# Cosine Similarity Scoring
Here we provide a script for scoring on hi-mia, whose trial file has the structure `<speaker_name1> <speaker_name2> <target/nontarget>`. First copy the `trails_1m` file present in the test folder to our embeddings directory:
```
!cp $data_dir/test/trails_1m $embedding_dir/
```
The command below outputs the EER (%) based on the cosine similarity score:
```
!python examples/speaker_recognition/hi-mia_eval.py --data_root $embedding_dir --emb $embedding_dir/test_all.npy --emb_labels $embedding_dir/test_all_labels.npy --emb_size 1024
```
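The scoring script ships with NeMo; the core idea it implements is cosine similarity between two embedding vectors, which can be sketched with NumPy (an illustration, not the actual `hi-mia_eval.py` implementation):

```python
import numpy as np

def cosine_score(emb1, emb2):
    # Cosine similarity: dot product of the L2-normalized embeddings.
    emb1 = emb1 / np.linalg.norm(emb1)
    emb2 = emb2 / np.linalg.norm(emb2)
    return float(np.dot(emb1, emb2))

# A trial is accepted as "target" when the score clears a threshold;
# sweeping the threshold over all trials yields the EER.
same = cosine_score(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
diff = cosine_score(np.array([1.0, 0.0]), np.array([0.0, 3.0]))
```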
# PLDA Backend
To fine-tune our speaker embeddings further, we use Kaldi PLDA scripts to train and evaluate a PLDA model. From this point forward, please make sure Kaldi is installed and added to your path as KALDI_ROOT.
To train PLDA, we can use either the dev set or the training set. Let's use the training-set embeddings to train PLDA, then use this trained PLDA model to score the test embeddings. To do that, we need embeddings for our training data as well; similar to the steps above, generate the train embeddings.
```
test_dataset = data_dir+'/train/train.json'
data_layer_test = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=test_dataset,
labels=None,
batch_size=batch_size,
**eval_dl_params,
)
audio_signal_test, audio_len_test, label_test, _ = data_layer_test()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test)
encoded_test, _ = encoder(audio_signal=processed_signal_test, length=processed_len_test)
_, embeddings = decoder(encoder_output=encoded_test)
eval_tensors = neural_factory.infer(tensors=[embeddings, label_test], checkpoint_dir="./myExps/checkpoints/" + exp_name)
inf_emb, inf_label = eval_tensors
whole_embs = []
whole_labels = []
manifest = open(test_dataset, 'r').readlines()
for line in manifest:
    line = line.strip()
    dic = json.loads(line)
    filename = dic['audio_filepath'].split('/')[-1]
    whole_labels.append(filename)
for idx in range(len(inf_label)):
    whole_embs.extend(inf_emb[idx].numpy())
if not os.path.exists(embedding_dir):
    os.mkdir(embedding_dir)
filename = os.path.basename(test_dataset).split('.')[0]
name = embedding_dir + filename
np.save(name + '.npy', np.asarray(whole_embs))
np.save(name + '_labels.npy', np.asarray(whole_labels))
logging.info("Saved embedding files to {}".format(embedding_dir))
```
Among the files Kaldi needs are `utt2spk` & `spk2utt`, used to build the ark file for PLDA training. To create them, copy the generated utt2spk file from the `data_dir` train folder and create the spk2utt file using
`utt2spk_to_spk2utt.pl $data_dir/train/utt2spk > $embedding_dir/spk2utt`
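`utt2spk_to_spk2utt.pl` is a standard Kaldi utility; its effect can be sketched in plain Python (an illustration of the two file formats, not a replacement for the Kaldi script):

```python
from collections import OrderedDict

def utt2spk_to_spk2utt(lines):
    # utt2spk: "<utt-id> <spk-id>" per line  ->  spk2utt: "<spk-id> <utt-id> ..." per line
    spk2utt = OrderedDict()
    for line in lines:
        utt, spk = line.split()
        spk2utt.setdefault(spk, []).append(utt)
    return ["{} {}".format(spk, " ".join(utts)) for spk, utts in spk2utt.items()]

result = utt2spk_to_spk2utt(["SV0184_6 SV0184", "SV0184_5 SV0184", "SV0300_1 SV0300"])
```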
Then run the Python script below to get the EER score using PLDA backend scoring. This script does both the data preparation for Kaldi and the PLDA scoring.
```
!python examples/speaker_recognition/kaldi_plda.py --root $embedding_dir --train_embs $embedding_dir/train.npy --train_labels $embedding_dir/train_labels.npy \
--eval_embs $embedding_dir/all_embs_himia.npy --eval_labels $embedding_dir/all_ids_himia.npy --stage=1
```
Here `--stage=1` trains the PLDA model; if you already have a trained PLDA model, you can evaluate with it directly via the `--stage=2` option.
This should output an EER of 6.32% with minDCF: 0.455
# Performance Improvement
To improve your embeddings performance:
* Add more data and train longer (100 epochs)
* Try adding augmentation (see config file)
* Use a larger model
* Train on several GPUs and use mixed precision (on NVIDIA Volta and Turing GPUs)
* Start with pre-trained checkpoints
# Jupyter UX Survey 2015 - Initial Sandbox
* Goal: Start looking at how we can surface insights from the data.
* Description: https://github.com/jupyter/surveys/tree/master/surveys/2015-12-notebook-ux
* Data: https://raw.githubusercontent.com/jupyter/surveys/master/surveys/2015-12-notebook-ux/20160115235816-SurveyExport.csv
## Initial Questions
### To what audiences is the Jupyter Community trying to cater?
* New to the practice of "data science"
* Experienced audience not using jupyter
* Existing audience
### How can we boil down the free text to "themes"?
* Remove stop words and find key terms to do some frequency counts
* Read and tag everything manually, then analyze the tags
* Overlap between responses to the various questions
* Apply co-occurrence grouping to the text
* Throw text at the alchemy Keyword Extraction API and see what it pulls out
* Bin short vs long and analyze separately
### What roles do the survey respondents fill? And in what fields / industries do they fill those roles?
See the [Roles]() section down below.
### Generally, what themes do we see across the free-text responses?
See the [Themes]() in hindrances section for an initial approach to finding and expanding sets of themes for one particular question. We think we can apply this to the other questions as well.
### What themes do we see across the free-text responses but within the role/industry categories?
e.g., Is it always software developers that are asking for IDE features vs hard scientists asking for collaboration features?
We took an initial approach to rolling up the roles into a smaller set of categories. We then looked at mapping the requests for vim/emacs and IDE features to the software-engineering-related roles. It turns out that these requests seem to cross roles, and are not specific to software engineers. More of the responses for emacs/vim, in fact, came from respondents in the hard sciences (e.g. physicist, computational biologist, etc.).
This led us to believe that we should not assume certain roles map to certain hindrances, but rather try to visualize whether there are any hot-spots between roles and hindrance themes. It may turn out, we hypothesize, that the roles have little to do with the hindrances and that the themes are cross-cutting. Or not.
We plan to create heatmap-like plots, one per question. On one axis we will have the role categories and on the other we will have the themes we identify within the responses for that question. After creating these plots for all questions, we'll also create similar plots where we substitute industry, years in role, # of notebook consumers, frequency of use, etc. on one of the axes and keep the themes on the other.
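As a concrete starting point for those plots, a role-by-theme count matrix can be built with `pd.crosstab` on toy data (the rows and column names here are made up; `sns.heatmap` could then render the result):

```python
import pandas as pd

# Toy stand-in for the tagged survey data: one row per (respondent, theme) pair.
tagged = pd.DataFrame({
    "role": ["data analyst", "physicist", "physicist", "software engineer"],
    "theme": ["version control", "ide", "version control", "ide"],
})

heat = pd.crosstab(tagged.role, tagged.theme)
# heat is a role x theme count matrix, ready for e.g. sns.heatmap(heat)
```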
### What shortcodes can we use to refer to the questions?
Assume we roll up the answers into single columns:
* how_often
* how_long
* hinderance
* integrated
* how_run
* workflow_needs_addressed
* workflow_needs_not_addressed
* pleasant_aspects
* difficult_aspects
* features_changes
* first_experience_enhancements
* keywords
* role
* years_in_role
* industry
* notebook_consumers
```
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('../20160115235816-SurveyExport.csv')
df.columns
```
## Themes in the hindrances
Let's start with the hindrances question and figure out the process first. Then we can apply it to the other free-form text responses (we think).
```
hinder = df['What, if anything, hinders you from making Jupyter Notebook an even more regular part of your workflow?']
```
How many non-null responses are there?
```
hinder.isnull().value_counts()
```
Clear out the nulls.
```
hinder = hinder.dropna()
```
How much did people write?
```
char_lengths = hinder.apply(lambda response: len(response))
fig, ax = plt.subplots(figsize=(11, 7))
char_lengths.hist(bins=100, ax=ax)
ax.set_title('Character count histogram')
```
We should definitely look at the longest responses. These are people who might have felt very strongly about what they were writing.
```
for row in hinder[char_lengths > 1400]:
    print(row)
    print()
```
Now just to get the constrast, let's look at some of the shortest responses.
```
hinder[char_lengths < 100].sample(20)
```
From reading a bunch of random samples of the shortest responses, we've got a list of ideas that we think we can search for across all of the responses in order to judge how common the themes are.
* Nothing
* UX / navigation / mobile / paradigm
* IDE / debug / editor familiarity / comfort zone / keys
* Setup / learning / getting started / perceived lack of skills
* Inertia
* Colleagues / peer pressure
* Version control / git / history / tracking / provenance
* Collaboration / export / sharing
* Integration / missing languages / extensibility
Before we do, let's look at a few "medium-length" responses too for good measure.
```
for x in list(hinder[(char_lengths < 300) & (char_lengths > 100)].sample(20)):
    print(x)
    print()
```
We can add a few themes to the list we created above (which we'll replicate here to keep growing it as we go, because, history):
* Nothing
* UX / navigation / mobile / paradigm
* IDE / debug / editor familiarity / comfort zone / keys
* Setup / learning / getting started / perceived lack of skills / community / documentation
* Inertia
* Colleagues / peer pressure
* Version control / git / history / tracking / provenance
* Collaboration / export / sharing / dirt simple deploy
* Customization / personalization
* Reuse / modularization
* Integration / missing languages / extensibility
```
keywords = ['git', 'version control', 'history', 'track', 'checkpoint', 'save']
def keywords_or(text):
    for keyword in keywords:
        if keyword in text:
            return text
    return None
results = hinder.map(keywords_or)
len(results.dropna())
results.dropna()
```
Moving forward, here's a semi-automatic procedure we can follow for identifying themes across questions:
1. Take a random sample of question responses
2. Write down common theme keywords
3. Search back through the responses using the theme keywords
4. Expand the set of keywords with other words seen in the search results
5. Repeat for all themes and questions
Later, we can use a fully automated topic modeling approach to validate our manually generated themes.
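The procedure can be sketched as a small loop: search with the current keyword set, read the hits, and grow the set (the keyword lists below are illustrative):

```python
def find_theme_hits(responses, keywords):
    # Return the responses that mention any of the theme keywords.
    return [r for r in responses if any(k in r.lower() for k in keywords)]

responses = [
    "No git integration for notebooks",
    "Hard to track changes over time",
    "Nothing, it works fine",
]
keywords = {"git", "version control"}
hits = find_theme_hits(responses, keywords)   # 1 hit
keywords |= {"track", "history"}              # expanded after reading the hits
hits = find_theme_hits(responses, keywords)   # now 2 hits
```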
## Roles
We want to pull out the major roles that people self-identified as filling when they use Jupyter Notebook.
```
roles_df = df[['What is your primary role when using Jupyter Notebook (e.g., student,\xa0astrophysicist, financial modeler, business manager, etc.)?']]
roles_df = roles_df.dropna()
```
We're renaming the column for brevity only.
```
roles_df.columns = ['role']
```
Some basic normalization. TODO: do more later.
```
roles_df['role_norm'] = roles_df.role.str.lower()
```
For now, we're going to look at the top 20 and see what industries they support from the other columns
```
roles_df.role_norm.value_counts()
```
## Industry vs Role
```
len(df['Industry #1:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?'].dropna())
len(df['Industry #2:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?'].dropna())
len(df['Industry #3:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?'].dropna())
industry_df = df[
['Industry #1:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?',
'Industry #2:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?',
'Industry #3:What industries does your role and analytical work support (e.g., Journalism, IT, etc.)?',
'What is your primary role when using Jupyter Notebook (e.g., student,\xa0astrophysicist, financial modeler, business manager, etc.)?'
]
]
industry_df.columns = ['industry1', 'industry2', 'industry3', 'role']
industry_df = industry_df.dropna(how='all')
top_roles = roles_df.role_norm.value_counts()[:20]
top_industry_df = industry_df[industry_df.role.isin(top_roles.index)]
top_industry_df[top_industry_df.role == 'data analyst']
```
## Example: Software Engineering Role
We want to see if software engineers (or related roles) are the ones asking about IDE-like features.
```
software_roles = ['engineer', 'software engineer', 'developer', 'software developer', 'programmer']
role_hinder_df = pd.merge(roles_df, hinder.to_frame(), left_index=True, right_index=True)
role_hinder_df = role_hinder_df.ix[:, 1:]
role_hinder_df[role_hinder_df.role_norm.isin(software_roles)]
tmp_df = role_hinder_df.dropna()
tmp_df[tmp_df.ix[:, 1].str.contains('emacs|vim', case=False)]
tmp_df[tmp_df.ix[:, 1].str.contains('\W+ide\W+', case=False)]
```
## Years in Role vs Role Name
```
years_in_role = df.ix[:, 32]
years_in_role.value_counts()
how_long = df.ix[:, 5]
how_long.value_counts()
using_vs_role = df[[5, 32]]
using_vs_role.columns = ['how_long_using', 'how_long_role']
pd.crosstab(using_vs_role.how_long_role, using_vs_role.how_long_using)
```
# Objective
Build a binary classifier that, given a sequence of lap times, predicts whether a pit stop will happen on the next lap. In other words, I call this project End-of-Stint-or-NOT.
Data Source:
- Ergast Developer API: https://ergast.com/mrd/
## Table of Content:
* [Data Preparation](#Section1)
* [Import data](#section_1_1)
* [Pit Stop Table Transformation](#section_1_2)
* [Lap Times Table Transformation](#section_1_3)
* [Left Join New Pit-Stop with New Lap-Times](#section_1_4)
* [TBC](#section_1_5)
* [TBC](#section_1_6)
## Data Preparation <a class="anchor" id="Section1"></a>
```
import pandas as pd
import numpy as np
```
### Import Data <a class="anchor" id="section_1_1"></a>
```
laps_master = pd.read_csv('data/lap_times.csv')
races_master = pd.read_csv('data/races.csv')
quali_master = pd.read_csv('data/qualifying.csv')
drivers_master = pd.read_csv('data/drivers.csv')
constructors_master = pd.read_csv('data/constructors.csv')
results_master = pd.read_csv('data/results.csv')
circuits_master = pd.read_csv('data/circuits.csv')
pits_master = pd.read_csv('data/pit_stops.csv')
pits_master
```
### Pit Stop Table Transformation <a class="anchor" id="section_1_2"></a>
Create a new data frame with, for each driver and each race, the list of laps on which a pit stop occurred
```
pits_df_new = pits_master.groupby(['raceId', 'driverId'])['lap'].apply(list).reset_index(name='laps_when_pitstop')
pits_df_new
```
#### Preview the lap times table
Let's take a look at a random race and a random driver to see how the lap times look, just to better understand what transformation needs to be done on the data
```
laps_master[(laps_master.raceId == 841) & (laps_master.driverId == 17)]
```
### Lap Times Table Transformation <a class="anchor" id="section_1_3"></a>
Create a new data frame containing a list of all the lap times in one row, for an entire race, for each driver
```
laps_df_new = laps_master.groupby(['raceId', 'driverId'])['milliseconds'].apply(list).reset_index(name='race_lap_times')
laps_df_new
```
### Left Join New Pit-Stop Table with New Lap-Times Table <a class="anchor" id="section_1_4"></a>
```
merged = pd.merge(pits_df_new, laps_df_new, on=['raceId', 'driverId'], how='left')
merged
```
### Lap Times Before Pit-Stop Sequence Partitioning <a class="anchor" id="section_1_5"></a>
```
def partition_lapTime_into_sequences(pitStop_laps, race_lapTimes):
    # NOTE: no need to return the last stint, since it is not followed by a pit stop...
    # only return sequences of lap times that are followed by a pit stop
    # returns: list of lap time sequences (which are lists) ... so a list of lists
    # remove pit stops on the first lap... those occur because of a collision, so they should not be looked at when predicting the end of the stint
    if 1 in pitStop_laps:
        pitStop_laps = pitStop_laps[1:] # remove first lap pit stop, as it was not a regular, planned one
        race_lapTimes = race_lapTimes[1:] # remove the first lap time, since the stint was "corrupted" by the emergency pitstop
        pitStop_laps[:] = [x - 1 for x in pitStop_laps] # subtract one lap from the pit-stop lap count, to account for the first lap being removed
    if len(pitStop_laps) < 1:
        return np.nan # no real stints have occurred: pitted on lap 1, then never pitted again during the race
    sequences = []
    prev_pit = pitStop_laps[0]
    if len(pitStop_laps) == 1: # if the race is a one-stop race
        sequences.append(race_lapTimes[:prev_pit-1]) # the off-by-one excludes the lap with the pit stop from the sequence
    else: # multi-stop race
        for current_pit in pitStop_laps:
            if current_pit == prev_pit: # this is only true when prev_pit = pitStop_laps[0]
                sequences.append(race_lapTimes[:current_pit-1]) # create first stint
                # the off-by-one excludes the lap with the pit stop from the sequence
            else:
                sequences.append(race_lapTimes[prev_pit:current_pit-1]) # create next sequence from (prev_pit lap, current_pit lap)
            prev_pit = current_pit # update pointer to previous pit ... this will be needed for the next pit
    return sequences
```
### Sequencing Function Test cases
```
sample_input_pits = merged.iloc[13, :].laps_when_pitstop
sample_input_lapTimes = merged.iloc[13, :].race_lap_times
print("input pits: ", sample_input_pits)
print("input laps: ", sample_input_lapTimes)
print("output: ", partition_lapTime_into_sequences(sample_input_pits, sample_input_lapTimes))
```
TODO: write test cases
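A start on the TODO above: a few assert-based checks with hand-picked inputs (the function is repeated verbatim so this cell runs standalone; in the notebook the definition above already covers it):

```python
import numpy as np

# mirror of partition_lapTime_into_sequences defined above, repeated so this cell is self-contained
def partition_lapTime_into_sequences(pitStop_laps, race_lapTimes):
    if 1 in pitStop_laps:
        pitStop_laps = pitStop_laps[1:]
        race_lapTimes = race_lapTimes[1:]
        pitStop_laps = [x - 1 for x in pitStop_laps]
    if len(pitStop_laps) < 1:
        return np.nan
    sequences = []
    prev_pit = pitStop_laps[0]
    if len(pitStop_laps) == 1:
        sequences.append(race_lapTimes[:prev_pit - 1])
    else:
        for current_pit in pitStop_laps:
            if current_pit == prev_pit:
                sequences.append(race_lapTimes[:current_pit - 1])
            else:
                sequences.append(race_lapTimes[prev_pit:current_pit - 1])
            prev_pit = current_pit
    return sequences

# two-stop race: pits on laps 3 and 6, ten laps in total -> stints are laps 1-2 and 4-5
assert partition_lapTime_into_sequences([3, 6], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) == [[10, 11], [13, 14]]
# one-stop race: pit on lap 4 -> single stint of laps 1-3
assert partition_lapTime_into_sequences([4], [1, 2, 3, 4, 5, 6, 7, 8]) == [[1, 2, 3]]
# lap-1 emergency stop only -> no usable stints
assert partition_lapTime_into_sequences([1], [90, 91, 92]) is np.nan
# lap-1 stop followed by a pit on lap 5 -> one stint of laps 2-4
assert partition_lapTime_into_sequences([1, 5], [100, 101, 102, 103, 104, 105]) == [[101, 102, 103]]
print("all partition tests passed")
```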
### Get Lap Times of Final Stint (as a non-pit-stint)
```
def get_last_stint_lap_times(pitStop_laps, race_lapTimes):
    # returns the last stint's lap times; it is not followed by a pit stop, so it is a non-pit-stop stint
    last_pit = pitStop_laps[-1]
    return race_lapTimes[last_pit:]
```
### Test get_last_stint_lap_times function
```
sample_input_pits = merged.iloc[13, :].laps_when_pitstop
sample_input_lapTimes = merged.iloc[13, :].race_lap_times
print("input pits: ", sample_input_pits)
print("input laps: ", sample_input_lapTimes)
print("output: ", get_last_stint_lap_times(sample_input_pits, sample_input_lapTimes))
sample_input_pits = merged.iloc[1, :].laps_when_pitstop
sample_input_lapTimes = merged.iloc[1, :].race_lap_times
print("input pits: ", sample_input_pits)
print("input laps: ", sample_input_lapTimes)
print("output: ", get_last_stint_lap_times(sample_input_pits, sample_input_lapTimes))
```
### Apply the sequence partitioning functions to the merged data set
```
merged['stints'] = merged.apply(lambda x: partition_lapTime_into_sequences(x.laps_when_pitstop, x.race_lap_times), axis=1)
merged['last_stint'] = merged.apply(lambda x: get_last_stint_lap_times(x.laps_when_pitstop, x.race_lap_times), axis=1)
merged
```
Check if there are any missing stints
```
merged.isnull().sum()
```
There are some missing values based on the sequence partitioning transformation that we have just applied. Let's see where they are.
```
merged[merged.isnull().any(axis=1)]
```
As I suspected, all cases are races where the only pit stop happened on lap 1, so for the scope of this end-of-stint classifier we can safely remove these cases, as they do not affect the task at hand
```
merged = merged.dropna()
merged
end_of_stint_sequences = merged['stints']
end_of_stint_sequences[0]
last_stint_sequences = merged['last_stint']
last_stint_sequences[0]
```
We need to flatten the structure of the data. We need a list of lists, not a Pandas Series of lists of lists
```
temp = end_of_stint_sequences.tolist() # lists of lists of lists
print("Before:", temp[0:3])
print()
# Use list.extend() to flatten a 3D list into a 2D list
end_of_stint_sequences = []
for elem in temp:
    end_of_stint_sequences.extend(elem) # this will make it a list of lists
print("After:", end_of_stint_sequences[0:3])
print("Sample Size = ", len(end_of_stint_sequences))
```
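As a stylistic alternative, the same flattening can be written with `itertools.chain.from_iterable`, which avoids the explicit loop; the behavior is identical:

```python
from itertools import chain

nested = [[[1, 2], [3]], [[4, 5, 6]]]  # toy stand-in for the Series of lists of lists
flat = list(chain.from_iterable(nested))
print(flat)  # [[1, 2], [3], [4, 5, 6]]
```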
### Generate not-end-of-stint sequences --- this method did not work
My logic here is the following: don't generate random lap times, nor stints with random lengths.
What I propose instead: remove the last n laps from a real stint and label the result as a 'not-end-of-stint' sequence.
The parameter n needs to be experimented with: we need to figure out what kind of experiment setup works best for our binary classifier.
- Initially what I am thinking is that I will remove the last lap, and create some fake samples... then remove the last 2 laps, and the last 4 laps, and create samples out of those too.
- What I want to make sure is to not create a very unbalanced data set. What I am aiming for is 20-25% end-of-stint data, with 75-80% not-end-of stint data to comprise my data set which I will use to train my binary classifier.
```
def remove_lastN_elements(arr, N):
    return arr[:-N]
print(end_of_stint_sequences[0])
print()
print(remove_lastN_elements(end_of_stint_sequences[0], 2))
NOT_end_of_stint_sequences = []
# N needs to be experimented with. Initially I chose N=1, N=2, and N=4
for lst in end_of_stint_sequences:
    temp_list = remove_lastN_elements(lst, N = 1) # remove last lap from each stint
    NOT_end_of_stint_sequences.append(temp_list)
    temp_list = remove_lastN_elements(lst, N = 2) # remove last 2 laps from each stint
    NOT_end_of_stint_sequences.append(temp_list)
    temp_list = remove_lastN_elements(lst, N = 4) # remove last 4 laps from each stint
    NOT_end_of_stint_sequences.append(temp_list)
#print(len(NOT_end_of_stint_sequences))
print(len(end_of_stint_sequences))
```
RESULT = 3:1 ratio between not-end-of-stint and end-of-stint data
Let's create the labels:
```
end_of_stint_labels = [1] * len(end_of_stint_sequences)
NOT_end_of_stint_labels = [0] * len(NOT_end_of_stint_sequences)
```
### Get NOT-end-of-stint sequences & Create final data set
```
NOT_end_of_stint_sequences = last_stint_sequences.tolist()
NOT_end_of_stint_labels = [0] * len(NOT_end_of_stint_sequences)
end_of_stint_labels = [1] * len(end_of_stint_sequences)
print("Labels:")
print(len(NOT_end_of_stint_labels))
print(len(end_of_stint_labels))
print("\nSequences:")
print(len(NOT_end_of_stint_sequences))
print(len(end_of_stint_sequences))
stint_sequences = end_of_stint_sequences + NOT_end_of_stint_sequences
stint_labels = end_of_stint_labels + NOT_end_of_stint_labels
print(len(stint_sequences))
print(len(stint_labels))
```
## Binary Classifier
I view this task as a Sequence Classification task, where deep learning approaches have been widely used in practice for similar tasks such as:
- DNA Sequence Classification: Given a DNA sequence of ACGT values, predict whether the sequence codes for a coding or non-coding region.
- Anomaly Detection: Given a sequence of observations, predict whether the sequence is anomalous or not.
- Sentiment Analysis: Given a sequence of text such as a review or a tweet, predict whether sentiment of the text is positive or negative.
Reference: https://machinelearningmastery.com/sequence-prediction/
I have done some research on the problem and the most common approach seems to be LSTM (Long Short-Term Memory) recurrent neural networks. In the upcoming subsections I will test various LSTM, and maybe even non-LSTM, recurrent architectures to see some results, then evaluate whether we need some other binary classifier, more or better data, or techniques used when working with imbalanced data (undersampling, oversampling).
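One lightweight way to handle the class imbalance just mentioned, short of resampling, is to weight the loss per class; scikit-learn can compute balanced weights, and the resulting dict is in the format Keras `model.fit(..., class_weight=...)` accepts. A sketch (the 75/25 labels below are a made-up stand-in for our roughly 3:1 ratio):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([0] * 75 + [1] * 25)  # hypothetical 3:1 imbalance like ours
weights = compute_class_weight(class_weight='balanced', classes=np.unique(labels), y=labels)
class_weight = dict(zip(np.unique(labels), weights))
print(class_weight)  # the minority class gets the larger weight
```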
### Split data into train and test sets
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(stint_sequences, stint_labels, test_size=0.20, random_state=7)
print("Train set:", len(X_train))
print("Test set:", len(X_test))
print("Train labels:", len(y_train))
print("Test labels:", len(y_test))
```
### Pad Input Sequences
```
# find out what's the longest stint in our data set
#max_stint_length = max(map(len, end_of_stint_sequences))
#print("Max stint-length =", max_stint_length)
max_stint_length = 30
from keras.preprocessing import sequence
X_train = sequence.pad_sequences(X_train, maxlen=max_stint_length, padding="pre", truncating='pre')
X_test = sequence.pad_sequences(X_test, maxlen=max_stint_length, padding="pre", truncating='pre')
```
Wrap every list into numpy arrays, so Keras can process the input
```
X_train = np.array(X_train)
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = np.array(X_test)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
y_train = np.array(y_train)
y_train = y_train.reshape(y_train.shape[0], 1)
y_test = np.array(y_test)
y_test = y_test.reshape(y_test.shape[0], 1)
```
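Raw lap times are on the order of 10^4 to 10^5 milliseconds, and inputs of that magnitude tend to slow or destabilize LSTM training; standardizing with statistics from the training split only is a common fix (note the padded zeros would be shifted too, so scaling before padding, or masking, is cleaner). A sketch on stand-in arrays; in this notebook it would be applied to `X_train`/`X_test` before fitting:

```python
import numpy as np

# stand-in arrays shaped like our padded input: (samples, timesteps, features)
laps_train = np.random.randint(80_000, 100_000, size=(8, 30, 1)).astype('float64')
laps_test = np.random.randint(80_000, 100_000, size=(2, 30, 1)).astype('float64')

mean, std = laps_train.mean(), laps_train.std()   # train statistics only, no test leakage
laps_train_scaled = (laps_train - mean) / std
laps_test_scaled = (laps_test - mean) / std
print(laps_train_scaled.mean(), laps_train_scaled.std())  # close to 0 and 1
```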
Check the ratio of 0s to 1s in the train and test sets
```
unique, frequency = np.unique(y_test, return_counts=True)
print("Test unique values:", unique)
print("Test frequencies:", frequency)
unique, frequency = np.unique(y_train, return_counts=True)
print("Train unique values:", unique)
print("Train frequencies:", frequency)
```
### Approach 1: Simple LSTM for Sequence Classification
```
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# fix random seed for reproducibility
np.random.seed(7)
model = Sequential()
model.add(LSTM(100, input_shape=(max_stint_length, 1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128)
model = Sequential()
model.add(LSTM(64, input_shape=(max_stint_length, 1), return_sequences=True))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128)
```
### Approach 2: Time Distributed LSTM
```
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, LSTM, Dropout
# fix random seed for reproducibility
np.random.seed(7)
model = Sequential()
model.add(LSTM(512, input_shape=(max_stint_length, 1), return_sequences=True))
#model.add(Dropout(0.3))
model.add(LSTM(256, return_sequences=True))
model.add(LSTM(128, return_sequences=True))
model.add(LSTM(64, return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=64)
```
### Approach 3: Bidirectional LSTMs
```
from keras.layers import Bidirectional
# define LSTM model
model = Sequential()
model.add(Bidirectional(LSTM(128, return_sequences=True), input_shape=(max_stint_length, 1)))
#model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(64))) # final LSTM returns only the last output, matching the (n, 1) labels
#model.add(Dropout(0.2))
#model.add(Bidirectional(LSTM(64, return_sequences=True)))
#model.add(Bidirectional(LSTM(64)))
#model.add(Dropout(0.2))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=64)
```
### Approach 4: LSTM and CNNs combined
```
from keras.layers import Conv1D, MaxPooling1D
model = Sequential()
model.add(Conv1D(filters=256, kernel_size=3, padding='same', activation='relu', input_shape=(max_stint_length, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=128, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Bidirectional(LSTM(64))) # final LSTM returns only the last output, matching the (n, 1) labels
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128)
```
<a href="https://colab.research.google.com/github/Nadda1004/Intro_Machine_learning/blob/main/W1_D1_ML_HeuristicModel.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Predicting Rain in Seattle
Seattle is one of the rainiest places in the world. Even so, it is worth asking the question 'will it rain tomorrow?' Imagine you are headed to sleep at a hotel in downtown Seattle.
The next day's activities are supposed to include walking around outside most of the day. You want to know if it will rain or not (you don't really care how much rain; a simple yes or no will do), which will greatly impact what you choose to wear and carry around (like an umbrella).
Build a heuristic model to predict if it will rain tomorrow.
## Our Data
```
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/gumdropsteve/datasets/master/seattle_weather_1948-2017.csv')
df
df.info()
# the ds column represents the date but is not in datetime format, so convert it to datetime
df.ds = pd.to_datetime(df['ds'])
df.info()
df.head()
import numpy as np
# what % of days did it rain?
rainy = (df.rain.value_counts()[1] / df.shape[0]) * 100
print('The percentage of rainy days: {:.3f}%'.format(rainy))
# what values are seen in the prcp column
df.prcp.value_counts()
import matplotlib.pyplot as plt
plt.figure(figsize=(15,7))
df.prcp.plot.hist(bins = 20).set(title = 'Values Range in Prcp');
# show me a histogram of prcp < 2
plt.figure(figsize=(15,7))
df.loc[df.prcp < 2].prcp.plot.hist(bins = 20).set(title = 'Values < 2 in Prcp');
```
#### Check for Missing Values and Outliers
```
# how many null values does each column have?
df.isnull().sum()
# show me the null rows
df.loc[df.isnull().any(axis=1)]
# drop the null rows and update the dataframe
df1 = df.dropna()
df1
import seaborn as sns
# make a box plot
plt.figure(figsize=(15,7))
sns.boxplot(data=df1).set(title = 'Boxplot for all columns');
# show some outlier values from tmax and tmin
plt.figure(figsize=(15,7))
sns.boxplot(data=[df1.tmin , df1.tmax]).set(title = 'Boxplot for minimum temperature and maximum temperature' );
plt.xlabel('Min and Max Temp');
# make an sns pairplot with hue='rain'
sns.pairplot(data = df1 , hue = 'rain');
# bonus challenge
# plot prcp by day (ds on x axis)
plt.figure(figsize=(40,10))
sns.lineplot(x = df1.ds , y = df1.prcp).set(title = 'Prcp Values Over The Years');
```
## Set up a basic model to make predictions
First, split the data...
```
from sklearn.model_selection import train_test_split
X = df1[['prcp', 'tmax', 'tmin']] # all the values you want to help predict the target value
y = df1.rain.astype(np.int32) # the target value
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7)
```
Bring in a model now...
```
from sklearn.linear_model import LogisticRegression
# logistic regression is a classifier, for our case, True (1) or False (0)
lr = LogisticRegression()
lr
lr.fit(X=X_train, y=y_train)
# predict the y values from X test data
lr.predict(X_test)
preds = lr.predict(X_test)
# how'd your model score?
from sklearn.metrics import accuracy_score
accuracy_score(y_test, preds) * 100
```
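Since the prompt asks for a heuristic model, a persistence rule (predict for tomorrow whatever happened today) makes a useful baseline to compare the logistic regression score against. A sketch on a toy series; in this notebook you would shift `df1.rain`:

```python
import pandas as pd

# toy stand-in for df1.rain: one rain flag per consecutive day
rain = pd.Series([True, True, False, False, True, False, True, True])
predicted_tomorrow = rain.shift(1)   # yesterday's value predicts today
valid = predicted_tomorrow.notna()   # the first day has no prediction
accuracy = (predicted_tomorrow[valid] == rain[valid]).mean() * 100
print(f'persistence baseline accuracy: {accuracy:.1f}%')
```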
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
*Note: Some of the cells in this notebook are computationally expensive. To reduce runtime, this notebook is using a subset of the data.*
# Case Study: Sentiment Analysis
### Data Prep
```
import pandas as pd
import numpy as np
# Read in the data
df = pd.read_csv('Amazon_Unlocked_Mobile.csv')
# Sample the data to speed up computation
# Comment out this line to match with lecture
df = df.sample(frac=0.1, random_state=10)
df.head()
# Drop missing values
df.dropna(inplace=True)
# Remove any 'neutral' ratings equal to 3
df = df[df['Rating'] != 3]
# Encode 4s and 5s as 1 (rated positively)
# Encode 1s and 2s as 0 (rated poorly)
df['Positively Rated'] = np.where(df['Rating'] > 3, 1, 0)
df.head(10)
# Most ratings are positive
df['Positively Rated'].mean()
from sklearn.model_selection import train_test_split
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(df['Reviews'],
df['Positively Rated'],
random_state=0)
print('X_train first entry:\n\n', X_train.iloc[0])
print('\n\nX_train shape: ', X_train.shape)
```
# CountVectorizer
```
from sklearn.feature_extraction.text import CountVectorizer
# Fit the CountVectorizer to the training data
vect = CountVectorizer().fit(X_train)
vect.get_feature_names()[::2000]
len(vect.get_feature_names())
# transform the documents in the training data to a document-term matrix
X_train_vectorized = vect.transform(X_train)
X_train_vectorized
from sklearn.linear_model import LogisticRegression
# Train the model
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
from sklearn.metrics import roc_auc_score
# Predict the transformed test documents
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
# get the feature names as numpy array
feature_names = np.array(vect.get_feature_names())
# Sort the coefficients from the model
sorted_coef_index = model.coef_[0].argsort()
# Find the 10 smallest and 10 largest coefficients
# The 10 largest coefficients are being indexed using [:-11:-1]
# so the list returned is in order of largest to smallest
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
```
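The `argsort` slicing used above is easy to sanity-check on a toy array; the `[:-11:-1]` slice generalizes the `[:-3:-1]` shown here:

```python
import numpy as np

coefs = np.array([0.5, -2.0, 1.5, 0.1, -0.7])
order = coefs.argsort()   # indices from smallest to largest coefficient
print(order[:2])          # [1 4] -> the two most negative coefficients
print(order[:-3:-1])      # [2 0] -> the two most positive, largest first
```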
# Tfidf
```
from sklearn.feature_extraction.text import TfidfVectorizer
# Fit the TfidfVectorizer to the training data specifying a minimum document frequency of 5
vect = TfidfVectorizer(min_df=5).fit(X_train)
len(vect.get_feature_names())
X_train_vectorized = vect.transform(X_train)
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
feature_names = np.array(vect.get_feature_names())
sorted_tfidf_index = X_train_vectorized.max(0).toarray()[0].argsort()
print('Smallest tfidf:\n{}\n'.format(feature_names[sorted_tfidf_index[:10]]))
print('Largest tfidf: \n{}'.format(feature_names[sorted_tfidf_index[:-11:-1]]))
sorted_coef_index = model.coef_[0].argsort()
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
# These reviews are treated the same by our current model
print(model.predict(vect.transform(['not an issue, phone is working',
'an issue, phone is not working'])))
```
# n-grams
```
# Fit the CountVectorizer to the training data specifying a minimum
# document frequency of 5 and extracting 1-grams and 2-grams
vect = CountVectorizer(min_df=5, ngram_range=(1,2)).fit(X_train)
X_train_vectorized = vect.transform(X_train)
len(vect.get_feature_names())
model = LogisticRegression()
model.fit(X_train_vectorized, y_train)
predictions = model.predict(vect.transform(X_test))
print('AUC: ', roc_auc_score(y_test, predictions))
feature_names = np.array(vect.get_feature_names())
sorted_coef_index = model.coef_[0].argsort()
print('Smallest Coefs:\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Largest Coefs: \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
# These reviews are now correctly identified
print(model.predict(vect.transform(['not an issue, phone is working',
'an issue, phone is not working'])))
```
<b>Detection of Sargassum on the coast and coastal waters</b>
Notebook for classifying and analyzing Sargassum in Bonaire with Sentinel-2 images
* Decision Tree Classifier (DTC) and Maximum Likelihood Classifier (MLC) are employed
* Training sites covering 8 different classes are used to extract pixel values (training samples) over all Sentinel-2 bands
* 12 Sentinel bands and 8 spectral indices evaluated using Jeffries-Matusita distance (selected: NDVI, REP, B05 and B11)
* 80:20 train-test ratio for splitting the training samples
* K-Fold cross-validation performed for tuning the DTC model
* MLC model developed with 4 different chi-square thresholds: 0% (base), 10%,20%,50%
```
import os
import re
import pandas as pd
import numpy as np
import rasterio as rio
from rasterio import Affine
from rasterio.mask import mask
import matplotlib.pyplot as plt
import seaborn as sns
from glob import glob
import geopandas as gpd
from joblib import dump,load
from rasterstats import zonal_stats
from tqdm import tqdm,tqdm_notebook
#custom functions
from Python.prep_raster import stack_bands,clip_raster,pixel_sample,computeIndexStack
from Python.data_treat import balance_sample,down_sample
from Python.spec_analysis import transpose_df,jmd2df
from Python.data_viz import specsign_plot,jmd_heatmap,ridgePlot,validation_curve_plot
from Python.mlc import mlClassifier
from Python.calc_acc import calc_acc
from Python.pred_raster import stack2pred, dtc_pred_stack
from Python.misc import get_feat_layer_order
#sklearn functions
from sklearn.model_selection import train_test_split,validation_curve
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
#setup IO directories
parent_dir = os.path.join(os.path.abspath('..'),'objective1') #change according to preference
sub_dirs = ['fullstack','clippedstack','indexstack','predicted','stack2pred']
make_dirs = [os.makedirs(os.path.join(parent_dir,name),exist_ok=True) for name in sub_dirs]
```
<b>Sentinel-2 data preparation</b>
* Resample coarse bands to 10m resolution
* Stack multiband images
* Calculate spectral indices
```
#dates considered for classification and analysis
dates = [20180304,20180309,20180314,20180319,20190108,20190128,20190212,20190304,20190309,
20190314,20190319,20190508,20190513,20190518,20190523,20190821,20191129]
#band names
bands = ['B01_60m','B02_10m','B03_10m','B04_10m','B05_20m','B06_20m',
'B07_20m','B08_10m','B8A_20m','B09_60m','B11_20m','B12_20m']
#get product file paths according to dates and tile ID T19PEP (covers Bonaire)
level2_dir = '...' #change according to preference
level2_files = glob(level2_dir+"/*.SAFE")
scene_paths=[file for date in dates for file in level2_files if str(date) in file and 'T19PEP' in file]
#sort multiband image paths according to date
image_collection ={}
for scene in scene_paths:
    date = re.findall(r"(\d{8})T", scene)[0]
    #collect all .jp2 band images in the SAFE directory
    all_images = [f for f in glob(scene + "*/**/*.jp2", recursive=True)]
    img_paths = [img_path for band in bands for img_path in all_images if band in img_path]
    image_collection[date] = img_paths
#check nr. of images per date
for key in image_collection.keys():
    print(f'Date: {key} Images: {len(image_collection[key])}')
#stack multiband images to a geotiff (!computationaly intensive)
for date in tqdm(image_collection.keys(),position=0, leave=True):
    ref10m = image_collection[date][1] #use band B02 (10m) as reference metadata
    outfile = os.path.join(parent_dir,'fullstack',f'stack_{date}.tif')
    stack_bands(image_collection[date],ref10m,outfile)
#crop multiband image stack and compute spectral indices
roi_file = './data/boundaries/coastline_lacbay.geojson' #polygon for cropping image
indices = ['NDVI','REP','FAI','GNDVI','NDVI_B8A','VB_FAH','SEI','SABI'] #list of indices used in the study
stack_files = glob(parent_dir + "/fullstack/*.tif")
for stack_file in tqdm(stack_files,position=0, leave=True):
    filename = os.path.basename(stack_file).split('.')[0]
    #cropping
    clip_outfile = os.path.join(parent_dir,'clippedstack',filename+"_clipped.tif")
    clip_raster(stack_file,roi_file,clip_outfile,fill=True,nodat=0)
    #compute spectral indices
    index_outfile = os.path.join(parent_dir,'indexstack',filename+"_index.tif")
    computeIndexStack(clip_outfile,indices,index_outfile)
```
<b>Sample pixel values from multiband images based on training sites</b>
* Training scenes from 4,9,14 and 19 March 2019
```
#get training sites and corresponding images
train_sites = [f for f in glob(r".\data\training_input\objective1\*_coast.geojson")]
dates = [20190304,20190309,20190314,20190319]
stack_bands = [f for date in dates for f in glob(parent_dir+'/clipped*/*_clipped.tif') if str(date) in f]
index_bands = [f for date in dates for f in glob(parent_dir+'/index*/*_index.tif') if str(date) in f]
#bands and indices to be sampled
band_names = ['B01','B02','B03','B04','B05','B06','B07','B08','B8A','B09','B11','B12']
indices = ['NDVI','REP','FAI','GNDVI','NDVI-B8A','VB-FAH','SEI','SABI']
dataset = []
for i in range(len(train_sites)):
    #sample multiband images and spectral indices
    df_bands = pixel_sample(stack_bands[i],train_sites[i],band_names)
    df_indices = pixel_sample(index_bands[i],train_sites[i],indices)
    df_sample = pd.concat([df_bands,df_indices],axis=1)
    df_sample = df_sample.loc[:,~df_sample.columns.duplicated()]
    #downsample based on floating Sargassum (Sf)
    df_downsampled = down_sample(df_sample,'C','Sf')
    dataset.append(df_downsampled)
#final dataset
dataset=pd.concat(dataset,sort=False).reset_index(drop=True)
dataset.to_csv(r'./data/training_input/csv/training_samples_20190304_20190319_sargassum.csv',index=False)
```
<b>Explore spectral signatures</b>
* Jeffries-Matusita distance (JMD) used for feature selection ([reference](https://books.google.nl/books?id=RxHbb3enITYC&pg=PA52&lpg=PA52&dq=for+one+feature+and+two+classes+the+Bhattacharyya+distance+is+given+by&source=bl&ots=sTKLGl1POo&sig=ACfU3U2s7tv0LT9vfSUat98l4L9_dyUgeg&hl=nl&sa=X&ved=2ahUKEwiKgeHYwI7lAhWIIlAKHZfJAC0Q6AEwBnoECAkQAQ#v=onepage&q&f=false))
* NDVI, REP, B05 and B11 are selected as input features for the classifiers
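For reference, for a single feature and two classes under a Gaussian assumption, the Jeffries-Matusita distance is JMD = 2(1 - exp(-B)), where B is the Bhattacharyya distance. A standalone sketch of that formula on synthetic samples (the general textbook form, not necessarily the exact implementation inside `jmd2df`):

```python
import numpy as np

def jmd(x1, x2):
    """Jeffries-Matusita distance between two 1-D class samples (Gaussian assumption)."""
    m1, m2 = np.mean(x1), np.mean(x2)
    v1, v2 = np.var(x1), np.var(x2)
    v = (v1 + v2) / 2
    # Bhattacharyya distance for two univariate normals
    b = 0.125 * (m1 - m2) ** 2 / v + 0.5 * np.log(v / np.sqrt(v1 * v2))
    return 2 * (1 - np.exp(-b))  # 0 = identical classes, 2 = fully separable

rng = np.random.default_rng(0)
well_separated = jmd(rng.normal(0, 1, 500), rng.normal(6, 1, 500))
overlapping = jmd(rng.normal(0, 1, 500), rng.normal(0.2, 1, 500))
print(round(well_separated, 3), round(overlapping, 3))  # near 2 vs near 0
```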
```
#load training sample
df = pd.read_csv('./data/training_input/csv/training_samples_20190304_20190319_sargassum.csv')
#plot spectral signature focused on 4 subclasses
specsign_plot(df,df.columns[4:16],classtype='C')
#plot JMD heatmap for each band
jmd_bands = [jmd2df(transpose_df(df,'C',band)) for band in df.columns[4:16]]
jmd_heatmap(jmd_bands)
#plot JMD heatmap for each spectral index
jmd_indices = [jmd2df(transpose_df(df,'C',band)) for band in df.columns[16:]]
jmd_heatmap(jmd_indices)
#plot distribution of selected input features
sns.set_style('white')
ridgePlot(df[['C','NDVI','REP','B05','B11']],'C')
```
<b>Build classifiers</b>
```
#load training sample
df = pd.read_csv('./data/training_input/csv/training_samples_20190304_20190319_sargassum.csv')
predictors = ['NDVI','REP','B05','B11']
subset_df = df[['C']+predictors]
#split into train and test datasets 80:20
train,test = train_test_split(subset_df, train_size = 0.8,random_state=1,shuffle=True,stratify=np.array(subset_df['C']))
train = train.sort_values(by='C',ascending=True) #sort labels
#split predictors from labels (for DTC)
le = LabelEncoder()
X_train,y_train = train[predictors],le.fit_transform(train['C'])
X_test,y_test = test[predictors],le.fit_transform(test['C'])
```
* Decision Tree Classifier
```
#perform k-fold (=10) cross-validation
#parameters considered in this step
max_depth = np.arange(1,40,2)
min_samples_split = list(range(2, 100,10))
max_leaf_nodes= list(range(2, 50,5))
min_samples_leaf= list(range(1, 100,10))
min_impurity_decrease=[0,0.00005,0.0001,0.0002,0.0005,0.001,0.0015,0.002,0.005,0.01,0.02,0.05,0.08]
criterion = ['gini','entropy']
#assign parameters to a dictionary
params = {'max_depth':max_depth,'min_samples_split':min_samples_split,
'max_leaf_nodes':max_leaf_nodes,'min_samples_leaf':min_samples_leaf,
'min_impurity_decrease':min_impurity_decrease,'criterion':criterion}
#plot validation curve
fig,axs = plt.subplots(3,2,figsize=(10,8))
axs = axs.ravel()
dtc = DecisionTreeClassifier(random_state=1,criterion='entropy') #default model
for (param_name,param_range),i in zip(params.items(),range(len(params.items()))):
    train_scores,test_scores = validation_curve(dtc,X_train.values,y_train,cv=10,scoring='accuracy',
                                                n_jobs=-1,param_range=param_range,param_name=param_name)
    validation_curve_plot(train_scores,test_scores,param_range,param_name,axs[i])
plt.show()
#train dtc model based on best parameters
dtc = DecisionTreeClassifier(max_depth=5,random_state=2,criterion='entropy',min_samples_split=70,
max_leaf_nodes=15,min_samples_leaf=40,min_impurity_decrease=0.01,max_features=4)
dtc = dtc.fit(X_train,y_train)
#export model as joblib file
dump(dtc,r".\data\models\dtc_model_sargassum.joblib")
```
* Maximum Likelihood Classifier
```
#train mlc model
mlc = mlClassifier(train,'C')
#export model as joblib file
dump(mlc,r".\data\models\mlc_model_sargassum.joblib")
```
* Compute model accuracies (based on test split)
```
#load models
dtc = load(r".\data\models\dtc_model_sargassum.joblib")
mlc = load(r".\data\models\mlc_model_sargassum.joblib")
#DTC model accuracy
dtc_y_pred = dtc.predict(X_test)
con_mat_dtc = calc_acc(le.inverse_transform(y_test),le.inverse_transform(dtc_y_pred))
con_mat_dtc['classifier'] = 'DTC'
#MLC model accuracies with chi-square threshold
chi_table = {'MLC base':None,'MLC 10%':7.78,'MLC 20%':5.99,'MLC 50%':3.36}
mlc_conmats = []
for key,value in chi_table.items():
con_mat_mlc = mlc.classify_testdata(test,'C',threshold=value)
con_mat_mlc['classifier'] = key
mlc_conmats.append(con_mat_mlc)
#export model accuracies
mlc_conmats = pd.concat(mlc_conmats)
model_acc = pd.concat([con_mat_dtc,mlc_conmats])
model_acc.to_csv('./data/output/objective1/dtc_mlc_model_acc_obj1.csv')
```
<b>Classification</b>
* create an image stack for prediction (stack2pred) for all scenes in objective1 folder
* classify each stack2pred image with the DTC and MLC models
```
#get all multiband and spectral index images
stack_bands = glob(parent_dir+'/clipped*/*_clipped.tif')
index_bands = glob(parent_dir+'/index*/*_index.tif')
#get the order of the selected predictors in the multiband and spectral index images
predictors = ['NDVI','REP','B05','B11']
used_indices, used_bands = get_feat_layer_order(predictors)
stack2pred_paths = []
#create stack2pred rasters
for band_image,index_image in zip(stack_bands,index_bands):
date = re.findall(r"(\d{8})", band_image)[0]
    outfile = os.path.join(f'{parent_dir}/stack2pred',f'stack2pred_{date}.tif')
stack2pred_paths.append(outfile)
stack2pred(index_image,band_image,used_indices,used_bands,outfile)
#load models
dtc = load(r".\data\models\dtc_model_sargassum.joblib")
mlc = load(r".\data\models\mlc_model_sargassum.joblib")
#stack2pred image paths
stack2pred_paths = glob(parent_dir+'*/stack2pred/stack2pred_*.tif')
#classify all stack2pred images
for path in stack2pred_paths:
date = re.findall(r"(\d{8})", path)[0]
#predict multiple mlc with thresholds
mlc_out = f'{parent_dir}/predicted/mlc/mlc_{date}_multi.tif'
os.makedirs(os.path.dirname(mlc_out),exist_ok=True)
if not os.path.exists(mlc_out):
chi_probs = [None,7.78,5.99,3.36]
mlc_preds = np.array([mlc.classify_raster_gx(path,out_file=None,threshold=prob) for prob in chi_probs])
#export multilayer mlc image
with rio.open(path) as src:
profile = src.profile.copy()
profile.update({'dtype': rio.uint16})
with rio.open(mlc_out ,'w',**profile) as dst:
dst.write(mlc_preds.astype(rio.uint16))
#predict and export DTC raster
dtc_out = f'{parent_dir}/predicted/dtc/dtc_{date}.tif'
os.makedirs(os.path.dirname(dtc_out),exist_ok=True)
if not os.path.exists(dtc_out):
dtc_pred_stack(dtc,path,dtc_out)
```
* MLC class posterior probability raster
```
#stack2pred image paths
stack2pred_paths = glob(parent_dir+'*/stack2pred/stack2pred_*.tif')
#compute probability rasters
for path in stack2pred_paths:
    date = re.findall(r"(\d{8})", path)[0]
    mlc_prob_out = f'{parent_dir}/predicted/mlc/mlc_{date}_prob.tif'
    os.makedirs(os.path.dirname(mlc_prob_out),exist_ok=True)
    mlc.prob_rasters(path,mlc_prob_out)
```
<b>External validity</b>
* Classify DTC and MLC results for a scene taken on 2019-05-18
* Validation samples only cover Non-Floating Sargassum (Non-Sf) and Floating Sargassum (Sf)
* Floating Sargassum (Sf) pixel value = 3 in the DTC and MLC rasters
```
#get file paths
val_samples = gpd.read_file(r'./data/training_input/objective1/sf_validation_20190518.geojson')
dtc_file = glob(parent_dir+'/predicted*/dtc/dtc*20190518*.tif')[0]
mlc_file = glob(parent_dir+'/predicted*/mlc/mlc*20190518*.tif')[0]
coords = [(val_samples.geometry[i][0].x,val_samples.geometry[i][0].y) for i in range(len(val_samples))]
with rio.open(dtc_file) as dtc_src, rio.open(mlc_file) as mlc_src:
#sample from dtc raster
val_samples['DTC'] = [pt[0] for pt in dtc_src.sample(coords)]
#sample from multilayer mlc raster
mlc_multi = pd.concat([pd.DataFrame(pt).T for pt in mlc_src.sample(coords)],ignore_index=True)
val_samples[['MLC base','MLC 10%','MLC 20%','MLC 50%']] = mlc_multi
#convert pixel values to 1 if Sf, else to 0 for others
val_samples[val_samples.columns[-5:]] = (val_samples[val_samples.columns[-5:]]==3).astype(int)
#compute classification (validation) accuracy
df_val = pd.DataFrame(val_samples.drop(columns='geometry'))
acc_val_dfs = []
for pred in df_val.columns[df_val.columns!='label']:
acc = calc_acc(df_val['label'].values, df_val[pred].values)
acc['classifier'] = pred
acc_val_dfs.append(acc)
acc_val_dfs = pd.concat(acc_val_dfs)
acc_val_dfs.to_csv('./data/output/objective1/dtc_mlc_external_val_obj1.csv')
```
* Plot model and validation accuracies
```
model_df = pd.read_csv('./data/output/objective1/dtc_mlc_model_acc_obj1.csv').set_index('Model')
val_df = pd.read_csv('./data/output/objective1/dtc_mlc_external_val_obj1.csv').set_index('Observed')
acc2plot = {'Model accuracy (8 classes)':model_df.loc['PA','UA'].str[:4].astype(float),
'Model F1-score (Sf)':model_df.loc['Sf','F1-score'].astype(float),
'Validation accuracy (2 classes)':val_df.loc['PA','UA'].str[:4].astype(float),
'Validation F1-score (Sf)':val_df.loc['1','F1-score'].astype(float)}
[plt.plot(val_df['classifier'].unique(),value,label=key) for key,value in acc2plot.items()]
plt.legend()
```
<b>Comparative analysis</b>
* Compare Sargassum (Sf and Sl) classified area across different scenes for each model
* Persistent misclassification occurs between the two Sargassum classes and other coastal features, hence a mask was applied.
```
#get classification result paths
dtc_paths = glob(parent_dir+'/predicted*/dtc/dtc*.tif')
mlc_paths = glob(parent_dir+'/predicted*/mlc/mlc*.tif')
#load mask
sl_mask = [gpd.read_file('./data/boundaries/sf_sl_mask.geojson').__geo_interface__['features'][0]['geometry']]
sf_mask = [gpd.read_file('./data/boundaries/sf_sl_mask.geojson').__geo_interface__['features'][1]['geometry']]
#collection of Sargassum classification results
data = dict.fromkeys(['Date','Sl MLC Base','Sl MLC 10%','Sl MLC 20%','Sl MLC 50%','Sl DTC',
'Sf MLC Base','Sf MLC 10%','Sf MLC 20%','Sf MLC 50%','Sf DTC'], [])
for i in range(len(mlc_paths)):
date = re.findall(r"(\d{8})", mlc_paths[i])
data['Date'] = data['Date']+ [str(pd.to_datetime(date)[0].date())]
with rio.open(dtc_paths[i]) as dtc_src, rio.open(mlc_paths[i]) as mlc_src:
#sf pixel count
dtc_img= mask(dataset=dtc_src,shapes=sf_mask,nodata=dtc_src.nodata,invert=True)[0]
data['Sf DTC'] = data['Sf DTC']+[np.unique(dtc_img, return_counts=True)[1][2]]
mlc_imgs= mask(dataset=mlc_src,shapes=sf_mask,nodata=mlc_src.nodata,invert=True)[0]
for k,sf_mlc_key in enumerate(list(data.keys())[6:-1]):
data[sf_mlc_key] = data[sf_mlc_key]+ [[np.unique(mlc_img, return_counts=True)[1][2] for mlc_img in mlc_imgs][k]]
#sl pixel count
dtc_img= mask(dataset=dtc_src,shapes=sl_mask,nodata=dtc_src.nodata,invert=False)[0]
data['Sl DTC'] = data['Sl DTC']+[np.unique(dtc_img, return_counts=True)[1][3]]
mlc_imgs= mask(dataset=mlc_src,shapes=sl_mask,nodata=mlc_src.nodata,invert=False)[0]
for j,sl_mlc_key in enumerate(list(data.keys())[1:5]):
data[sl_mlc_key] = data[sl_mlc_key]+[[np.unique(mlc_img, return_counts=True)[1][3] for mlc_img in mlc_imgs][j]]
#export data
data = pd.DataFrame(data)
data.to_csv('./data/output/objective1/classified_area_obj1.csv',index=False)
```
* Plot Sargassum classified area in 2019
```
#load data and subset only the 2019 results
data = pd.read_csv('./data/output/objective1/classified_area_obj1.csv',index_col='Date')[4:]
#plot Floating Sargassum (Sf) and Sargassum on land (Sl)
fig,axs = plt.subplots(1,2,figsize=(20,8))
axs[0].set_ylabel('Classified area (ha)')
plt.tight_layout()
fig.autofmt_xdate()
plots = [axs[0].plot(data[col]/100) if 'Sf' in col else axs[1].plot(data[col]/100) for col in data.columns]
legends = axs[0].legend(data.columns[:5],loc='upper right'),axs[1].legend(data.columns[5:],loc='upper right')
```
<b>Sargassum coverage maps</b>
* Compute Sargassum coverage maps for the invasions in March and May 2019 and March 2018
* A 20mx20m grid was used to calculate the coverage for each scene
* MLC 20% results were used for Floating Sargassum (Sf) coverage map
* MLC 50% results were used for Sargassum on land (Sl) coverage map
* Note that the code below takes about 10 minutes to run (due to the small grid tile size)
```
#get classification result paths
mlc_paths = glob(parent_dir+'/predicted*/mlc/mlc*03*.tif')+glob(parent_dir+'/predicted*/mlc/mlc*05*.tif')
#load mask and grid data
mask_data = gpd.read_file('./data/boundaries/objective1/sf_sl_mask.geojson').__geo_interface__['features']
grid_file = gpd.read_file(r'./data/boundaries/objective1/20mgrid.geojson')
#collect geodataframes
data = []
for mlc_file in mlc_paths:
date = re.findall(r"(\d{8})", mlc_file)[0]
with rio.open(mlc_file) as src:
#iterate according to mask data (first item = sl, second item = sf)
        #count number of pixels in each grid tile (computationally intensive!)
for feat,label,val,inv,model in zip(mask_data,['sl','sf'],[4,3],[False,True],[3,2]):
img = mask(dataset=src,shapes=[feat['geometry']],nodata=src.nodata,invert=inv)[0][model]
zs = zonal_stats(grid_file,np.where(img==val,1,0),affine=src.transform,
prefix=f'{label}_{date}_',stats='count',geojson_out=True,nodata=0)
zs_filter = list(filter(lambda x: x['properties'][f'{label}_{date}_count']!=0, zs))
data.append(gpd.GeoDataFrame.from_features(zs_filter,crs=grid_file.crs))
#merge with grid file based on id
grid_file_copy = grid_file.copy()
for i in range(len(data)):
grid_file_copy = gpd.GeoDataFrame(grid_file_copy.merge(data[i][data[i].columns[1:]],on='id',how='outer'),
crs=grid_file.crs,geometry=grid_file.geometry).replace(np.nan,0)
#calculate coverage for each grid tile
sf_split = np.array_split(grid_file_copy[[i for i in grid_file_copy.columns if 'sf' in i ]],3,axis=1)
sl_split = np.array_split(grid_file_copy[[i for i in grid_file_copy.columns if 'sl' in i ]],3,axis=1)
scale_factor = (100/4/400) #(relative coverage of Sentinel-2 pixels in a 20x20m tile over 4 dates)
sf_covr = [sf_split[i].sum(1)*scale_factor for i in range(len(sf_split))]
sl_covr = [sl_split[i].sum(1)*scale_factor for i in range(len(sl_split))]
#export coverage maps
gdf_out = pd.concat([grid_file_copy[['geometry']]]+sf_covr+sl_covr,axis=1)
gdf_out.columns = ['geometry','sf_mar2018','sf_mar2019','sf_may2019','sl_mar2018','sl_mar2019','sl_may2019']
gdf_out = gdf_out[gdf_out[gdf_out.columns[1:]].sum(1)!=0]
gdf_out.to_file(r'./data/output/objective1/sargassum_coverage_coast.geojson',driver='GeoJSON')
```
```
import pandas as pd
and_data = pd.read_csv('ANDHRA_PD.csv')
and_data.head()
del_data = pd.read_csv('DELHI.csv')
del_data.head()
kar_data = pd.read_csv('KARNATAKA.csv')
kar_data.head()
mah_data = pd.read_csv('MAHARASHTRA.csv')
mah_data.head()
tam_data = pd.read_csv('TAMIL_NADU.csv')
tam_data.head()
utt_data = pd.read_csv('UTTAR_PD.csv')
utt_data.head()
def Label(data):
inc=[]
means = data['Malaria_Incidence'].mean()
stds = data['Malaria_Incidence'].std()
thresh = means + (0.45*stds)
for i in data['Malaria_Incidence']:
if i < thresh:
inc.append(-1)
else:
inc.append(1)
print(inc)
return inc
s_data = [del_data,kar_data,mah_data,tam_data,utt_data,and_data]
for i in s_data:
hh = Label(i)
i['M_inc'] = hh
print(i)
df = pd.concat([and_data,del_data,kar_data,mah_data,tam_data,utt_data]) #pd.concat replaces the deprecated DataFrame.append
df.head(10)
from texttable import Texttable
full_headers = df.columns
values = list(df.isnull().sum())
nullList = []
nullList.append(['Feature','Null Values count'])
for i in range(len(full_headers)):
nullList.append([full_headers[i],values[i]])
table = Texttable()
table.add_rows(nullList)
print(table.draw())
print("\n")
import matplotlib.pyplot as plotting
print("The histograms of the attributes are given below:")
df.hist(bins=5,grid=False,layout=[6,6],figsize=[20,20])
plotting.show()
print("\n")
#k-mean-clustering
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import KMeans
x = df.iloc[:, [1, 2, 3, 4, 5, 6, 11]].values
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
#Plotting the results onto a line graph, allowing us to observe 'The elbow'
plt.plot(range(1, 11), wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') #within cluster sum of squares
plt.show()
kmeans = KMeans(n_clusters = 2, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'blue', label = '1')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'green', label = '-1')
#Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], s = 100, c = 'yellow', label = 'Centroids')
plt.legend()
#K-nn
x = df.iloc[:,1:7]
y = df.iloc[:,-1]
#print(x,y)
import numpy as np
predictors = ['AQI','CO','Temp','Humidity','Precipitation','Population','Rainfall']
crux = "M_inc"
X = df.loc[:,predictors]
Y = np.ravel(df.loc[:,[crux]])
print(Y)
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, cross_val_predict
rf = RandomForestClassifier()
sc = cross_val_score(rf, X, Y, scoring='accuracy').mean()
print("Benchmark-> Accuracy before Norm and PCA:- %s"%round(sc*100,2))
import seaborn as sns
sns.swarmplot(y='Rainfall',x='M_inc', data=df)
plotting.show()
print("\n")
sns.swarmplot(y='Temp',x='M_inc', data=df)
plotting.show()
print("\n")
sns.swarmplot(y='AQI',x='M_inc', data=df)
plotting.show()
print("\n")
import seaborn as sns
b = df
b_corr = b.drop(['Year','Malaria_Cases','Malaria_Death','Dengue_Cases','Kalaazar_Cases','Malaria_Incidence','M_inc'],axis=1)
correlation = b_corr.corr()
heatmap = sns.heatmap(correlation, cbar=True, annot=True, cmap="bwr", linewidths=.75)
heatmap.set_title("Correlation heatmap\n")
plotting.show()
print(df)
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
# define dataset
y = df.drop(columns=['Year','Malaria_Cases','Malaria_Death','Dengue_Cases','Kalaazar_Cases','Malaria_Incidence', 'AQI','CO','Temp','Humidity','Precipitation','Rainfall','Population'],axis=1)
# = x1.values.reshape(1,-1)
x = df.drop(columns=['Year','Malaria_Cases','Malaria_Death','Dengue_Cases','Kalaazar_Cases','Malaria_Incidence','M_inc'],axis=1)
print(x)
print(y)
# define RFE
rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=3)
# fit RFE
rfe.fit(x, y)
# summarize all features
for i in range(x.shape[1]):
print('Column: %d, Selected %s, Rank: %.3f' % (i, rfe.support_[i], rfe.ranking_[i]))
#Temp, Humidity and Rainfall take the top places
label = df['M_inc']
df = df.drop(columns=['Year','Malaria_Cases','Malaria_Death','Dengue_Cases','Kalaazar_Cases','Malaria_Incidence','M_inc'],axis=1)
headers = df.columns
minimum = list(map(lambda x: round(x,4),df.min()))
mean = list(map(lambda x: round(x,4),df.mean()))
maximum = list(map(lambda x: round(x,4),df.max()))
std =list(map(lambda x: round(x,4),df.std()))
before_scaling=[]
before_scaling.append(['Feature','Min','Mean','Max','Std. Dev'])
before_scaling.append(['M_inci',label.min(),label.mean(),label.max(),label.std()])
for i in range(len(headers)):
before_scaling.append([headers[i],minimum[i],mean[i],maximum[i],std[i]])
print("\nBEFORE FEATURE SCALING")
table1 = Texttable()
table1.add_rows(before_scaling)
print(table1.draw())
print("\n")
df.shape
from sklearn import preprocessing
df = pd.DataFrame(preprocessing.scale(df.iloc[:,0:7]))
minimum = list(map(lambda x: round(x,4),df.min()))
mean = list(map(lambda x: round(x,4),df.mean()))
maximum = list(map(lambda x: round(x,4),df.max()))
std =list(map(lambda x: round(x,4),df.std()))
after_scaling=[]
after_scaling.append(['Feature','Min','Mean','Max','Std. Dev'])
after_scaling.append(['M_inci',label.min(),label.mean(),label.max(),label.std()])
for i in range(len(headers)):
after_scaling.append([headers[i],minimum[i],mean[i],maximum[i],std[i]])
print("\nAFTER FEATURE SCALING")
table2 = Texttable()
table2.add_rows(after_scaling)
print(table2.draw())
print("\n")
from sklearn.decomposition import PCA
pca = PCA(n_components=len(df.columns))
pca.fit_transform(df)
components = abs(pca.components_)
eigen_values = pca.explained_variance_
ratio_values = pca.explained_variance_ratio_
plotting.ylabel("Eigen values")
plotting.xlabel("Number of features")
plotting.title("PCA eigen values")
plotting.ylim(0, max(eigen_values))
plotting.xticks([1,2,3,4,5,6,7,8,9,10,15,20,25,30])
plotting.style.context('seaborn-whitegrid')
plotting.axhline(y=1,color='r',linestyle='--')
plotting.plot(eigen_values)
plotting.figure(figsize=(500,500))
plotting.show()
print("\n")
tableList=[]
tableList.append(["NC","SP","EV","CEV"])
for i in range(len(eigen_values)):
total=0
for j in range(i+1):
total+=ratio_values[j]
tableList.append([i+1,round(eigen_values[i],2),round(ratio_values[i],2),round(total*100,2)])
print("\nPCA Table")
table3 = Texttable()
table3.add_rows(tableList)
print(table3.draw())
print("\n")
pca_new = PCA(n_components=3)
df = pca_new.fit_transform(df)
X = pd.DataFrame(df)
Y = pd.DataFrame(label)
print(df.shape)
print(X.shape)
print(Y.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0)
print('Original Data Set',df.shape)
print('Shape of X training set : ',X_train.shape,' || Shape of test set : ',X_test.shape)
print('Shape of Y training set : ',Y_train.shape,' || Shape of test set : ',Y_test.shape)
table_report=[]
table_report.append(["Model","Acc","Prec","Recall","F1"])
global count_lis
count_lis = 1
#PRINT FUNCTION
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
def truncate(f, n):
return np.floor(f * 10 ** n) / 10 ** n
def evaluate(sco, model, X_train, X_test, y_train, y_test):
global count_lis
y_test_pred = model.predict(X_test)
y_train_pred = model.predict(X_train)
clf_report = pd.DataFrame(classification_report(y_train, y_train_pred, output_dict=True))
acc = round(sco*100,2)
#print(f"Accuracy %s" % round(accuracy_score(y_train, y_train_pred)*100,2))
lisp =[]
lisp = truncate(clf_report.mean(axis = 1).astype(float),2)
table_report.append([count_lis,acc,lisp['precision'],lisp['recall'],lisp['f1-score']])
count_lis = count_lis+1
import warnings
warnings.filterwarnings("ignore")
#PERCEPTRON
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.linear_model import Perceptron
pct_clf = Perceptron()
sc = cross_val_score(pct_clf, X_train, Y_train, scoring='accuracy' ,cv=10).mean()
pct_clf.fit(X_train, Y_train)
evaluate(sc, pct_clf, X_train, X_test, Y_train, Y_test)
scores = {
'Perceptron': {
'Train': accuracy_score(Y_train, pct_clf.predict(X_train)),
'Test': accuracy_score(Y_test, pct_clf.predict(X_test)),
},
}
print("Perceptron Accuracy :- %s" % round(sc*100,2))
print(pct_clf.predict(X_test))
#XGBOOST
from xgboost import XGBClassifier
xgb_clf = XGBClassifier(eval_metric='mlogloss')
sc = cross_val_score(xgb_clf, X_train, Y_train, scoring='accuracy', cv=10).mean()
xgb_clf.fit(X_train, Y_train)
evaluate(sc, xgb_clf, X_train, X_test, Y_train, Y_test)
scores['xgboost'] = {
'Train': accuracy_score(Y_train, xgb_clf.predict(X_train)),
'Test': accuracy_score(Y_test, xgb_clf.predict(X_test)),
}
print("XGBoost Accuracy :- %s" % round((sc)*100,2))
print(xgb_clf.predict(X_test))
#KNN
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier()
sc = cross_val_score(knn_clf, X_train, Y_train, scoring='accuracy' ,cv=10).mean()
knn_clf.fit(X_train, Y_train)
evaluate(sc, knn_clf, X_train, X_test, Y_train, Y_test)
scores['KNN'] = {
'Train': accuracy_score(Y_train, knn_clf.predict(X_train)),
'Test': accuracy_score(Y_test, knn_clf.predict(X_test)),
}
print("KNN Accuracy after Norm and PCA :- %s" % round(sc*100,2))
print(knn_clf.predict(X_test))
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
estimators = []
log_flag = LogisticRegression()
estimators.append(('Logistic', log_flag))
pct_flag = Perceptron()
estimators.append(('Percept', pct_flag))
xgb_flag = XGBClassifier(eval_metric='mlogloss')
estimators.append(('XGBboost', xgb_flag))
knn_flag = KNeighborsClassifier()
estimators.append(('KNN', knn_flag))
#By default it's hard voting
voting = VotingClassifier(estimators=estimators)
voting.fit(X_train, Y_train)
#acc = round(accuracy_score(Y_train, Y_train_pred)*100,2)
acc = round(cross_val_score(voting, X_train, Y_train ,scoring='accuracy',cv=10).mean()*100,2)
global count_lis
Y_test_pred = voting.predict(X_test)
Y_train_pred = voting.predict(X_train)
clf_report = pd.DataFrame(classification_report(Y_train, Y_train_pred, output_dict=True))
lisp =[]
lisp = truncate(clf_report.mean(axis = 1).astype(float),2)
table_report.append([count_lis,acc,lisp['precision'],lisp['recall'],lisp['f1-score']])
scores['Voting'] = {
'Train': accuracy_score(Y_train, voting.predict(X_train)),
'Test': accuracy_score(Y_test, voting.predict(X_test)),
}
print("\nPREDICTION RESULTS")
print(" 1. PERCEPTRO\n 2. XGBOOST\n 3. KNN (K-NEAREST NEIGHBOR)\n 4. ENSEMBLE VOTING CLASSIFIER")
table4 = Texttable()
table4.add_rows(table_report)
print(table4.draw())
scores_df = pd.DataFrame(scores)
scoresList=[]
scoresList.append(["Model","Train Acc","Test Acc","Diff Accuracy"])
for i in scores_df:
li = list(scores_df[i])
scoresList.append([i,round(li[0],4),round(li[1],4),"{:.2f}%".format(round(li[1]/li[0],4)*100)])
table5 = Texttable()
table5.add_rows(scoresList)
print(table5.draw())
scores_df.plot(kind='bar', figsize=(15, 8))
plotting.show()
```
# Anisha Parikh
## Research question/interests
My research question is: what are the top 10 most remembered songs and the bottom 10 least remembered songs, and how does recollection of the songs compare across generations?
Imports
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import project_functions2 as pf
from pandas_profiling import ProfileReport
df = pd.read_csv("../data/raw/recognition_by_generation.csv")
df
df.describe().T
ProfileReport(df).to_notebook_iframe()
```
## Milestone 3
Data is trimmed to each generation, plus the average of the generations, into three different datasets. Each is then reordered from least recognized to most recognized (top to bottom).
Steps:
1. remove table
2. create new column of the average between generations
3. drop irrelevant columns based on which data sheet
4. sort the values according to recognition of the column left
5. reindex values
6. create new sheets in different orders
7. create graphs which indicate how many songs were remembered at each recognition level
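The `load_and_process` helper is defined in `project_functions2`, so its internals are not shown in this notebook. As a rough illustration of the steps above, it could look like the sketch below (the column names `Song`, `Millennial`, and `GenZ` are assumptions, and the real implementation may differ):

```python
import pandas as pd

def load_and_process_sketch(path):
    # Hypothetical sketch only; the real function lives in project_functions2
    # and the column names here are assumptions.
    df = pd.read_csv(path)
    # step 2: new column holding the average recognition across generations
    df["Avg"] = df[["Millennial", "GenZ"]].mean(axis=1)

    def one_sheet(col):
        # steps 3-5: keep one recognition column, sort it (ascending here,
        # i.e. least recognized at the top; the actual order may differ),
        # and reindex
        return df[["Song", col]].sort_values(col).reset_index(drop=True)

    # step 6: three sheets, each in its own order
    return one_sheet("Millennial"), one_sheet("GenZ"), one_sheet("Avg")
```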
```
(Mill, GenZ, Avg) = pf.load_and_process("../data/raw/recognition_by_generation.csv")
#Songs recognized by millennials
Mill
#Songs recognized by generation z
GenZ
#Average of the songs recognized by both generation
Avg
```
# Graphs
## Distribution Plot
```
sns.displot(data = Mill, bins = 15)
sns.displot(data = GenZ, bins = 15)
sns.displot(data = Avg, bins = 15)
```
From the graphs above it can be seen that there is overall low recognition for 90s songs. However, Millennials remember more of the songs. This is also shown in the boxplot below.
```
sns.boxplot(data = df)
```
# Analysis
The charts above, as mentioned, show that millennials recognize more songs from the 90s than Generation Z does. That being said, it can also be seen that when the average of both generations is taken, there is very low recognition of songs in general.
The datasets also show which songs were remembered most and least by each generation.
For millennials, the most remembered songs include:
1. Hit Me Baby One More Time by Britney Spears
2. Believe by Cher
3. Wannabe by Spice Girls
4. All Star by Smash Mouth
5. Mambo No. 5 by Lou Bega
The songs least remembered by that same generation are:
1. Real, Real, Real by Jesus Jones
2. I'd Die without you by PM Dawn
3. This House by Tracie Spencer
4. Hold You Tight by Tara Kemp
5. Love Will Lead Back by Taylor Dane
For Generation Z, the most remembered songs include:
1. My Heart Will Go On by Celine Dion
2. Hit Me Baby One More Time by Britney Spears
3. Wannabe by Spice Girls
4. All Star by Smash Mouth
5. Mambo No. 5 by Lou Bega
The songs least remembered by that same generation are:
1. Kickin' the Boots by H Town
2. Changing Faces by Stroke You Up
3. I'd Die without you by PM Dawn
4. Look into my Eyes by Bones Thugs N Harmony
5. Love Will Lead Back by Taylor Dane
The lists featuring the top songs might imply that songs written and performed by female artists are more successful at gaining recognition.
However, the lists featuring the least recognized songs discredit that claim, as they also contain a majority of female artists. That being said, it could be that the dataset simply contains more female artists, or that the gender identity of the artist does not contribute to their ability to be recognized.
# Hands-on 2: How to create a fMRI analysis workflow
The purpose of this section is for you to set up an fMRI analysis workflow.
# 1st-level Analysis Workflow Structure
In this notebook we will create a workflow that performs 1st-level analysis and normalizes the resulting beta weights to the MNI template. In concrete steps this means:
1. Specify 1st-level model parameters
2. Specify 1st-level contrasts
3. Estimate 1st-level contrasts
4. Normalize 1st-level contrasts
## Imports
It's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
```
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
```
## Create Nodes and Workflow connections
Let's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you later on need to connect in your pipeline.
### Specify 1st-level model parameters (stimuli onsets, duration, etc.)
To specify the 1st-level model we need the subject-specific onset times and durations of the stimuli. Luckily, as we are working with a BIDS dataset, this information is nicely stored in a `tsv` file:
```
import pandas as pd
# Create the workflow here
analysis1st = Workflow(name='work_1st', base_dir='/output/')
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
import pandas as pd
from nipype.interfaces.base import Bunch
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
    onsets.append(list(group[1].onset -10)) # subtracting 10 s to account for the removal of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
)]
from nipype.algorithms.modelgen import SpecifySPMModel
# Initiate the SpecifySPMModel node here
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=2.5,
high_pass_filter_cutoff=128,
subject_info=subject_info),
name="modelspec")
```
This node will also need some additional inputs, such as the preprocessed functional images, the motion parameters etc. We will specify those once we take care of the workflow data input stream.
### Specify 1st-level contrasts
To do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:
- **finger**
- **foot**
- **lips**
Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
```
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger > others','T', condition_names, [1, -0.5, -0.5]]
cont06 = ['Foot > others', 'T', condition_names, [-0.5, 1, -0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
```
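As a quick sanity check on the contrast definitions above (a sketch, not part of the original pipeline), the weights of the average contrast should sum to one, while each differential T-contrast should sum to zero so that it is insensitive to the overall mean signal:

```python
# Weight vectors copied from the contrasts defined above.
average = [1/3., 1/3., 1/3.]                 # 'average'
differential = [[1, -0.5, -0.5],             # 'Finger > others'
                [-0.5, 1, -0.5],             # 'Foot > others'
                [-0.5, -0.5, 1]]             # 'Lips > others'

# The average contrast estimates the mean activation across conditions.
assert abs(sum(average) - 1.0) < 1e-9
# Differential contrasts are balanced: adding a constant offset to all
# three condition estimates does not change the contrast value.
for weights in differential:
    assert abs(sum(weights)) < 1e-9
```

A balanced (zero-sum) contrast is what makes a "condition > others" comparison interpretable independently of the baseline level.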
### Estimate 1st-level contrasts
Before we can estimate the 1st-level contrasts, we first need to create the 1st-level design. Here you can also specify what kind of basis function you want (HRF, FIR, Fourier, etc.), if you want to use time and dispersion derivatives and how you want to model the serial correlation.
In this example I propose that you use an HRF basis function, that we model time derivatives and that we model the serial correlation with AR(1).
```
from nipype.interfaces.spm import Level1Design
# Initiate the Level1Design node here
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=2.5,
model_serial_correlations='AR(1)'),
name="level1design")
# Now that we have the Model Specification and 1st-Level Design node, we can connect them to each other:
# Connect the two nodes here
analysis1st.connect([(modelspec, level1design, [('session_info',
'session_info')])])
# Now we need to estimate the model. I recommend that you use the Classical: 1 method to estimate the model.
from nipype.interfaces.spm import EstimateModel
# Initiate the EstimateModel node here
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
# Now we can connect the 1st-Level Design node with the model estimation node.
# Connect the two nodes here
analysis1st.connect([(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')])])
from nipype.interfaces.spm import EstimateContrast
# Initiate the EstimateContrast node here
level1conest = Node(EstimateContrast(contrasts=contrast_list),
name="level1conest")
# Now we can connect the model estimation node with the contrast estimation node.
analysis1st.connect([(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])])
```
## Normalize 1st-level contrasts
Now that the contrasts were estimated in subject space we can put them into a common reference space by normalizing them to a specific template. In this case we will be using SPM12's Normalize routine and normalize to the SPM12 tissue probability map `TPM.nii`.
At this step you can also specify the voxel resolution of the output volumes. If you don't specify it, it will normalize to a voxel resolution of 2x2x2mm.
```
from nipype.interfaces.spm import Normalize12
# Location of the template
template = '/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Initiate the Normalize12 node here
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[2, 2, 2]
),
name="normalize")
# Connect the nodes here
analysis1st.connect([(level1conest, normalize, [('spmT_images',
'apply_to_files')])
])
```
## Datainput with `SelectFiles` and `iterables`
```
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': '/data/ds000114/sub-{subj_id}/ses-test/anat/sub-{subj_id}_ses-test_T1w.nii.gz',
'func': '/output/datasink_handson/preproc/sub-{subj_id}_detrend.nii.gz',
'mc_param': '/output/datasink_handson/preproc/sub-{subj_id}.par',
'outliers': '/output/datasink_handson/preproc/art.sub-{subj_id}_outliers.txt'
}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
# Now we can specify over which subjects the workflow should iterate.
# list of subject identifiers
subject_list = ['07']
sf.iterables = [('subj_id', subject_list)]
# Gunzip Node
from nipype.algorithms.misc import Gunzip
# Initiate the two Gunzip node here
gunzip_anat = Node(Gunzip(), name='gunzip_anat')
gunzip_func = Node(Gunzip(), name='gunzip_func')
# And as a final step, we just need to connect this SelectFiles node to the rest of the workflow.
# Connect SelectFiles node to the other nodes here
analysis1st.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')]),
(gunzip_anat, normalize, [('out_file', 'image_to_align')]),
(gunzip_func, modelspec, [('out_file', 'functional_runs')]),
(sf, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files'),
])
])
#Data output with DataSink
#Now, before we run the workflow, let's again specify a Datasink folder to only keep those files that we want to keep.
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_subj_id_', 'sub-')]
datasink.inputs.substitutions = substitutions
# Connect nodes to datasink here
analysis1st.connect([(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('spmF_images', '1stLevel.@F'),
]),
(normalize, datasink, [('normalized_files', 'normalized.@files'),
('normalized_image', 'normalized.@image'),
]),
])
```
## Visualize the workflow
Now that the workflow is finished, let's visualize it again.
```
# Create 1st-level analysis output graph
analysis1st.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_1st/graph.png')
```
## Run the Workflow
Now that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
```
analysis1st.run('MultiProc', plugin_args={'n_procs': 4})
```
## Visualize results
### First, let's look at the 1st-level Design Matrix of subject one, to verify that everything is as it should be.
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.io import loadmat
# Using scipy's loadmat function we can access SPM.mat
spmmat = loadmat('/output/datasink_handson/1stLevel/sub-07/SPM.mat',
struct_as_record=False)
designMatrix = spmmat['SPM'][0][0].xX[0][0].X
names = [i[0] for i in spmmat['SPM'][0][0].xX[0][0].name[0]]
normed_design = designMatrix / np.abs(designMatrix).max(axis=0)
fig, ax = plt.subplots(figsize=(8, 8))
plt.imshow(normed_design, aspect='auto', cmap='gray', interpolation='none')
ax.set_ylabel('Volume id')
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names, rotation=90);
```
### Let's look how well the normalization worked.
```
import nibabel as nb
from nilearn.plotting import plot_anat
from nilearn.plotting import plot_glass_brain
# Load GM probability map of TPM.nii
img = nb.load('/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii')
GM_template = nb.Nifti1Image(img.get_fdata()[..., 0], img.affine, img.header)  # get_data() is deprecated/removed in newer nibabel
# Plot normalized subject anatomy
display = plot_anat('/output/datasink_handson/normalized/sub-07/wsub-07_ses-test_T1w.nii',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
# Plot raw subject anatomy
display = plot_anat('/data/ds000114/sub-07/ses-test/anat/sub-07_ses-test_T1w.nii.gz',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
```
### Let's look at the contrasts of one subject that we've just computed.
```
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/sub-07/ses-test/anat/sub-07_ses-test_T1w.nii.gz'
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0001.nii', title='average',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0002.nii', title='finger',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0003.nii', title='foot',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0004.nii', title='lip',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
```
### We can also check three additional contrasts Finger > others, Foot > others and Lips > others.
```
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0005.nii', title='fingers > other',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0006.nii', title='foot > other',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map('/output/datasink_handson/1stLevel/sub-07/spmT_0007.nii', title='lip > other',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
```
### We can plot the normalized results over a template brain
```
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wspmT_0005.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=3,
title='fingers>other');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wspmT_0006.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=3,
title='foot>other');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wspmT_0007.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=3,
title='lip>other');
```
```
import scipy.io as sio
mat = sio.loadmat("./imdb/imdb.mat")
from IPython.core.display import Image
idx = 11114
path ='./imdb_crop/' + mat['imdb'].item()[2][0][idx][0]
print(mat['imdb'].item()[4][0][idx][0])
print(mat['imdb'].item()[2][0][idx][0])
Image(filename=path)
import numpy
embeddings = numpy.load('./embeddings.npy')
image_list = numpy.load('./image_list.npy')
file2name = {}
for idx in range(0, len(mat['imdb'].item()[2][0])):
file2name[mat['imdb'].item()[2][0][idx][0]] = mat['imdb'].item()[4][0][idx][0]
cleaned_imagelist = list(map(lambda x: x.split('/')[-2]+'/' + x.split('/')[-1], image_list))
idx = 803
print(file2name[cleaned_imagelist[idx]])
path ='./imdb_crop/' + cleaned_imagelist[idx]
Image(filename=path)
name2file = {}
for file_name, name in file2name.items():
if name in name2file:
name2file[name].append(file_name)
else:
name2file[name] = [file_name]
filtered_name2file = {k: v for k, v in name2file.items() if len(v) >= 100}
len(list(filtered_name2file.items()))
example = list(filtered_name2file.keys())[1]
print(example)
vecs = [embeddings[cleaned_imagelist.index(x)] for x in name2file[example]]
print(len(vecs))
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
dists = squareform(pdist(vecs,'cosine'))
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs  # samples_generator module was removed in newer scikit-learn
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.metrics.pairwise import cosine_distances as cosine
labels = DBSCAN(eps=0.1, min_samples=20, metric='precomputed').fit_predict(dists)
from collections import Counter
counts = Counter(labels)
counts.pop(-1, None)
print(counts)
biggest_cluster = counts.most_common(1)
print(np.where(labels == biggest_cluster[0][0]))
print(np.where(labels == -1))
path ='./imdb_crop/' + filtered_name2file[example][74]
from IPython.core.display import Image as DImage
DImage(filename=path)
from annoy import AnnoyIndex
import random
f = 40
t = AnnoyIndex(f) # Length of item vector that will be indexed
for i in range(1000):  # range, not Python 2's xrange
    v = [random.gauss(0, 1) for z in range(f)]
t.add_item(i, v)
t.build(10) # 10 trees
t.save('test.ann')
u = AnnoyIndex(f)
u.load('test.ann') # super fast, will just mmap the file
print(u.get_nns_by_item(0, 1000))
from PIL import Image
def is_grey_scale(img_path="lena.jpg"):
im = Image.open(img_path).convert('RGB')
w,h = im.size
print(w,h)
for i in range(w):
for j in range(h):
r,g,b = im.getpixel((i,j))
        if not (r == g == b): return False  # the chained "r != g != b" misses pixels where r == b != g
return True
print(is_grey_scale(path))
```
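The pixel loop in `is_grey_scale` is slow in pure Python; an equivalent vectorized check is sketched below. `is_grey_scale_arr` is an illustrative helper (not part of the original code) that takes an RGB array, e.g. `np.asarray(Image.open(path).convert('RGB'))`:

```python
import numpy as np

def is_grey_scale_arr(rgb):
    # A pixel is grey exactly when its three channels agree, so the
    # whole image is grey iff R == G and G == B everywhere.
    rgb = np.asarray(rgb)
    return bool((rgb[..., 0] == rgb[..., 1]).all() and
                (rgb[..., 1] == rgb[..., 2]).all())

grey = np.full((4, 4, 3), 7, dtype=np.uint8)  # all channels equal
color = grey.copy()
color[0, 0, 1] = 8                            # one channel differs
print(is_grey_scale_arr(grey))   # True
print(is_grey_scale_arr(color))  # False
```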
## UCI SMS Spam Collection Dataset
* **Input**: sms textual content. **Target**: ham or spam
* **data representation**: each sms is represented with a **fixed-length vector of word indexes**. A word-index lookup is generated from the vocabulary list.
* **words embedding**: A word embedding (dense vector) is learnt for each word. That is, each sms is represented as a matrix of shape (document-word-count, word-embedding-size)
* **convolution layer**: Apply filter(s) to the word-embedding matrix, before input to the fully-connected NN
* **train-data.tsv, valid-data.tsv**, and **vocab_list.tsv** are prepared and saved in 'data/sms-spam'
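The fixed-length word-index representation described above can be sketched in a few lines of plain Python. The toy vocabulary, `UNK` token, and document length below are illustrative assumptions, not the actual contents of `vocab_list.tsv`:

```python
# Toy sketch of mapping an SMS to a fixed-length vector of word indexes.
MAX_LEN = 6
PAD, UNK = '#=KS=#', '<unk>'  # PAD matches PAD_WORD used later; UNK is hypothetical

vocab = [PAD, UNK, 'win', 'cash', 'free', 'call', 'see', 'you']
word_to_id = {w: i for i, w in enumerate(vocab)}  # word-index lookup

def encode(sms, max_len=MAX_LEN):
    """Map an SMS to a fixed-length vector of word indexes."""
    ids = [word_to_id.get(w, word_to_id[UNK]) for w in sms.split()]
    ids = ids[:max_len]                              # truncate long messages
    ids += [word_to_id[PAD]] * (max_len - len(ids))  # pad short messages
    return ids

print(encode('win free cash'))   # [2, 4, 3, 0, 0, 0]
print(encode('call me maybe'))   # out-of-vocabulary words map to <unk>
```

The TensorFlow pipeline below does the same thing with a vocabulary lookup table, `tf.pad`, and `tf.slice`.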
```
import tensorflow as tf
from tensorflow import data
from datetime import datetime
import multiprocessing
import shutil
print(tf.__version__)
MODEL_NAME = 'sms-class-model-01'
TRAIN_DATA_FILES_PATTERN = 'data/sms-spam/train-*.tsv'
VALID_DATA_FILES_PATTERN = 'data/sms-spam/valid-*.tsv'
VOCAB_LIST_FILE = 'data/sms-spam/vocab_list.tsv'
N_WORDS_FILE = 'data/sms-spam/n_words.tsv'
RESUME_TRAINING = False
MULTI_THREADING = True
```
## 1. Define Dataset Metadata
```
MAX_DOCUMENT_LENGTH = 100
PAD_WORD = '#=KS=#'
HEADER = ['class', 'sms']
HEADER_DEFAULTS = [['NA'], ['NA']]
TEXT_FEATURE_NAME = 'sms'
TARGET_NAME = 'class'
WEIGHT_COLUNM_NAME = 'weight'
TARGET_LABELS = ['spam', 'ham']
with open(N_WORDS_FILE) as file:
N_WORDS = int(file.read())+2
print(N_WORDS)
```
## 2. Define Data Input Function
### a. TSV parsing logic
```
def parse_tsv_row(tsv_row):
columns = tf.decode_csv(tsv_row, record_defaults=HEADER_DEFAULTS, field_delim='\t')
features = dict(zip(HEADER, columns))
target = features.pop(TARGET_NAME)
    # give more weight to "spam" records, as they make up only ~13% of the training set
features[WEIGHT_COLUNM_NAME] = tf.cond( tf.equal(target,'spam'), lambda: 6.6, lambda: 1.0 )
return features, target
```
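The hard-coded weight of 6.6 above can be read as an approximate ham-to-spam ratio: if spam makes up a fraction p of the training set, weighting each spam record by (1 − p)/p roughly balances the two classes. A quick sketch, taking the ~13% spam fraction from the comment above as an assumption:

```python
# Derive a balancing class weight from the spam fraction (~13% is the
# figure quoted in the comment above; treat it as an assumption here).
spam_fraction = 0.13
spam_weight = (1 - spam_fraction) / spam_fraction  # ham-to-spam ratio

print(round(spam_weight, 2))  # close to the hard-coded 6.6
```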
### b. Data pipeline input function
```
def parse_label_column(label_string_tensor):
table = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS))
return table.lookup(label_string_tensor)
def input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=1,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1
buffer_size = 2 * batch_size + 1
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Thread Count: {}".format(num_threads))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size)
dataset = dataset.map(lambda tsv_row: parse_tsv_row(tsv_row),
num_parallel_calls=num_threads)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(num_epochs)
dataset = dataset.prefetch(buffer_size)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, parse_label_column(target)
```
## 3. Define Model Function
```
def process_text(text_feature):
    # Load the vocabulary lookup table to map word => word_id
vocab_table = tf.contrib.lookup.index_table_from_file(vocabulary_file=VOCAB_LIST_FILE,
num_oov_buckets=1, default_value=-1)
# Get text feature
smss = text_feature
    # Split text into words -> this produces a sparse tensor with variable-length (word-count) entries
words = tf.string_split(smss)
# Convert sparse tensor to dense tensor by padding each entry to match the longest in the batch
dense_words = tf.sparse_tensor_to_dense(words, default_value=PAD_WORD)
# Convert word to word_ids via the vocab lookup table
word_ids = vocab_table.lookup(dense_words)
# Create a word_ids padding
padding = tf.constant([[0,0],[0,MAX_DOCUMENT_LENGTH]])
# Pad all the word_ids entries to the maximum document length
word_ids_padded = tf.pad(word_ids, padding)
word_id_vector = tf.slice(word_ids_padded, [0,0], [-1, MAX_DOCUMENT_LENGTH])
# Return the final word_id_vector
return word_id_vector
def model_fn(features, labels, mode, params):
hidden_units = params.hidden_units
output_layer_size = len(TARGET_LABELS)
embedding_size = params.embedding_size
window_size = params.window_size
stride = int(window_size/2)
filters = params.filters
# word_id_vector
word_id_vector = process_text(features[TEXT_FEATURE_NAME])
# print("word_id_vector: {}".format(word_id_vector)) # (?, MAX_DOCUMENT_LENGTH)
# layer to take each word_id and convert it into vector (embeddings)
word_embeddings = tf.contrib.layers.embed_sequence(word_id_vector, vocab_size=N_WORDS,
embed_dim=embedding_size)
    #print("word_embeddings: {}".format(word_embeddings)) # (?, MAX_DOCUMENT_LENGTH, embedding_size)
# convolution
words_conv = tf.layers.conv1d(word_embeddings, filters=filters, kernel_size=window_size,
strides=stride, padding='SAME', activation=tf.nn.relu)
#print("words_conv: {}".format(words_conv)) # (?, MAX_DOCUMENT_LENGTH/stride, filters)
words_conv_shape = words_conv.get_shape()
dim = words_conv_shape[1] * words_conv_shape[2]
input_layer = tf.reshape(words_conv,[-1, dim])
#print("input_layer: {}".format(input_layer)) # (?, (MAX_DOCUMENT_LENGTH/stride)*filters)
if hidden_units is not None:
# Create a fully-connected layer-stack based on the hidden_units in the params
hidden_layers = tf.contrib.layers.stack(inputs=input_layer,
layer=tf.contrib.layers.fully_connected,
stack_args= hidden_units,
activation_fn=tf.nn.relu)
# print("hidden_layers: {}".format(hidden_layers)) # (?, last-hidden-layer-size)
else:
hidden_layers = input_layer
# Connect the output layer (logits) to the hidden layer (no activation fn)
logits = tf.layers.dense(inputs=hidden_layers,
units=output_layer_size,
activation=None)
# print("logits: {}".format(logits)) # (?, output_layer_size)
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
probabilities = tf.nn.softmax(logits)
predicted_indices = tf.argmax(probabilities, 1)
# Convert predicted_indices back into strings
predictions = {
'class': tf.gather(TARGET_LABELS, predicted_indices),
'probabilities': probabilities
}
export_outputs = {
'prediction': tf.estimator.export.PredictOutput(predictions)
}
# Provide an estimator spec for `ModeKeys.PREDICT` modes.
return tf.estimator.EstimatorSpec(mode,
predictions=predictions,
export_outputs=export_outputs)
# weights
weights = features[WEIGHT_COLUNM_NAME]
# Calculate loss using softmax cross entropy
loss = tf.losses.sparse_softmax_cross_entropy(
logits=logits, labels=labels,
weights=weights
)
tf.summary.scalar('loss', loss)
if mode == tf.estimator.ModeKeys.TRAIN:
# Create Optimiser
optimizer = tf.train.AdamOptimizer(params.learning_rate)
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Provide an estimator spec for `ModeKeys.TRAIN` modes.
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
if mode == tf.estimator.ModeKeys.EVAL:
probabilities = tf.nn.softmax(logits)
predicted_indices = tf.argmax(probabilities, 1)
# Return accuracy and area under ROC curve metrics
labels_one_hot = tf.one_hot(
labels,
depth=len(TARGET_LABELS),
on_value=True,
off_value=False,
dtype=tf.bool
)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(labels, predicted_indices, weights=weights),
'auroc': tf.metrics.auc(labels_one_hot, probabilities, weights=weights)
}
# Provide an estimator spec for `ModeKeys.EVAL` modes.
return tf.estimator.EstimatorSpec(mode,
loss=loss,
eval_metric_ops=eval_metric_ops)
def create_estimator(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn,
params=hparams,
config=run_config)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
```
## 4. Run Experiment
### a. Set HParam and RunConfig
```
TRAIN_SIZE = 4179
NUM_EPOCHS = 10
BATCH_SIZE = 250
EVAL_AFTER_SEC = 60
TOTAL_STEPS = int((TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
embedding_size = 3,
window_size = 3,
filters = 2,
hidden_units=None, #[8],
max_steps = TOTAL_STEPS,
learning_rate = 0.01
)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.estimator.RunConfig(
log_step_count_steps=5000,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
print("That is 1 evaluation step after each",EVAL_AFTER_SEC,"training seconds")
```
### b. Define serving function
```
def serving_input_fn():
receiver_tensor = {
'sms': tf.placeholder(tf.string, [None]),
}
features = {
key: tensor
for key, tensor in receiver_tensor.items()
}
return tf.estimator.export.ServingInputReceiver(
features, receiver_tensor)
```
### c. Define TrainSpec and EvalSpec
```
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: input_fn(
TRAIN_DATA_FILES_PATTERN,
mode = tf.estimator.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: input_fn(
VALID_DATA_FILES_PATTERN,
mode=tf.estimator.ModeKeys.EVAL,
batch_size=hparams.batch_size
),
exporters=[tf.estimator.LatestExporter(
name="predict", # the name of the folder in which the model will be exported to under export
serving_input_receiver_fn=serving_input_fn,
exports_to_keep=1,
as_text=True)],
steps=None,
throttle_secs = EVAL_AFTER_SEC
)
```
### d. Run Experiment via train_and_evaluate
```
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(run_config, hparams)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
```
## 5. Evaluate the Model
```
TRAIN_SIZE = 4179
TEST_SIZE = 1393
train_input_fn = lambda: input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE)
test_input_fn = lambda: input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TEST_SIZE)
estimator = create_estimator(run_config, hparams)
train_results = estimator.evaluate(input_fn=train_input_fn, steps=1)
print()
print("######################################################################################")
print("# Train Measures: {}".format(train_results))
print("######################################################################################")
test_results = estimator.evaluate(input_fn=test_input_fn, steps=1)
print()
print("######################################################################################")
print("# Test Measures: {}".format(test_results))
print("######################################################################################")
```
## 6. Predict Using Serving Function
```
import os
export_dir = model_dir +"/export/predict/"
saved_model_dir = export_dir + "/" + os.listdir(path=export_dir)[-1]
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="prediction"
)
output = predictor_fn(
{
'sms':[
'ok, I will be with you in 5 min. see you then',
'win 1000 cash free of charge promo hot deal sexy',
'hot girls sexy tonight call girls waiting call chat'
]
}
)
print(output)
```
# Assignment 1: Neural Networks
Implement your code and answer all the questions. Once you complete the assignment and answer the questions inline, you can download the report in pdf (File->Download as->PDF) and send it to us, together with the code.
**Don't submit additional cells in the notebook, we will not check them. Don't change parameters of the learning inside the cells.**
Assignment 1 consists of 4 sections:
* **Section 1**: Data Preparation
* **Section 2**: Multinomial Logistic Regression
* **Section 3**: Backpropagation
* **Section 4**: Neural Networks
```
# Import necessary standard python packages
import numpy as np
import matplotlib.pyplot as plt
# Setting configuration for matplotlib
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['axes.labelsize'] = 20
# Import python modules for this assignment
from uva_code.cifar10_utils import get_cifar10_raw_data, preprocess_cifar10_data
from uva_code.solver import Solver
from uva_code.losses import SoftMaxLoss, CrossEntropyLoss, HingeLoss
from uva_code.layers import LinearLayer, ReLULayer, SigmoidLayer, TanhLayer, SoftMaxLayer, ELULayer
from uva_code.models import Network
from uva_code.optimizers import SGD
%load_ext autoreload
%autoreload 2
```
## Section 1: Data Preparation
In this section you will download [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html "CIFAR10") data which you will use in this assignment.
**Make sure that everything has been downloaded correctly and all images are visible.**
```
# Get raw CIFAR10 data. For Unix users the script to download CIFAR10 dataset (get_cifar10.sh) is provided and
# it is used inside get_cifar10_raw_data() function. If it doesn't work then manually download the data from
# http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz and extract it to cifar-10-batches-py folder inside
# cifar10 folder.
# Downloading the data can take several minutes.
X_train_raw, Y_train_raw, X_test_raw, Y_test_raw = get_cifar10_raw_data()
#Checking shapes, should be (50000, 32, 32, 3), (50000, ), (10000, 32, 32, 3), (10000, )
print('Train data shape: ', X_train_raw.shape)
print('Train labels shape: ', Y_train_raw.shape)
print('Test data shape: ', X_test_raw.shape)
print('Test labels shape: ', Y_test_raw.shape)
# Visualize CIFAR10 data
samples_per_class = 10
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
can = np.zeros((320, 320, 3),dtype='uint8')
for i, cls in enumerate(classes):
idxs = np.flatnonzero(Y_train_raw == i)
idxs = np.random.choice(idxs, samples_per_class, replace = False)
for j in range(samples_per_class):
can[32 * i:32 * (i + 1), 32 * j:32 * (j + 1),:] = X_train_raw[idxs[j]]
plt.xticks([], [])
plt.yticks(range(16, 320, 32), classes)
plt.title('CIFAR10', fontsize = 20)
plt.imshow(can)
plt.show()
# Normalize CIFAR10 data by subtracting the mean image. With these data you will work in the rest of assignment.
# The validation subset will be used for tuning the hyperparameters.
X_train, Y_train, X_val, Y_val, X_test, Y_test = preprocess_cifar10_data(X_train_raw, Y_train_raw,
X_test_raw, Y_test_raw, num_val = 1000)
#Checking shapes, should be (49000, 3072), (49000, ), (1000, 3072), (1000, ), (10000, 3072), (10000, )
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', Y_train.shape)
print('Val data shape: ', X_val.shape)
print('Val labels shape: ', Y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', Y_test.shape)
```
### Data Preparation: Question 1 [4 points]
Neural networks and deep learning methods prefer input variables that are as close to the raw data as possible.
In the vast majority of cases, however, the data need to be preprocessed. Suppose you have two types of non-linear activation functions ([Sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function), [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))) and two types of normalization ([Per-example mean subtraction](http://ufldl.stanford.edu/wiki/index.php/Data_Preprocessing#Per-example_mean_subtraction), [Standardization](http://ufldl.stanford.edu/wiki/index.php/Data_Preprocessing#Feature_Standardization)). Which one should you use for each case and why? For example, in the previous cell we used per-example mean subtraction.
**Your Answer**: You should use standardization with the sigmoid activation function. If the data contain large outliers, the sigmoid saturates, so you will end up with vanishing gradients and a lot of dead neurons. Standardization also helps the network converge faster.
For ReLUs you can use either standardization or per-example mean subtraction, because the ReLU is much less prone to vanishing gradients. You can, however, still get dead neurons when a unit's input stays at or below 0, which causes its gradient to be 0 forever.
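The two normalization schemes can be contrasted on toy data; this is a minimal NumPy sketch, not part of the assignment code:

```python
import numpy as np

# Toy data: 2 examples (rows) x 3 features (columns).
X = np.array([[1.0, 2.0, 9.0],
              [4.0, 5.0, 6.0]])

# Per-example mean subtraction: subtract each row's own mean.
X_mean_sub = X - X.mean(axis=1, keepdims=True)

# Feature standardization: zero mean and unit variance per column.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_mean_sub)  # each row now sums to zero
print(X_std)       # each column now has mean 0 and std 1
```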
## Section 2: Multinomial Logistic Regression [5 points]
In this section you will get started by implementing a linear classification model called [Multinomial Logistic Regression](http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/). Later on you will extend this model to a neural network. You will train it by using the [mini-batch Stochastic Gradient Descent algorithm](http://sebastianruder.com/optimizing-gradient-descent/index.html#minibatchgradientdescent). You should implement how to sample batches, how to compute the loss, how to compute the gradient of the loss with respect to the parameters of the model and how to update the parameters of the model.
You should get around 0.35 accuracy on the validation and test sets with the provided parameters.
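As a reference for the quantities computed in the cells below, here is a minimal, numerically stable softmax with cross-entropy in NumPy. This is an illustrative sketch, not the required implementation: subtracting the per-row maximum before exponentiating leaves the probabilities unchanged (softmax is shift-invariant per row) but prevents overflow in `np.exp`.

```python
import numpy as np

def softmax(scores):
    # Subtract the row max so exp() cannot overflow; the result is
    # identical because softmax is shift-invariant per row.
    shifted = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, targets):
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), targets]))

scores = np.array([[2.0, 1.0, 0.1],
                   [1000.0, 0.0, 0.0]])  # naive exp(1000.0) would overflow
probs = softmax(scores)
print(probs.sum(axis=1))                       # rows sum to 1
print(cross_entropy(probs, np.array([0, 0])))  # finite loss
```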
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
learning_rate = 1e-7
weight_decay = 3e+4
weight_scale = 0.0001
########################################################################################
# TODO: #
# Initialize the weights W using a normal distribution with mean = 0 and std = #
# weight_scale. Initialize the biases b with 0. #
########################################################################################
W = np.random.normal(scale = weight_scale, size = (X_train.shape[1], num_classes))
b = np.zeros(shape = num_classes)
########################################################################################
# END OF YOUR CODE #
########################################################################################
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
def softmax_loss(weights, batch, targets):
x = np.exp((np.dot(batch, weights) + b))
results = x / np.sum(x, axis = 1)[:, None]
cross_entropy = 0
for key,value in enumerate(results[:]):
value = value[targets[key]]
cross_entropy +=np.log(value)
return -(1./batch_size)*cross_entropy + (weight_decay/2)*np.sum(weights**2)
def accuracy(weights, batch, target):
x = np.exp((np.dot(batch, weights) + b))
results = x / np.sum(x, axis = 1)[:, None]
predictions = np.argmax(results, axis = 1)
correct = sum([int(i == predictions[key]) for key, i in enumerate(target) ])/ float(len(target))
return correct
def get_gradient(weights, batch, target):
x = np.exp((np.dot(batch, weights) + b))
p_y = x / np.sum(x, axis = 1)[:, None]
oneHot = np.zeros((batch_size, 10))
oneHot[np.arange(batch_size), target] = 1
diff = (oneHot - p_y)
update = np.zeros((weights.shape[0], 10))
for x in range(batch_size):
update += np.outer(batch[x],diff[x])
return ((-update/batch_size) + weight_decay*weights), (-np.sum(diff, axis = 0)/batch_size)
for iteration in range(num_iterations):
########################################################################################
# TODO: #
# Sample a random mini-batch with the size of batch_size from the train set. Put the #
# images to X_train_batch and labels to Y_train_batch variables. #
########################################################################################
sample = np.random.choice(X_train.shape[0], batch_size, replace = False)
X_train_batch = X_train[sample]
Y_train_batch = Y_train[sample]
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Compute the loss and the accuracy of the multinomial logistic regression classifier #
# on X_train_batch, Y_train_batch. #
########################################################################################
train_loss = softmax_loss(W, X_train_batch, Y_train_batch)
train_acc = accuracy(W, X_train_batch, Y_train_batch)
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Compute the gradients of the loss with the respect to the weights and biases. Put #
# them in dW and db variables. #
########################################################################################
# NOTE: implemented this with regularization
dW, db = get_gradient(W, X_train_batch, Y_train_batch)
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Update the weights W and biases b using the Stochastic Gradient Descent update rule. #
########################################################################################
W -= learning_rate* dW
b -= learning_rate* db
########################################################################################
# END OF YOUR CODE #
########################################################################################
if iteration % val_iteration == 0 or iteration == num_iterations - 1:
########################################################################################
# TODO: #
# Compute the loss and the accuracy on the validation set. #
########################################################################################
val_loss = softmax_loss(W, X_val, Y_val)
val_acc = accuracy(W, X_val, Y_val)
########################################################################################
# END OF YOUR CODE #
########################################################################################
train_loss_history.append(train_loss)
train_acc_history.append(train_acc)
val_loss_history.append(val_loss)
val_acc_history.append(val_acc)
# Output loss and accuracy during training
print("Iteration {0:d}/{1:d}. Train Loss = {2:.3f}, Train Accuracy = {3:.3f}".
format(iteration, num_iterations, train_loss, train_acc))
print("Iteration {0:d}/{1:d}. Validation Loss = {2:.3f}, Validation Accuracy = {3:.3f}".
format(iteration, num_iterations, val_loss, val_acc))
########################################################################################
# TODO: #
# Compute the accuracy on the test set. #
########################################################################################
test_acc = accuracy(W, X_test, Y_test)
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Test Accuracy = {0:.3f}".format(test_acc))
# Visualize a learning curve of multinomial logistic regression classifier
plt.subplot(2, 1, 1)
plt.plot(range(0, num_iterations + 1, val_iteration), train_loss_history, '-o', label = 'train')
plt.plot(range(0, num_iterations + 1, val_iteration), val_loss_history, '-o', label = 'validation')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.subplot(2, 1, 2)
plt.plot(range(0, num_iterations + 1, val_iteration), train_acc_history, '-o', label='train')
plt.plot(range(0, num_iterations + 1, val_iteration), val_acc_history, '-o', label='validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
```
### Multinomial Logistic Regression: Question 1 [4 points]
What is the value of the loss and the accuracy you expect to obtain at iteration = 0 and why? Consider weight_decay = 0.
**Your Answer**: With weight_decay = 0 and weights initialized near zero, all class scores are roughly equal, so the softmax assigns a probability of about 1/10 to each of the 10 classes. The expected loss is therefore about $-\log(1/10) = \log(10) \approx 2.303$, and since the largest output is effectively determined at random, the expected accuracy is the average of the proportions of the classes, about 0.1 for balanced classes.
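A quick sanity check of that answer (assuming 10 classes, as in CIFAR-10):

```
import numpy as np

# With near-zero weights, all class scores are ~equal, so the softmax
# assigns probability ~1/C to each of the C classes.
num_classes = 10
scores = np.zeros(num_classes)                # all logits ~0 at initialization
probs = np.exp(scores) / np.exp(scores).sum()
initial_loss = -np.log(probs[0])              # cross-entropy for any true class

print(initial_loss)        # ~2.303 = log(10)
print(1.0 / num_classes)   # expected accuracy ~0.1
```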
### Multinomial Logistic Regression: Question 2 [4 points]
Name at least three factors that determine the size of batches in practice and briefly motivate your answers. The factors might be related to computational or performance aspects.
**Your Answer**: (Source: Deep learning chapter 8)
- Larger batches give a more accurate estimate of the true gradient, but with diminishing marginal returns as the batch grows. An optimal batch size balances the trade-off between increased computation per step and a more accurate gradient.
- A small batch size can have a regularizing effect on training due to the added gradient noise. However, too small a batch size can increase runtime because many more update steps must be calculated.
- Specific hardware architectures run better with specific batch sizes. For example, GPUs tend to perform best with batch sizes that are powers of 2.
### Multinomial Logistic Regression: Question 3 [4 points]
Does the learning rate depend on the batch size? Explain how you should change the learning rate with respect to changes of the batch size.
Name two extreme choices of a batch size and explain their advantages and disadvantages.
**Your Answer**: The learning rate doesn't directly depend on the batch size. However, since smaller batches give noisier gradient estimates, you may want to set a lower learning rate when the batch size decreases.
Two extreme choices are a batch size of 1 (stochastic gradient descent) and a batch size equal to the entire training set. The advantage of a batch size of 1 is its regularizing effect, due to the noise intrinsic to such a small sample, which gives a slightly inaccurate estimate of the gradient; the disadvantage is that convergence takes very long. The advantage of the full-batch size is that it gives the most accurate estimate of the gradient; its disadvantages include a greater tendency to overfit, since the updates fit the training data exactly, and memory issues, because the entire dataset must be loaded into memory at once.
### Multinomial Logistic Regression: Question 4 [4 points]
How can you describe the rows of weight matrix W? What are they representing? Why?
**Your Answer**: Each row of the weight matrix W holds the weights for one class: one weight per input pixel value, used to compute that class's softmax score. There are 10 such rows, one per class. When visualized, each row looks like a template with the colors and rough shapes that typically appear in images of that class.
**Hint**: Before answering the question visualize rows of weight matrix W in the cell below.
```
W= W.T
# sanity check
vals = W.copy()
plt.hist(list(vals.reshape(-1)), bins = 100 )
plt.show()
########################################################################################
# TODO: #
# Visualize the learned weights for each class. #
########################################################################################
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
can = np.zeros((32,320 , 3) , dtype = 'uint8')
for i, cls in enumerate(classes):
image = np.reshape(W[i].copy(), (32,32,3))
# normalize per channel between 0 and 1
# could normalize them per channel, if necessary
#image[:,:,0] -= image[:,:,0].min()
#image[:,:,1] -= image[:,:,1].min()
#image[:,:,2] -= image[:,:,2].min()
image -= image.min()
image *= (255.0/image.max())
can[0:32 , 32 * i :32 * (i + 1) ,:] = np.array(image, dtype = 'uint8')
plt.figure(figsize = (15,4))
plt.xticks(range(16, 320, 32), classes)
plt.yticks([], [])
plt.title('Weights Visualization', fontsize = 20)
plt.imshow(can)
plt.show()
########################################################################################
# END OF YOUR CODE #
########################################################################################
```
## Section 3: Backpropagation
Follow the instructions and solve the tasks in paper_assignment_1.pdf. Write your solutions in a separate pdf file. You don't need to put anything here.
## Section 4: Neural Networks [10 points]
A modular implementation of neural networks allows us to define deeper and more flexible architectures. In this section you will implement the multinomial logistic regression classifier from the Section 2 as a one-layer neural network that consists of two parts: a linear transformation layer (module 1) and a softmax loss layer (module 2).
You will implement the multinomial logistic regression classifier as a modular network by following next steps:
1. Implement the forward and backward passes for the linear layer in **layers.py** file. Write your code inside the ***forward*** and ***backward*** methods of ***LinearLayer*** class. Compute the regularization loss of the weights inside the ***layer_loss*** method of ***LinearLayer*** class.
2. Implement the softmax loss computation in **losses.py** file. Write your code inside the ***SoftMaxLoss*** function.
3. Implement the ***forward***, ***backward*** and ***loss*** methods for the ***Network*** class inside the **models.py** file.
4. Implement the SGD update rule inside ***SGD*** class in **optimizers.py** file.
5. Implement the ***train_on_batch***, ***test_on_batch***, ***fit***, ***predict***, ***score***, ***accuracy*** methods of ***Solver*** class in ***solver.py*** file.
You should get the same results for the next cell as in Section 2. **Don't change the parameters**.
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
learning_rate = 1e-7
weight_decay = 3e+4
weight_scale = 0.0001
########################################################################################
# TODO: #
# Build the multinomial logistic regression classifier using the Network model. You #
# will need to use add_layer and add_loss methods. Train this model using Solver class #
# with SGD optimizer. In configuration of the optimizer you need to specify only #
# learning rate. Use the fit method to train classifier. Don't forget to include #
# X_val and Y_val in arguments to output the validation loss and accuracy during #
# training. Set the verbose to True to compare with the multinomial logistic #
# regression classifier from the Section 2. #
########################################################################################
layer_params = {'input_size': X_train.shape[1], 'output_size':10, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_loss(SoftMaxLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = True, num_iterations = num_iterations)
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Compute the accuracy on the test set. #
########################################################################################
test_acc = solver.score(X_test,Y_test)
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Test Accuracy = {0:.3f}".format(test_acc))
```
### Neural Networks: Task 1 [5 points]
Tuning hyperparameters is very important even for multinomial logistic regression.
What are the best learning rate and weight decay? What is test accuracy of the model trained with the best hyperparameters values?
**Your Answer**: Learning rate = 1.000000e-06, weight decay = 3.000000e+03: Validation Accuracy = 0.409
The test accuracy is 0.385
***Hint:*** You should be able to get the test accuracy around 0.4.
Implement the tuning of hyperparameters (learning rate and weight decay) in the next cell.
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
weight_scale = 0.0001
# You should try different ranges of hyperparameters.
# added some values here! - Riaan
learning_rates = [0.9e-6,1e-6, 1.1e-6 ]
weight_decays = [ 2.8e+03, 3e+03, 3.2e+3 ]
#,3e+04, 3e+05
best_val_acc = -1
best_solver = None
for learning_rate in learning_rates:
for weight_decay in weight_decays:
########################################################################################
# TODO: #
# Implement the tuning of hyperparameters for the multinomial logistic regression. Save#
# maximum of the validation accuracy in best_val_acc and corresponding solver to #
# best_solver variables. Store the maximum of the validation score for the current #
# setting of the hyperparameters in cur_val_acc variable. #
########################################################################################
layer_params = {'input_size': X_train.shape[1], 'output_size':10, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_loss(SoftMaxLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = False, num_iterations = num_iterations)
cur_val_acc = solver.score(X_val, Y_val)
if cur_val_acc > best_val_acc:
best_solver = solver
best_val_acc = cur_val_acc
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Learning rate = {0:e}, weight decay = {1:e}: Validation Accuracy = {2:.3f}".format(
learning_rate, weight_decay, cur_val_acc))
########################################################################################
# TODO: #
# Compute the accuracy on the test set for the best solver. #
########################################################################################
test_acc = best_solver.score(X_test, Y_test)
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Best Test Accuracy = {0:.3f}".format(test_acc))
```
### Neural Networks: Task 2 [5 points]
Implement a two-layer neural network with a ReLU activation function. Write your code for the ***forward*** and ***backward*** methods of ***ReLULayer*** class in **layers.py** file.
Train the network with the following structure: linear_layer-relu-linear_layer-softmax_loss. You should get the accuracy on the test set around 0.44.
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Number of hidden units in a hidden layer.
num_hidden_units = 100
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
learning_rate = 2e-3
weight_decay = 0
weight_scale = 0.0001
########################################################################################
# TODO: #
# Build the model with the structure: linear_layer-relu-linear_layer-softmax_loss. #
# Train this model using Solver class with SGD optimizer. In configuration of the #
# optimizer you need to specify only the learning rate. Use the fit method to train. #
########################################################################################
layer_params = {'input_size': X_train.shape[1], 'output_size':num_hidden_units, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_layer(ReLULayer(layer_params))
layer_params2 = {'input_size': num_hidden_units, 'output_size':10, 'weight_decay': weight_decay }
model.add_layer(LinearLayer(layer_params2))
model.add_loss(SoftMaxLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = True, num_iterations = num_iterations)
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Compute the accuracy on the test set. #
########################################################################################
test_acc = solver.score(X_test, Y_test)
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Test Accuracy = {0:.3f}".format(test_acc))
```
### Neural Networks: Task 3 [5 points]
Why is the ReLU layer important? What will happen if we exclude this layer? What will be the accuracy on the test set?
**Your Answer**: The ReLU provides the non-linearity that lets the network represent more than a linear function. If we exclude it, the two linear layers compose into a single linear transformation, so the network reduces to multinomial logistic regression and the test accuracy drops back to around 0.4 (versus around 0.44 with the ReLU). Compared to saturating activations such as the sigmoid, ReLU also mitigates the vanishing gradient problem, which allows us to make our networks deeper and easier to train.
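As a minimal sketch (using standalone functions rather than the assignment's actual ***ReLULayer*** class interface in **layers.py**), the forward and backward passes of ReLU amount to:

```
import numpy as np

def relu_forward(x):
    out = np.maximum(0, x)
    cache = x                    # keep the input for the backward pass
    return out, cache

def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)          # gradient flows only where the input was positive
    return dx

x = np.array([[-2.0, 0.5], [3.0, -1.0]])
out, cache = relu_forward(x)
dx = relu_backward(np.ones_like(x), cache)
print(out)   # negatives clipped to 0
print(dx)    # 1 where x > 0, else 0
```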
Implement other activation functions: [Sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function), [Tanh](https://en.wikipedia.org/wiki/Hyperbolic_function#Hyperbolic_tangent) and [ELU](https://arxiv.org/pdf/1511.07289v3.pdf) functions.
Write your code for the ***forward*** and ***backward*** methods of ***SigmoidLayer***, ***TanhLayer*** and ***ELULayer*** classes in **layers.py** file.
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Number of hidden units in a hidden layer.
num_hidden_units = 100
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
learning_rate = 2e-3
weight_decay = 0
weight_scale = 0.0001
# Store results here
results = {}
layers_name = ['ReLU', 'Sigmoid', 'Tanh', 'ELU']
layers = [ReLULayer, SigmoidLayer, TanhLayer, ELULayer]
for layer_name, layer in zip(layers_name, layers):
########################################################################################
# Build the model with the structure: linear_layer-activation-linear_layer-softmax_loss#
# Train this model using Solver class with SGD optimizer. In configuration of the #
# optimizer you need to specify only the learning rate. Use the fit method to train. #
# Store validation history in results dictionary variable. #
########################################################################################
layer_params = {'input_size': X_train.shape[1], 'output_size':num_hidden_units, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_layer(layer(layer_params))
layer_params2 = {'input_size': num_hidden_units, 'output_size':10, 'weight_decay': weight_decay }
model.add_layer(LinearLayer(layer_params2))
model.add_loss(SoftMaxLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
_,_,_,val_acc_history= solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = True, num_iterations = num_iterations)
########################################################################################
# END OF YOUR CODE #
########################################################################################
results[layer_name] = val_acc_history
# Visualize a learning curve for different activation functions
for layer_name in layers_name:
plt.plot(range(0, num_iterations + 1, val_iteration), results[layer_name], '-o', label = layer_name)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```
### Neural Networks: Task 4 [10 points]
Although typically a [Softmax](https://en.wikipedia.org/wiki/Softmax_function) layer is coupled with a [Cross Entropy loss](https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression), this is not necessary and you can use a different loss function. Next, implement the network with the Softmax layer paired with a [Hinge loss](https://en.wikipedia.org/wiki/Hinge_loss). Beware, with the Softmax layer all the output dimensions depend on all the input dimensions, hence, you need to compute the Jacobian of derivatives $\frac{\partial o_i}{dx_j}$.
Implement the ***forward*** and ***backward*** methods for
***SoftMaxLayer*** in **layers.py** file and ***CrossEntropyLoss*** and ***HingeLoss*** in **losses.py** file.
Results of using SoftMaxLoss and SoftMaxLayer + CrossEntropyLoss should be the same.
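The softmax Jacobian mentioned above has the standard closed form $\frac{\partial o_i}{\partial x_j} = o_i(\delta_{ij} - o_j)$, which can be verified numerically — a sketch, independent of the **layers.py** interface:

```
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())      # shift for numerical stability
    return e / e.sum()

x = np.array([1.0, 2.0, 0.5])
o = softmax(x)

# Analytic Jacobian: diag(o) - o o^T
analytic = np.diag(o) - np.outer(o, o)

# Central finite-difference approximation of the same Jacobian
eps = 1e-6
numeric = np.zeros((3, 3))
for j in range(3):
    xp, xm = x.copy(), x.copy()
    xp[j] += eps
    xm[j] -= eps
    numeric[:, j] = (softmax(xp) - softmax(xm)) / (2 * eps)

print(np.abs(analytic - numeric).max())   # tiny discrepancy
```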
```
# DONT CHANGE THE SEED AND THE DEFAULT PARAMETERS. OTHERWISE WE WILL NOT BE ABLE TO CORRECT YOUR ASSIGNMENT!
# Seed
np.random.seed(42)
# Default parameters.
num_iterations = 1500
val_iteration = 100
batch_size = 200
learning_rate = 2e-3
weight_decay = 0
weight_scale = 0.0001
########################################################################################
# TODO: #
# Build the model with the structure: #
# linear_layer-relu-linear_layer-softmax_layer-hinge_loss. #
# Train this model using Solver class with SGD optimizer. In configuration of the #
# optimizer you need to specify only the learning rate. Use the fit method to train. #
########################################################################################
print('######## Sanity Check for the Softmax layer, otherwise this isn\'t tested #######')
layer_params = {'input_size': X_train.shape[1], 'output_size':num_hidden_units, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_layer(ReLULayer(layer_params))
layer_params2 = {'input_size': num_hidden_units, 'output_size':10, 'weight_decay': weight_decay }
model.add_layer(LinearLayer(layer_params2))
model.add_layer(SoftMaxLayer())
model.add_loss(CrossEntropyLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = True, num_iterations = num_iterations)
print('As can be seen this has the exact same validation and training scores as the SoftMaxLoss layer.\n')
print('######## Now on to the Hinge Loss #######')
layer_params = {'input_size': X_train.shape[1], 'output_size':num_hidden_units, 'weight_decay': weight_decay }
model = Network()
model.add_layer(LinearLayer(layer_params))
model.add_layer(ReLULayer(layer_params))
layer_params2 = {'input_size': num_hidden_units, 'output_size':10, 'weight_decay': weight_decay }
model.add_layer(LinearLayer(layer_params2))
model.add_layer(SoftMaxLayer())
model.add_loss(HingeLoss)
optimizer = SGD()
optimizer_config = {'learning_rate': learning_rate}
solver = Solver(model)
solver.fit(X_train, Y_train, optimizer,
x_val = X_val, y_val = Y_val,
optimizer_config = optimizer_config,
verbose = True, num_iterations = num_iterations)
########################################################################################
# END OF YOUR CODE #
########################################################################################
########################################################################################
# TODO: #
# Compute the accuracy on the test set. #
########################################################################################
test_acc = solver.score(X_test, Y_test)
########################################################################################
# END OF YOUR CODE #
########################################################################################
print("Test Accuracy = {0:.3f}".format(test_acc))
```
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import numpy as np
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import random
import patsy
sns.set(style="whitegrid")
```
# Logistic Regression
In the last section, we looked at how we can use a linear model to fit a numerical target variable like child IQ (or price or height or weight). Regardless of the method you use, this is called a *regression problem*. Linear regression is one way of solving the regression problem.
When your target variable is a categorical variable, this is a *classification problem*. For now we'll work only with the case where there are two outcomes or labels.
With our numerical $y$, we started with a model:
$\hat{y} = N(\beta_0, \sigma)$
with the innovation that we could replace the mean in the normal distribution with a linear function of features, $f(X)$.
We can do the same thing for classification. If we have a binary categorical variable $y$, it has a Bernoulli distribution with probability $p$ as the parameter. We can estimate $y$ as:
$\hat{y} = p$
or the fraction of "successes" in the data. But what if we did the same thing as before? What if $p$ was a function of additional features? We would have:
$\hat{y} = \beta_0 + \beta_1 x$
and we would have a model that represented how the probability of $y$ changes as $x$ changes. Although this sounds good, there is a problem. $\beta_0 + \beta_1 x$ is not bounded to the range (0, 1) which we require for probabilities. But it does turn out that there is a solution: we can use a transformation to keep the value in the range (0, 1), the logistic function.
## The Logistic Function
The logistic function is:
$logistic(z) = logit^{-1}(z) = \frac{e^z}{1 + e^z} = \frac{1}{1 + e^{-z}}$
And it looks like the following:
```
def logistic( z):
return 1.0 / (1.0 + np.exp( -z))
figure = plt.figure(figsize=(5,4))
axes = figure.add_subplot(1, 1, 1)
xs = np.linspace( -10, 10, 100)
ys = logistic( xs)
axes.plot( xs, ys)
axes.set_ylim((-0.1, 1.1))
axes.set_title("Logistic Function")
plt.show()
plt.close()
```
No matter what the value of $x$, the value of $y$ is always between 0 and 1 which is exactly what we need for a probability.
There are a few additional things to note at this point. First, there is not a single definition of logistic regression. Gelman defines logistic regression as:
$P(y=1) = logit^{-1}(\beta_0 + \beta_1 x)$
in terms of the inverse logit function. Such a function returns the probability that $y = 1$. There are other possibilities (for example, a general maximum entropy model).
Second, interpreting the coefficients becomes a bit of a problem. Let's assume that we have no features and only have:
$P(y=1) = logit^{-1}(\beta_0)$
what, exactly, is $\beta_0$? It's not a probability because the probability interpretation only takes place once we have transformed the result using the inverse logit function. We take note of the following truism:
$logit^{-1}( logit( p)) = p$
This is simply what it means to be an inverse function of some other function. But the interesting thing is that this means that:
$\beta_0 = logit( p)$
and we do know what $logit(p)$ is, it's the *log odds*. $logit$ is defined as:
$logit(p) = log(\frac{p}{1-p})$
if $p$ is the probability of an event, then $\frac{p}{1-p}$ is the *odds* of the event (the ratio of the probability for the event to the probability against it), and $logit(p)$ is its logarithm, the log odds.
The third difficulty is that the logistic regression is non-linear. For linear regression, the slopes of the curve (a line) are constant (the $\beta$s) and while logistic regression has a linear predictor, the result is non-linear in the probability space. For example, a 0.4 point increase in log odds from 0.0 to 0.4 increases probability from 50% to 60% but a 0.4 point increase in log odds from 2.2 to 2.6 only increases probability from 90% to 93%.
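The numbers in that example follow directly from the inverse logit:

```
import numpy as np

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# The same 0.4-point increase in log odds moves the probability by
# different amounts depending on where you start.
print(inv_logit(0.0), inv_logit(0.4))   # ~0.500 -> ~0.599
print(inv_logit(2.2), inv_logit(2.6))   # ~0.900 -> ~0.931
```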
Finally, we lose a lot of our ability to visualize what's going on when we moved from linear regression to logistic regression.
## What Loss does Logistic Regression minimize?
Several times now, we've turned the question of variance on its head and asked, "what estimate minimizes my error?". For a single prediction, $\beta_0$, of a numerical variable, we know that we want some value that minimizes Mean Squared Error (MSE):
$MSE = \frac{1}{n}\sum(y - \beta_0)^2$
and that this means our prediction of $\beta_0$ should be the mean, $\bar{y}$. This is also true for *linear* regression where we minimize MSE:
$MSE = \frac{1}{n}\sum(y - \hat{y})^2$
Also note that we can use MSE to *evaluate* linear regression (it, or some variant, is really the only way we have to evaluate linear regression's predictions).
We do not use MSE for logistic regression, however, mostly because we want something with a better first derivative. Instead of MSE, we often use *cross entropy* (also called *log loss*, we'll stick with cross entropy):
$L(\beta) = -\frac{1}{n} \sum \left[ y \log(\hat{y}) + (1-y) \log(1-\hat{y}) \right]$
This has several implications:
1. Just because "regression" is in the name, we do not use Mean Squared Error to derive or evaluate logistic regression.
2. Although we do use cross entropy to derive logistic regression, we do *not* use it to evaluate logistic regression. We tend to use error rate and other metrics to evaluate it (which we will discuss in a few chapters). For now, we will just use error rate.
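Computing the cross entropy is straightforward; here is a small example with illustrative predictions (not from this chapter's data):

```
import numpy as np

y     = np.array([1, 0, 1, 1])          # true labels
y_hat = np.array([0.9, 0.2, 0.6, 0.4])  # predicted P(y=1)

# Cross entropy (log loss): confident correct predictions contribute
# little; confident wrong predictions are penalized heavily.
loss = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
print(round(loss, 3))   # 0.439
```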
## Logistic Regression with Continuous Feature (Synthetic Data)
As before, we're going to start with synthetic data to get our proverbial feet wet. Even here, generating synthetic data isn't as easy as it is for linear regression. We basically need to estimate the $p$ for each value of $x$ and then simulate it. The algorithm is something like this:
```
1. generate x using the standard normal distribution or binomial if categorical.
2. for each data point:
3. z = beta_0 + beta_1 * x
4. pr = 1/(1 + exp(-z))
5. y = 1 if rand() < pr else 0
```
Of note, the logistic function does not output $\hat{y}$ as it does with linear regression. It outputs the estimated probability of $y=1$. We can take that probability and compare it to a threshold and assign $y = 0$ or $y = 1$. The $y$ above is the *real* y for the synthetic data.
```
np.random.seed(83474722)
data = {}
data["x"] = stats.norm.rvs(0, 1, 100)
data["z"] = 0.5 + data["x"] * 0.5
data["pr"] = list(map(lambda z: logistic(z), data["z"]))
data["y"] = list(map(lambda pr: 1 if np.random.uniform() < pr else 0, data["pr"]))
data = pd.DataFrame(data)
```
It's worth taking a bit more in-depth look at this data even though it's synthetic (or *because* it's synthetic). We generated $x$ from the standard Normal distribution: $x \sim N(0, 1)$. $z$, an intermediate step, is the actual linear model: $z = \beta_0 + \beta_1 x$ or $z = 0.5 + 0.5 x$.
As the earlier discussion mentions, we pass $z$ through the logistic function to bound it to the interval (0, 1). The result, $pr$, represents a probability. This is a conditional probability: $P(y=1|x)$. In order to find out the "true" $y$ for each $x$, we simulate that probability.
As we can see in the table below, we have $pr=0.663$ and $y=1$ (obs 1) as well as $pr=0.857$ and $y=0$. This is logistic regression's form of "error", "noise", or the "known unknowns and unknown unknowns".
```
data.head()
```
We could use a constant model as we have before:
```
np.mean(data.y)
```
No matter what $x$ is, we say there's a 57% probability that the value of $y$ is 1. Since 57% is over 50%, we could just guess that for any $x$, $\hat{y} = 1$. We would be right 57% of the time and wrong 43% of the time on average. 43% is the model's *error rate*.
Can we do better?
```
import sys
sys.path.append('../resources')
import fundamentals.models as models
result = models.logistic_regression("y ~ x", data = data)
models.simple_describe_lgr(result)
```
We do a *little* better. The error rate here is 42% instead of (1-0.57) or 43% but that's not very encouraging. Logistic Regression doesn't actually *have* an $R^2$ metric. What we have shown here is Efron's Pseudo $R^2$. It basically measures the same thing as interpretation #1 of the "real" $R^2$: it's the percent of the variability in $y$ explained by the model. Not very much.
Additionally, our estimates of the coefficients, $\beta_0$ and $\beta_1$, are pretty bad compared to the ground truth in the synthetic data. In the linear regression case we were able to recover them fairly easily. Why is the synthetic data so bad?
Note that our base probability is not really much different than a coin toss (57% versus 50%). Assume a given $x$ leads to a probability of 65%. We need a lot more examples of $x$ to calculate that 65%...if we only have a few, we may never actually observe the case where $y=1$.
What happens with the current data generator if we just generate more data, n=10,000 instead of n=100?
```
data = {}
data["x"] = stats.norm.rvs(0, 1, 10000)
data["z"] = 0.5 + data["x"] * 0.5
data["pr"] = list(map(lambda z: logistic(z), data["z"]))
data["y"] = list(map(lambda pr: 1 if np.random.uniform() < pr else 0, data["pr"]))
data = pd.DataFrame(data)
```
We can re-run our logistic regression on this data:
```
result1 = models.logistic_regression("y ~ x", data = data)
models.simple_describe_lgr(result1)
```
Our coefficient estimates are almost exactly the same as the ground truth. Still, our error rate is 36.3% instead of 43.0%. This is probably as good as we can get. What happens if we bump up the base probability a bit?
```
data = {}
data["x"] = stats.norm.rvs(0, 1, 10000)
data["z"] = 0.75 + data["x"] * 10
data["pr"] = list(map(lambda z: logistic(z), data["z"]))
data["y"] = list(map(lambda pr: 1 if np.random.uniform() < pr else 0, data["pr"]))
data = pd.DataFrame(data)
```
Let's look at this data. The probabilities of each observation are now either very near 0 or very near 1:
```
data.head()
```
The constant model shows a probability of 52.2% for $y=1$. This means it has an error rate of 47.8%!
```
np.mean(data.y)
```
What about our logistic regression model?
```
result2 = models.logistic_regression("y ~ x", data = data)
models.simple_describe_lgr(result2)
```
The coefficients are almost exact *and* the error rate is only 5.1%. The (pseudo) $R^2$ shows that our model explains 85% of the variation in $y$.
This set of experiments shows us a number of things. First, generating synthetic data is very useful for learning how your algorithms work. In fact, let's do one more experiment. Let's reduce the number of observations back to 100:
```
data = {}
data["x"] = stats.norm.rvs(0, 1, 100)
data["z"] = 0.75 + data["x"] * 10
data["pr"] = list(map(lambda z: logistic(z), data["z"]))
data["y"] = list(map(lambda pr: 1 if np.random.uniform() < pr else 0, data["pr"]))
data = pd.DataFrame(data)
result3 = models.logistic_regression("y ~ x", data = data)
models.simple_describe_lgr(result3)
```
Here our error rate is still quite a bit lower, but the estimates of our coefficients aren't as good. We need both a lot of data and a clear underlying pattern, *and* with logistic regression that pattern isn't as obvious to the eye as it is with linear regression.
## Logistic Regression with Real Data
When it comes to either numerical features or binary categorical features, there is no difference between linear regression and logistic regression. We can have numerical features which will affect the slope and intercept of the line. We can have binary categorical features that will affect the intercept of the line. We can have interaction terms that will affect the slope of the line.
The main difference with linear regression is in the interpretation of the coefficients.
For logistic regression, the coefficients are log-odds and while some people are quite comfortable thinking in terms of log-odds, most are not. How do we convert them into something we can understand?
Let's begin the discussion by looking at real data. This data is from a study of villager behavior in Bangladesh. Wells were examined for natural arsenic contamination and villagers using wells with higher arsenic readings were encouraged to use other wells or dig new ones. The variables are:
* **switch** - yes (1) or no (0), did the respondent switch to a new well.
* **dist** - distance to the nearest safe well in meters.
* **arsenic** - arsenic level of the respondent’s well.
* **assoc** - does the respondent or a family member belong to a community association.
* **educ** - the educational attainment of the respondent in years.
Let's start out with a logistic regression model for $\hat{switch}$:
$P(\hat{switch}=1) = logit^{-1}(\beta_0 + \beta_1 dist)$
although we really have something like:
$z = \beta_0 + \beta_1 dist$
$\hat{pr} = \frac{1}{1+e^{-z}}$
$\hat{y}$ = 1 if $\hat{pr}$ > 0.5 else 0
which is a bit more complicated to write each time.
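Those three steps can be sketched as a small helper (using NumPy; the notebook's own `logistic` and `models` functions wrap essentially this logic, and the coefficients here are purely illustrative):

```python
import numpy as np

def logistic(z):
    # inverse logit: maps log odds z to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict_switch(beta0, beta1, dist, threshold=0.5):
    # z -> probability -> hard 0/1 prediction
    pr = logistic(beta0 + beta1 * dist)
    return (pr > threshold).astype(int)
```

With a negative coefficient on `dist`, larger distances push the probability (and eventually the hard prediction) toward 0.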
```
wells = pd.read_csv( "../resources/data/arsenic.wells.tsv", sep=" ")
```
Let's check the representations:
```
wells.info()
```
There is nothing particularly startling here. Let's see a few values:
```
wells.head()
```
The base model (and error rate) are:
```
mean = np.mean(wells.switch)
print("P(switch=1) = {0:.2f} ({1:.2f})".format(mean, 1-mean))
```
The base model (often called the "null" model) is that $P(switch=1) = 0.58$ which leads to an error rate of 42%. Let's see what logistic regression can get us:
```
result = models.logistic_regression( "switch ~ dist", data = wells)
models.simple_describe_lgr(result)
```
So again, this is the real world and real data, and sometimes you only get improvements such as these. Despite its modest showing on this data, logistic regression is a very powerful modeling technique.
We can see that the error rate and (pseudo) $R^2$ of this model aren't great but we're much more interested in interpreting the model coefficients. What do they mean?
### Intercept
The intercept in this case has a legitimate $dist = 0$ interpretation. If the alternative well is 0 meters away, what is the probability of switching?
We can use our previous identity and use the inverse logit (logistic) function:
```
logistic( 0.6038)
```
so the probability of switching is 64.7% if the safe well is zero meters away (that doesn't really bode well). If we were to run the logistic regression without any regressors, we could think of $\beta_0$ as the *prior* log odds. In essence, logistic regression is a function that calculates conditional probabilities based on the features instead of using a table.
Once you add features, $\beta_0$ is no longer a pure prior because it has been optimized in the presence of the other features and therefore is still a conditional probability...just with all the features at 0.
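As a quick check that the intercept really is a log odds, note that the logit (log odds) and logistic functions are inverses, so we can round-trip it (a small sketch, separate from the notebook's `models` module):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    # log odds of a probability p
    return np.log(p / (1.0 - p))

p = logistic(0.6038)   # roughly 0.647, the probability at dist = 0
z = logit(p)           # recovers the intercept, 0.6038
```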
### Coefficients
Next, we can look at each coefficient (or in this case, the only coefficient). The model basically says there is a 0.0062 decrease in *log odds* (remember that the coefficients are not in probability space until transformed) for every meter to the nearest safe well. Now we have a problem. While $\beta_i$ as *log odds* is linear in $x_i$ (that's *all* the term "linear model" means), arbitrary transformations of $\beta_i$, $t(\beta_i)$ is not necessarily linear in $x_i$ and that is the case here. How do we get around this problem?
There are several options:
**No. 1 - Evaluate at the mean of the variable with a unit change.**
The mean value of dist(ance) is 48.33. If we evaluate our model using that value, we get:
$P(switch = 1)$ = $logit^{-1}(0.6038 - 0.0062 \times 48.33)$ = $logit^{-1}(0.304154)$ = 0.5755
And if we do the same thing again after adding 1 meter to the average distance, we get:
$P(switch = 1)$ = $logit^{-1}(0.6038 - 0.0062 \times 49.33)$ = $logit^{-1}(0.297954)$ = 0.5739
So...that's a decrease of about 0.0016 in probability (0.16 percentage points, or a relative change of about 0.27%) which isn't huge, but then we only increased the distance by a little over 3 feet!
But, you need to be careful with mean scaled data (which we'll talk about in the next chapter). A unit change is equal to one entire standard deviation which may be an extremely large value...or an extremely small one.
```
a = logistic( 0.6038 - 0.0062 * 48.33)
b = logistic( 0.6038 - 0.0062 * 49.33)
print(a, b)
print(a - b)
```
If you have more than one $x_i$, you should set all of them to their mean values and then do a unit change for each variable separately.
Note that this is a good reason to mean *center* data in a logistic regression but not to mean *scale* it. The reasons are that:
1. The coefficients then have clear interpretations and relative magnitudes (changes in probability after transformation).
2. Mean *scaling* makes 1 unit equal to one standard deviation. This might be a very, very large (or very small) value in the variable's actual domain, which messes up the approximation.
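A minimal sketch of centering without scaling (hypothetical values):

```python
import pandas as pd

df = pd.DataFrame({"dist": [10.0, 50.0, 90.0]})
# center only: subtract the mean but keep the units (meters) intact
df["dist_centered"] = df["dist"] - df["dist"].mean()
print(df["dist_centered"].tolist())  # [-40.0, 0.0, 40.0]
```

After centering, the intercept is interpreted at the *average* distance rather than at zero meters, but a one-unit change is still one meter.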
**No. 2 - Calculate the derivative of the logistic and evaluate it at the mean of the variable.**
$\frac{\partial}{\partial x_i}logit^{-1}(\beta X) = \frac{\beta_i e^{\beta X}}{(1 + e^{\beta X})^2}$
but $\beta X$ (z) is just the log odds at the mean values of X (if X is indeed $[1.0, \bar{x}_1, \bar{x}_2, ... ,\bar{x}_n]$) so if we plug our value for the model evaluated at the mean into the derivative, we get:
$\frac{0.0062\, e^{0.3042}}{(1 + e^{0.3042})^2} \approx 0.0015$
```
def logistic_slope_at( beta, z):
    # z is the log odds (beta X) evaluated at the mean values of X
    return (beta * np.exp( z)) / (1.0 + np.exp( z))**2
print(logistic_slope_at( 0.0062, 0.3042))
```
**No. 3 - Divide by 4 Rule**
$\beta_1 / 4 = 0.0062 / 4 = 0.00155$
It's just a rule of thumb but easy. It has the same general interpretation though...the change in probability from a unit change at the mean of the regressor. Again, this approach can get very funky with mean scaled data because a unit change is a full standard deviation which can actually be enormous or infinitesimal. Why does that work?
The slope of the logistic curve is maximized where $z = \beta_0 + \beta_1 x = 0$, that is, where the predicted probability is 0.5. Plugging $z = 0$ into the derivative gives:
$\frac{\beta_1 e^0}{(1+e^0)^2}$
$\frac{\beta_1 \times 1}{(1+1)^2}$
$\frac{\beta_1}{4}$
This interpretation works best near probabilities of 0.5, such as when evaluating at the mean values of the corresponding feature, $x$.
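We can verify numerically that the derivative at $z = 0$ is exactly $\beta/4$ (a small sketch):

```python
import numpy as np

def logistic_slope(beta, z):
    # derivative of logistic(z) with respect to x, where z = beta0 + beta * x
    return beta * np.exp(z) / (1.0 + np.exp(z)) ** 2

beta = 0.0062
print(logistic_slope(beta, 0.0))     # exactly beta / 4 = 0.00155
print(logistic_slope(beta, 0.3042))  # slightly smaller, about 0.0015
```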
**No. 4 - Average Predictive Difference**
We can also average the probabilities over all of our data points for a specific change in each of our predictors. For a model with only one predictor, this amounts to the same thing as No. 1 so we will save this for later.
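Even though we defer the full discussion, the idea is easy to sketch: for each observed row, compute the predicted probability as-is and again with the predictor shifted by one unit, then average the differences (a generic NumPy sketch; names are illustrative):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_predictive_difference(beta0, beta1, xs, delta=1.0):
    # average change in predicted probability for a `delta` change in x,
    # averaged over the observed values of x
    p_base = logistic(beta0 + beta1 * xs)
    p_plus = logistic(beta0 + beta1 * (xs + delta))
    return np.mean(p_plus - p_base)
```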
## Useful Transformations
We'll talk more about transformations in the next chapter. The main point of this chapter is to establish what linear and logistic regression are and how to interpret them. However, there is an especially useful transformation for logistic regression and that involves transforming the units.
This doesn't affect the quality of the model at all. It does, however, change how you interpret it. For example, the current units of $dist$ are meters. The probability of change per *meter* is pretty small. But what about the probability of changing per *ten meters*? That's nearly 33 feet.
```
wells["dist10"] = wells["dist"]/10
result = models.logistic_regression( "switch ~ dist10", data = wells)
models.simple_describe_lgr(result)
```
We can now reinterpret the model. The probability of switching decreases by about 1.5 percentage points ($-0.0619/4 = -0.0155$, by the "Divide by 4" rule) at the average when the distance to the safe well increases by 10 meters.
It's interesting to note that Gelman used 100 to scale his data. The problem with this, in an interpretability sense, is that if you look at distance, the median distance to a safe well is 36.7 meters. The 3rd quartile is 64 meters. A 100 meter difference just doesn't figure prominently into the data even though the maximum distance was 339.5 meters. 10 meters seems like a reasonable unit in this case.
Again, this doesn't change how good the model is. But it does make it easier to talk about than "15 100ths of a percent per meter".
## Plotting Logistic Regression
It's not quite as easy to plot a logistic regression as it is linear regression. If we just plot the data, we have:
```
figure = plt.figure(figsize=(10,6))
axes = figure.add_subplot(1, 1, 1)
xs = wells[ "dist10"]
ys = wells[ "switch"]
axes.scatter( xs, ys, color="dimgray", alpha=0.5)
betas = result[ "coefficients"]
zs = np.linspace( xs.min(), xs.max(), 100)
ps = [logistic( betas[ 0] + betas[ 1] * x) for x in zs]
axes.plot(zs, ps, '-', color="firebrick", alpha=0.75)
axes.set_title( result[ "formula"])
axes.set_xlabel("dist (10s of meters)")
axes.set_ylabel("switch")
plt.show()
plt.close()
```
It's just not very interesting or informative on its own. Additionally, you have the problem that logistic regression is nonlinear. We'll get into plotting multivariate regression (linear and logistic) in the next chapter. The solution for logistic regression is usually to plot the *decision boundary* in feature space and not to plot the target at all.
## Bootstrap Inference
As with linear regression, we can also apply bootstrap inference to logistic regression, with the same kinds of results. Here we only show the function in operation. We also include "divide by 4" interpretations of our coefficients:
```
result = models.bootstrap_logistic_regression("switch ~ dist10", wells)
models.describe_bootstrap_lgr(result, 3)
```
The development so far has been pedagogical. You should always do bootstrap inference for both linear and logistic regression and going forward, we will.
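The notebook's `bootstrap_logistic_regression` helper is defined elsewhere, but the underlying idea is simple to sketch: resample the rows with replacement, refit, and summarize the distribution of coefficient estimates. This sketch uses scikit-learn's `LogisticRegression` as a stand-in for the notebook's own fitting code:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def bootstrap_coefficient(df, x_col, y_col, n_boot=200, seed=0):
    # resample rows with replacement, refit, and collect the slope estimates
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_boot):
        sample = df.sample(len(df), replace=True,
                           random_state=int(rng.integers(1 << 30)))
        model = LogisticRegression().fit(sample[[x_col]], sample[y_col])
        coefs.append(model.coef_[0, 0])
    # 95% percentile bootstrap interval for the coefficient
    return np.percentile(coefs, [2.5, 97.5])
```

An interval that excludes zero is the bootstrap analogue of a "significant" coefficient.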
## More than Two Outcomes
Binary classification shows up quite a bit: does it fail or not, does he have a heart attack or not, does she purchase it or not, does he click on it or not. But in many instances, the event is not actually binary. Death is inevitable. It's not "does he have a heart attack or not" but "does he die from a heart attack, cancer, flu, traffic accident, ...". It's not "does she purchase it or not" but "does she buy the shoes, the shop-vac, the weedwacker, ...". It's not "does he click on it or not" but "does he click on this, does he click on that, does he go back, ...".
This gives us some clue as to how to deal with multiclass classification problems.
First, there are classification algorithms that *can* deal with multiclass problems "directly". Decision trees are a good example.
Second, every algorithm that can handle only binary classification can also be made to handle multiclass classification. If the response variable has $n$ possible outcomes then you can train $n$ binary models where each is trained on "the class" and "not the class". For example, if there were three classes: buy, sell, hold. Then you first convert your data (temporarily) into "buy/not buy" and train a model, then convert to "sell/not sell" and train a model, then convert to "hold/not hold" and train a model:
$z_{buy} = \beta^{buy}_0 + \beta^{buy}_1 x_1$
from data where the classes are now "buy/don't buy".
$z_{sell} = \beta^{sell}_0 + \beta^{sell}_1 x_1$
from data where the classes are now "sell/don't sell"
$z_{hold} = \beta^{hold}_0 + \beta^{hold}_1 x_1$
from data where the classes are now "hold/don't hold"
This works as long as each model has the same $x_1$ (and this generalizes to more than one feature: $x_1$, $x_2$,...$x_n$). Actually, multinomial logistic regression does this under the covers for you...but things like Support Vector Machines do not and you do have to do it manually.
You now have a metamodel for multiclass classification. When you need to make a prediction, you use all three models and pick the class that has the highest probability. Strangely, this is essentially what neural networks must do as well.
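A tiny sketch of that metamodel, with made-up coefficients for the three binary models:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical fitted (beta0, beta1) pairs for each one-vs-rest model
coefficients = {"buy": (0.2, 1.5), "sell": (-0.4, -2.0), "hold": (0.1, 0.3)}

def predict_class(x):
    # score x under every binary model and pick the most probable class
    scores = {label: logistic(b0 + b1 * x)
              for label, (b0, b1) in coefficients.items()}
    return max(scores, key=scores.get)

print(predict_class(1.0))   # "buy" wins for large positive x
print(predict_class(-2.0))  # "sell" wins for large negative x
```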
Note that there's a different although related problem of *multilabel* classification. In this case, each observation might be assigned more than one outcome. For example, a story might be fiction *and* sports while another might be non-fiction and sports. A discussion of this problem is beyond the scope of these notes.
# More Pandas
```
# Load the necessary libraries
import pandas as pd
%matplotlib inline
```
## Vectorized String Operations
* There is a Pandas way of doing this that is much more terse and compact
* Pandas has a set of String operations that do much painful work for you
* Especially handling bad data!
```
data = ['peter', 'Paul', 'MARY', 'gUIDO']
for s in data:
print(s.capitalize())
```
* But like above, this breaks very easily with missing values
```
data = ['peter', 'Paul', None, 'MARY', 'gUIDO']
for s in data:
print(s.capitalize())
```
* The Pandas library has *vectorized string operations* that handle missing data
```
names = pd.Series(data)
names
names.str.capitalize()
```
* Look ma! No errors!
* Pandas includes a bunch of methods for doing things to strings.
| | | | |
|-------------|------------------|------------------|------------------|
|``len()`` | ``lower()`` | ``translate()`` | ``islower()`` |
|``ljust()`` | ``upper()`` | ``startswith()`` | ``isupper()`` |
|``rjust()`` | ``find()`` | ``endswith()`` | ``isnumeric()`` |
|``center()`` | ``rfind()`` | ``isalnum()`` | ``isdecimal()`` |
|``zfill()`` | ``index()`` | ``isalpha()`` | ``split()`` |
|``strip()`` | ``rindex()`` | ``isdigit()`` | ``rsplit()`` |
|``rstrip()`` | ``capitalize()`` | ``isspace()`` | ``partition()`` |
|``lstrip()`` | ``swapcase()`` | ``istitle()`` | ``rpartition()`` |
#### Exercise
* In the cells below, try three of the string operations listed above on the Pandas Series `monte`
* Remember, you can hit tab to autocomplete and shift-tab to see documentation
```
monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',
'Eric Idle', 'Terry Jones', 'Michael Palin'])
monte
# First
# Second
# Third
```
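One possible answer, applying three of the listed operations to `monte`:

```python
import pandas as pd

monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',
                   'Eric Idle', 'Terry Jones', 'Michael Palin'])
print(monte.str.lower().tolist())         # all lowercase names
print(monte.str.len().tolist())           # length of each name
print(monte.str.startswith('T').tolist()) # True for the two Terrys
```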
## Example: Recipe Database
* Let's walk through the recipe database example from the Python Data Science Handbook
* There are a few concepts and commands I haven't yet covered, but I'll explain them as I go along
* Download the recipe file from [this link](https://s3.amazonaws.com/openrecipes/20170107-061401-recipeitems.json.gz) or run the cell below if you are on JupyterHub
```
recipes = pd.read_json("https://s3.amazonaws.com/openrecipes/20170107-061401-recipeitems.json.gz",
compression='gzip',
lines=True)
```
We have downloaded the data and loaded it into a dataframe directly from the web.
```
recipes.head()
recipes.shape
```
We see there are nearly 200,000 recipes, and 17 columns.
Let's take a look at one row to see what we have:
```
# display the first item in the DataFrame
recipes.iloc[0]
# Show the first five items in the DataFrame
recipes.head()
```
There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web.
In particular, the ingredient list is in string format; we're going to have to carefully extract the information we're interested in.
Let's start by taking a closer look at the ingredients:
```
# Summarize the length of the ingredients string
recipes['ingredients'].str.len().describe()
# which row has the longest ingredients string
recipes['ingredients'].str.len().idxmax()
# use iloc to fetch that specific row from the dataframe
recipes.iloc[135598]
# look at the ingredients string
recipes.iloc[135598]['ingredients']
```
* WOW! That is a lot of ingredients! That might need to be cleaned by hand instead of a machine
* What other questions can we ask of the recipe data?
```
# How many breakfasts?
recipes.description.str.contains('[Bb]reakfast').sum()
# How many have cinnamon as an ingredient?
recipes.ingredients.str.contains('[Cc]innamon').sum()
# How many misspell cinnamon as cinamon?
recipes.ingredients.str.contains('[Cc]inamon').sum()
```
---
## Merging Datasets
One of the tasks you will need to do for your final project, and in the wide world of data munging, is combining disparate datasets together into a single set.
### Merging the same Data
Sometimes you have the same data, but it has been broken up over multiple files (over time or some other distinction). Ultimately what you want is a single dataframe that contains all the data from separate files (or dataframes). Let's load some data into three separate dataframes and then smoosh them together.
```
# Load the data for April, May, and June
april_url = "https://data.wprdc.org/datastore/dump/043af2a6-b58f-4a2e-ba5f-7ef868d3296b"
may_url = "https://data.wprdc.org/datastore/dump/487813ec-d7bc-4ff4-aa74-0334eb909142"
june_url = "https://data.wprdc.org/datastore/dump/d7fd722c-9980-4f7a-a7b1-d1a55a365697"
april_acj_data = pd.read_csv(april_url)
may_acj_data = pd.read_csv(may_url)
june_acj_data = pd.read_csv(june_url)
# inspect the dataframes
april_acj_data.head()
# inspect the dataframes
may_acj_data.head()
# inspect the dataframes
june_acj_data.head()
```
As you can see, we have three dataframes with the Allegheny County Jail census for three months. All of the columns are the same so the merge will be relatively straightforward, we just have to concatenate the three dataframes together. Following the Pandas documentation on [Merging, joining, and concatenating objects](https://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-objects), I will use the [`concat()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html#pandas.concat) function to smoosh the three dataframes into a single dataframe.
```
# put the dataframes I want to smoosh together into a python list
monthly_dataframes = [april_acj_data, may_acj_data, june_acj_data]
# use the concat fuction to put them together into a new dataframe
ajc_data = pd.concat(monthly_dataframes)
# sample 5 random rows from the dataframe so I can (hopefully) see entries
# from each of the three months
ajc_data.sample(5)
```
Use the `concat()` function to merge identical datasets together. But what if your data don't line up? What do you do then?
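When the column sets only partially overlap, `concat()` still works; it aligns on column names and fills the gaps with NaN. A toy sketch (made-up data):

```python
import pandas as pd

df_a = pd.DataFrame({"name": ["ann", "bob"], "age": [30, 40]})
df_b = pd.DataFrame({"name": ["carla"], "city": ["pittsburgh"]})

# columns are aligned by name; cells with no source value become NaN
combined = pd.concat([df_a, df_b], ignore_index=True, sort=False)
print(combined)
```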
### Merging different data with overlapping columns
The [PGH 311 Data](https://data.wprdc.org/dataset/311-data) provides a good example for connecting datasets that don't line up, but are still connectable because they share columns. First, let's load up the 311 data.
```
# Load the 311 data into a dataframe
url = "https://data.wprdc.org/datastore/dump/76fda9d0-69be-4dd5-8108-0de7907fc5a4"
pgh_311_data = pd.read_csv(url)
pgh_311_data.head()
```
Now one of the things I like to do with the 311 data is count requests by type.
```
# count all the unique values in the column REQUEST_TYPE
pgh_311_data['REQUEST_TYPE'].value_counts()
# make a HUGE horizontal bar chart so we can see the distribution of 311 complaints
# it took me a bunch of guesses to figure out the right figure size
pgh_311_data['REQUEST_TYPE'].value_counts(ascending=True).plot.barh()
# make a HUGE horizontal bar chart so we can see the distribution of 311 complaints
# it took me a bunch of guesses to figure out the right figure size
pgh_311_data['REQUEST_TYPE'].value_counts(ascending=True).plot.barh(figsize=(10,50))
```
Sweet! But there are 284 different types of requests, this is not very useful. Fortunately the 311 data has a [code book](https://data.wprdc.org/dataset/311-data/resource/7794b313-33be-4a8b-bf80-41751a59b84a) that rolls the request types into a set of higher level categories. Note, the code book is a Microsoft Excel file so we got to use the `read_excel()` function instead of `read_csv()`.
```
# load the 311 data code book
url = "https://data.wprdc.org/dataset/a8f7a1c2-7d4d-4daa-bc30-b866855f0419/resource/7794b313-33be-4a8b-bf80-41751a59b84a/download/311-codebook-request-types.xlsx"
pgh_311_codes = pd.read_excel(url) # parse the excel sheet
pgh_311_codes.sample(10) # pull ten random rows
```
So we loaded the codebook into a separate dataframe, and if we look at it we can see how the `REQUEST_TYPE` from the data corresponds to `Issue` in the code book. Additionally, we can see there is a higher level `Category` associated with each issue in the code book.
```
# find the row for "Potholes"
query = pgh_311_codes['Issue'] == 'Potholes'
pgh_311_codes[query]
# find the row for "Weeds/Debris"
query = pgh_311_codes['Issue'] == 'Weeds/Debris'
pgh_311_codes[query]
# find the row for "Building Maintenance
query = pgh_311_codes['Issue'] == 'Building Maintenance'
pgh_311_codes[query]
```
If you look at the data you will notice that both "Weeds/Debris" and "Building Maintenance" belong to the same category of "Neighborhood Issues." Using this mapping we can hopefully make a bit more sense of the data.
Now what we need to do is `merge()` the data. We can look to the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging) for some explanation of how to use this function to combine two datasets with overlapping columns that have different names.
In our case what we want to do is *merge* the codebook into the 311 data and add a new column for the category.
```
# merge the two dataframes on the REQUEST_TYPE and ISSUE columns
pgh_311_data_merged = pgh_311_data.merge(pgh_311_codes, left_on="REQUEST_TYPE", right_on="Issue")
pgh_311_data_merged.sample(10)
# count the numbers of unique values in the Category column
pgh_311_data_merged['Category'].value_counts()
```
This is a much more manageable set of categorical values!
```
# make a bar chart of the categories for the merged data
pgh_311_data_merged['Category'].value_counts(ascending=True).plot.barh(figsize=(10,10))
```
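One thing to watch with `merge()`: the default is an inner join, so any `REQUEST_TYPE` values with no matching `Issue` in the codebook are silently dropped. A small sketch with toy, made-up data shows the behavior:

```python
import pandas as pd

data = pd.DataFrame({"REQUEST_TYPE": ["Potholes", "Mystery"]})
codes = pd.DataFrame({"Issue": ["Potholes"],
                      "Category": ["Road/Street Issues"]})

inner = data.merge(codes, left_on="REQUEST_TYPE", right_on="Issue")
left = data.merge(codes, left_on="REQUEST_TYPE", right_on="Issue", how="left")

print(len(inner))  # 1 -- the unmatched "Mystery" row is dropped
print(len(left))   # 2 -- kept, with NaN for the codebook columns
```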
Now we can dive into specific categories and find out more.
```
# create a query mask for rows where the Category is equal to the value "Road/Street Issues"
query = pgh_311_data_merged['Category'] == "Road/Street Issues"
# find the rows matching the query, select the Issue column and count the unique values
pgh_311_data_merged[query]['Issue'].value_counts()
# create a query mask for rows where the Category is equal to the value "Road/Street Issues"
query = pgh_311_data_merged['Category'] == "Road/Street Issues"
# find the rows matching the query, select the Issue column and count the unique values and make a bar chart
pgh_311_data_merged[query]['Issue'].value_counts(ascending=True).plot.barh(figsize=(10,10))
```
---
An isolated groupby from the Counting categorical values example
* count the attendance per center
```
# Do the same thing with pandas
center_attendance_pandas.groupby('center_name')['attendance_count'].sum().sort_values(ascending=False)
```
---
## Pivoting Data
Let's look at one of the most exciting datasets in the WPRDC, the [Daily Community Center Attendance records](https://data.wprdc.org/dataset/daily-community-center-attendance)! WOWOW!
```
data_url = "https://data.wprdc.org/datastore/dump/b7cb30c8-b179-43ff-8655-f24880b0f578"
# load data and read in the date column as the row index
data = pd.read_csv(data_url, index_col="date", parse_dates=True)
data = data.drop(columns="_id")
data.head()
# What does the data look like?
data.plot()
```
We can pivot the data so the center names are columns and each row is the number of people attending that community center per day. This is basically rotating the data.
```
# Use the pivot function to make column values into columns
data.pivot(columns="center_name", values="attendance_count").head()
data.head(10)
```
That is a lot of NaN, and not the tasty garlicky kind either.
We might want to break this apart for each Community Center. We can start by inspecting the number rows per center.
```
# count the number of rows per center
data.groupby("center_name").count()
```
There are a lot of community centers that don't have a lot of numbers, either because 1) they are not very popular or 2) they don't report their daily attendance (more likely, given how many NaNs we saw above).
What we will do is create a custom filter function that we will apply to every group in the dataframe using the [groupby filter function](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.filter.html). This is some gnarly stuff we are doing here. This isn't the plain old filter function, this is a special filter function (part of the groupby functionality) that requires you to supply a function to apply to each group. In our case we will make a little function that takes a group and tests to see if it has more than a threshold number of rows (in our case 1000).
```
# create a function we will use to perform a filtering
# operation on the data
# filter out centers that have fewer than 1000 total entries
def filter_less_than(x, threshold):
#print(x)
if len(x) > threshold:
return True
else:
return False
# def filter_less_than(x):
# if len(x) > 1000:
# return True
# else:
# return False
# use the custom function to filter out rows
popular_centers = data.groupby("center_name").filter(filter_less_than,
threshold=1000)
# look at what centers are in the data now
popular_centers.groupby("center_name").count()
# plot the popular community centers
popular_centers.plot()
# Use the pivot function to make rows into columns with only the popular community centers
pivoted_data = popular_centers.pivot(columns="center_name", values="attendance_count")
pivoted_data.head()
```
Still NaN-y, but not as bad. Now lets see what these data look like.
```
# plot the data
pivoted_data.plot(figsize=(10,10))
```
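Note that `sum()` and `cumsum()` skip NaN by default. If you would rather treat a missing report as zero attendance, that assumption can be made explicit with `fillna(0)` (toy, made-up data; whether zero is the right fill depends on *why* the data are missing):

```python
import numpy as np
import pandas as pd

pivoted = pd.DataFrame({"brookline": [10.0, np.nan, 20.0],
                        "jefferson": [np.nan, 5.0, 5.0]})
filled = pivoted.fillna(0)      # missing report counted as zero attendance
print(filled.sum().tolist())    # [30.0, 10.0]
```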
Look at the [cumulative sum](http://www.variation.com/cpa/help/hs108.htm) to see if the attendance is above or below average.
```
# compute the cumulative sum for every column and make a chart
pivoted_data.cumsum().plot(figsize=(10,10))
```
Looks like Brookline is the winner here, but attendance has tapered off in the past couple years.
```
# Resample and compute the monthly totals for the popular community centers
pivoted_data.resample("M").sum().plot(figsize=(10,10))
```
Looks like monthly is too messy, maybe by year?
```
# resample to yearly, compute the totals, and plot
pivoted_data.resample("Y").sum().plot(figsize=(10,10))
data.pivot(columns="center_name", values="attendance_count").resample("Y").sum().plot(figsize=(20,10))
```
Looking at the total number of attendance per year per popular community center gives us a bit more information.
---
## Split, Apply, Combine with numeric data
* The 311 complaints are mainly categorical data, which doesn't let us do more mathematical aggregations
* Lets grab a different dataset from the WPRDC, the [Allegheny County Jail Daily Census](https://data.wprdc.org/dataset/allegheny-county-jail-daily-census)
```
# Grab three months of data
january17_jail_census = pd.read_csv("https://data.wprdc.org/datastore/dump/3b5d9c45-b5f4-4e05-9cf1-127642ad1d17",
parse_dates=True,
index_col='Date')
feburary17_jail_census = pd.read_csv("https://data.wprdc.org/datastore/dump/cb8dc876-6285-43a8-9db3-90b84eedb46f",
parse_dates=True,
index_col='Date')
march17_jail_census = pd.read_csv("https://data.wprdc.org/datastore/dump/68645668-3f89-4831-b1de-de1e77e52dd3",
parse_dates=True,
index_col='Date')
january17_jail_census.head()
# Use the concat function to combine all three into one dataframe
# Remember I need to make a list of the all the dataframes for
# the concat fuction
jail_census = pd.concat([january17_jail_census,
feburary17_jail_census,
march17_jail_census])
jail_census
# remove the "_id" column because it is not useful
jail_census.drop("_id", axis=1, inplace=True)
jail_census
# get just the first day in February 2017
jail_census.loc["2017-02-01"]
# Compute the average age at booking by gender for February 1st, 2017
jail_census.loc['2017-02-01'].groupby('Gender')['Age at Booking'].mean()
# compute the average age at booking by race for February 1st, 2017
jail_census.loc['2017-02-01'].groupby('Race')['Age at Booking'].mean()
```
If we look at the [data dictionary](https://data.wprdc.org/dataset/allegheny-county-jail-daily-census/resource/f0550174-16b0-4f6e-88dc-fa917e74b56c) we can see the following mapping for race categories
```
Race of Inmate
A-ASIAN OR PACIFIC ISLANDER
B-BLACK OR AFRICAN AMERICAN
H-HISPANIC
I-AMERICAN INDIAN OR ALASKAN NATIVE
U-UNKNOWN
W-WHITE
```
The `x` category hasn't been described.
```
# how many total rows in the dataset have "x" for race
jail_census['Race'].value_counts()['x']
# Get the statistical summary of age at booking by gender for February 1st, 2017
jail_census.loc['2017-02-01'].groupby('Gender')['Age at Booking'].describe()
# Compute the difference between Age at Booking and current age
age_difference = jail_census.loc['2017-02-01']['Current Age'] - jail_census.loc['2017-02-01']['Age at Booking']
age_difference.value_counts()
# Compute the average age for each day
jail_census.resample("D").mean()
# What is with those NaNs?
jail_census.loc['2017-03-19']
# visualize the number of inmates
jail_census.resample("D").size().plot()
```
---
## Parsing Time
Often there is date/time data in one of the columns of your dataset. In this case `CREATED_ON` appears to be a date/time for when the 311 complaint was lodged. Unless you specify `parse_dates=True` when you call `read_csv()`, you will need to re-parse your date/time column into the correct datatype.
For example, if we look at the datatypes for all of the columns in our potholes-in-Bloomfield dataset, we can see that `CREATED_ON` has not been parsed as a date.
```
# inspect the datatypes for each column in the data
bloomfield_pothole_data.info()
```
Let's fix that! First we parse the `CREATED_ON` column using the `to_datetime()` function. What this does is loop over every value in the column and convert it to a datetime data type.
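A minimal illustration of what `to_datetime()` does to a column of strings (toy data):

```python
import pandas as pd

df = pd.DataFrame({"CREATED_ON": ["2018-01-05 09:30:00",
                                  "2018-02-11 14:00:00"]})
print(df["CREATED_ON"].dtype)              # object -- just strings
df["CREATED_ON"] = pd.to_datetime(df["CREATED_ON"])
print(df["CREATED_ON"].dtype)              # datetime64[ns]
print(df["CREATED_ON"].dt.month.tolist())  # [1, 2]
```

Once the column is a real datetime type, the `.dt` accessor and time-based indexing become available.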
**Important Note**: Even though we just want to look at the potholes for Bloomfield, we need to do this operation on the full data, not the subselection of potholes in Bloomfield. Why? It has to do with the way Pandas manages the data behind the scenes: our `bloomfield_pothole_data` is actually a *view* into the larger dataframe, `pgh_311_data_merged`. This means we should change the original data, because then we'll see it reflected in our bloomfield/potholes subset. Changing the original data as opposed to our subset is also good practice because we might want to look at the temporal distribution for other types of 311 requests or other neighborhoods.
```
# replace the CREATED_ON column with parsed dates
pgh_311_data_merged['CREATED_ON'] = pd.to_datetime(pgh_311_data_merged['CREATED_ON'])
pgh_311_data_merged.info()
```
Sweet, now that Pandas is aware of dates we can start doing operations on that data.
```
# ReCreate a query mask for potholes
query_potholes = pgh_311_data_merged['REQUEST_TYPE'] == "Potholes"
# Create a query mask for bloomfield
query_bloomfield = pgh_311_data_merged['NEIGHBORHOOD'] == "Bloomfield"
# create a new dataframe that queries potholes AND bloomfield
bloomfield_pothole_data = pgh_311_data_merged[query_potholes & query_bloomfield]
# inspect the new dataframe
print(bloomfield_pothole_data.shape)
bloomfield_pothole_data.head()
# notice the datatype has changed in our subset of the data
bloomfield_pothole_data.info()
# make a temporal index by setting it equal to CREATED_ON
bloomfield_pothole_data.index = bloomfield_pothole_data['CREATED_ON']
bloomfield_pothole_data.info()
# Resample (grouping) by month ("M") and counting the number of complaints
bloomfield_pothole_data['REQUEST_ID'].resample("M").count().plot(figsize=(10,6))
```
It looks like Bloomfield had a MASSIVE spike in pothole complaints this past winter. You can also see a seasonal pattern: complaints are lowest right before the new year, spike in the spring, and fall off again through the autumn.
---
### Merging Data
* Bringing disparate datasets together is one of the more powerful features of Pandas
* Like with Python lists, you can `append()` and `concat()` Pandas `Series` and `Dataframes`
* These functions work best for simple cases
```
# concatenate two series together
ser1 = pd.Series(['A', 'B', 'C'], index=[1, 2, 3])
ser2 = pd.Series(['D', 'E', 'F'], index=[4, 5, 6])
pd.concat([ser1, ser2])
# concatenate two dataframes
df1 = pd.DataFrame({"A":["A1", "A2"],
"B":["B1","B2"]},index=[1,2])
df2 = pd.DataFrame({"A":["A3", "A4"],
"B":["B3","B4"]},index=[3,4])
pd.concat([df1,df2])
# concatenate dataframes horizontally
df1 = pd.DataFrame({"A":["A1", "A2"],
"B":["B1","B2"]},index=[1,2])
df2 = pd.DataFrame({"C":["C1", "C2"],
"D":["D1","D2"]},index=[1,2])
pd.concat([df1,df2], axis=1)
# What happens when indexes don't line up
df1 = pd.DataFrame({"A":["A1", "A2"],
"B":["B1","B2"]},index=[1,2])
df2 = pd.DataFrame({"A":["A3", "A4"],
"B":["B3","B4"]},index=[3,4])
pd.concat([df1,df2], axis=1)
# create a hierarchical index
df1 = pd.DataFrame({"A":["A1", "A2"],
"B":["B1","B2"]},index=[1,2])
df2 = pd.DataFrame({"A":["A3", "A4"],
"B":["B3","B4"]},index=[3,4])
pd.concat([df1,df2], keys=["df1", 'df2'])
```
### Merging and Joining
* While `concat()` is useful it lacks the power to do complex data merging
* For example, I have two tables of different data but one overlapping column
* This is where the `merge()` function becomes useful because it lets you *join* datasets
* The concept of "join" has lots of theory and is a richly developed method for *joining* data
#### One-to-one joins
```
# create two dataframes with one shared column
df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'],
'group': ['Accounting', 'Engineering', 'Engineering', 'HR']})
df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'],
'hire_date': [2004, 2008, 2012, 2014]})
# display df1
df1
# display df2
df2
# merge df1 and df2 into a new dataframe df3
df3 = pd.merge(df1, df2)
df3
```
* The new dataframe `df3` now has all of the data from df1 and df2
* The `merge` function automatically connected the two tables on the shared "employee" column
* But what happens when your data don't line up?
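By default `merge()` performs an *inner* join, keeping only rows whose keys appear in both tables; the `how` parameter (`'left'`, `'right'`, `'outer'`) controls what happens to non-matching rows. A small self-contained sketch (the names here are made up):

```python
import pandas as pd

# two small hypothetical tables that only partially overlap
left = pd.DataFrame({'employee': ['Bob', 'Jake'],
                     'group': ['Accounting', 'Engineering']})
right = pd.DataFrame({'employee': ['Bob', 'Sue'],
                      'hire_date': [2008, 2014]})

inner = pd.merge(left, right, how='inner')   # keeps only Bob, the one match
outer = pd.merge(left, right, how='outer')   # keeps everyone; gaps become NaN
print(inner.shape, outer.shape)
```

Choosing the right `how` is mostly a question of whether you can afford to silently drop the non-matching rows.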
#### Many-to-one joins
* Sometimes there isn't a one-to-one relationship between rows in the two datasets
* A *many-to-one* join lets you combine these datasets
```
df3
# make another dataframe about the supervisor for each group
df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'],
'supervisor': ['Carly', 'Guido', 'Steve']})
df4
# Merge df3 from above with the supervisor info in df4
pd.merge(df3,df4)
```
* Notice how the information about Guido, the manager for Engineering, is repeated.
* While this might seem like duplicated data, it makes it easier to quickly look up Jake and Lisa's supervisor without consulting multiple tables
#### Many-to-many joins
* Let's combine the employee information with skills information
* Notice there isn't a one-to-one or even a many-to-one relationship between these tables
* Each group can have multiple skills, so **what do you think will happen?**
```
# Use the employee table specified above
df1
# create a new dataframe with skills information
df5 = pd.DataFrame({'group': ['Accounting', 'Accounting',
'Engineering', 'Engineering', 'HR', 'HR', 'Librarian'],
'skills': ['math', 'spreadsheets', 'coding', 'linux',
'spreadsheets', 'organization', 'nunchucks']})
df5
pd.merge(df1, df5)
```
* Amazing, Pandas merge capabilities are very useful
* But what do you do if the names of your columns don't match?
* You could change column names...
* But that is crazy! Just use the `left_on` and `right_on` parameters to the `merge()` function
```
# Use the employee table specified above
df1
# Create a new salary table, but use "name" instead of "employee" for the column index
df3 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'],
'salary': [70000, 80000, 120000, 90000]})
df3
# let's try to merge them without specifying what to merge on
# this raises a MergeError because the two tables have no columns in common
pd.merge(df1, df3)
```
* What are the column names I should specify?
```
# Now lets specify the column name
pd.merge(df1, df3, left_on="employee", right_on="name" )
```
* Notice we now have redundant employee/name columns; this is a by-product of merging on differently named columns
* If you want to get rid of it you can use the `drop` method
```
# drop the redundant name column; axis=1 (equivalently axis='columns') drops a column rather than a row
pd.merge(df1, df3, left_on="employee", right_on="name" ).drop("name", axis=1)
```
---
## Plotting with Pandas
* You can plot directly from `pandas` data structures
* Pandas [has its own interface](https://pandas.pydata.org/pandas-docs/stable/visualization.html#) to matplotlib tied directly to the `Series` and `Dataframe` data structures
```
# We need to import numpy for generating random data
import numpy as np
```
* **Important!** You need the following code to render plots inside of Jupyter
```
# Tell matplotlib to render visualizations in the notebook
%matplotlib inline
# create some random data
x = np.linspace(0, 10, 100)
# put that data into a dataframe
df = pd.DataFrame({"y":np.sin(x), "z":np.cos(x)}, index=x)
df.head()
# Plot the data using the plot method
df.plot();
```
* Basically, you can add `.plot()` to the end of almost any Pandas data structure and it will make a reasonable guess at the best way to visualize it.
```
# Plot data in a Series with the plot method
pd.Series(np.random.randint(0,10,10)).plot();
```
* However, be careful calling `.plot()` all willy nilly since it doesn't always produce sensible results
```
# create some random time series data and create a default plot
random_series = pd.Series(np.random.randn(1000),
index=pd.date_range('1/1/2000', periods=1000))
random_series.plot();
```
* What is cool is you can often use the `.plot()` method after performing some computation on the data
* For example, we can calculate the [cumulative sum](http://www.variation.com/cpa/help/hs108.htm); because this series has mean roughly zero, the running total behaves like the cumulative sum of differences between the values and the average
* Sloping up means above average, sloping down means below average
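Concretely, `cumsum()` is just a running total of the values; a tiny sketch with made-up numbers:

```python
import pandas as pd

# each output element is the sum of all inputs up to that point
s = pd.Series([1, -2, 3, -1])
print(s.cumsum().tolist())  # [1, -1, 2, 1]
```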
```
# Plot the cumulative sum of a Series
random_series.cumsum().plot()
```
* The `.plot()` trick also works with Dataframes
```
# create dataframe with four columns and create a default plot
df = pd.DataFrame(np.random.randn(1000, 4), index=random_series.index, columns=list('ABCD'))
df.head()
# just plot the dataframe and see what happens
df.plot();
```
* Messy! Let's try the cumulative sum trick and see if that looks any better
```
# Plot the cumulative sum of each column
df.cumsum().plot();
```
* With pandas you can specify the kind of visualization with the `kind` parameter to `plot()`
* The default isn't always what you want
```
# plot the sum of the columns
df.sum().plot()
```
* This is a *bad* visualization; the line implies an ordered relationship between the four categories
* Let's use a bar chart instead
```
# plot the sum of the columns as bars
df.sum().plot(kind='bar')
```
* Almost got it, but the labels on the x axis are a bit wonky.
* Let's look at the documentation and see if we can find a fix
#### Quick Exercise
* Find the documentation for the `plot()` method of a Pandas `Series`
* *HINT*: Try Googling
* What parameter will fix the x labels so they are easier to read?
```
animals = pd.Series([1,5,2,5], index=["cats", "dogs", "chickens", "spiders"])
animals.plot(kind="bar");
# answer to the exercise
animals = pd.Series([1,5,2,5], index=["cats", "dogs", "chickens", "spiders"])
animals.plot(kind="bar", rot=0);
```
### Pandas Plot types
* Pandas provides a quick and easy interface to a bunch of different plot types
* You don't even have to load `matplotlib` (although you do need `%matplotlib inline`)
* The secret to plotting is Googling, looking at other people's code, and trying things until it works
* At least, that is how I do it
* What is nice about pandas/matplotlib integration is pandas will handle a lot of the boilerplate code for you
* Then you pass parameters to the `plot()` method to determine how the graph should look
```
# create some random categorical data
df2 = pd.DataFrame(np.random.randint(1,100,size=(7,4)),
columns=['Carbs', 'Fats', 'Proteins', 'Other'],
index=["M","Tu","W","Th","F","Sa","Su",])
# Plot a bar chart
df2.plot(kind="bar")
```
* Bar charts can also be called directly using the `bar()` function
```
df2.plot.bar()
```
* There are a bunch of parameters for these methods that let you tweak the visualization
* For example, the `stacked` parameter stacks the categorical values so you can easily compare within and across categories
```
df2.plot.bar(stacked=True, rot=0)
```
#### Exercise
* Try experimenting with the other plot types
* Do they make sense for these data?
```
# move the cursor to the right of the period and hit tab
df2.plot.
# try another plot type
# move the cursor to the right of the period and hit tab
df2.plot.
```
---
### Working with Time
* One of the most powerful features of Pandas is its time series functionality
* Dates and time are a Python and Pandas data type (like integers and strings)
* By using the `datetime` data types you can do advanced, time-centric analysis
* One thing to remember about computers is they are *very* specific
* *Time stamps* - a specific moment in time (July 4th, 2017 at 7:52am and 34 seconds)
* *Time intervals* - a length of time with start and end points (The year 2017)
* *Time duration* - a specific length of time (a year, a month, a day)
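Pandas has a class for each of these three concepts; a quick self-contained sketch:

```python
import pandas as pd

stamp = pd.Timestamp("2017-07-04 07:52:34")  # a time stamp: one specific moment
interval = pd.Period("2017", freq="Y")       # a time interval: the year 2017
duration = pd.Timedelta(days=1)              # a time duration: one day

print(stamp + duration)  # stamp + duration arithmetic gives a new stamp
print(interval.start_time, interval.end_time)
```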
```
# Datetime in pure Python
import datetime
date = datetime.datetime(year=2017, month=6, day=13)
date
type(date)
# what is that date's month?
date.month
# what is that date's day?
date.day
# use the parser function in the dateutil library to parse human-readable dates
from dateutil import parser
date = parser.parse("4th of July, 2017")
date
# get the month
date.month
```
#### Exercise
Try some different date strings, see how smart Python can be.
```
my_date = parser.parse("<your date string here>")
my_date
```
* You can use [*string format codes*](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) for printing dates and time in different formats (especially useful for making human readable dates)
* Pass a format string to the `strftime()` method to print out a pretty date
```
# Get the weekday
date.strftime("%A")
date.strftime("%B")
## Try some of the different string format codes and see what happens
date.
## Try combining a few of them together with punctuation too
date.
```
### Working with time in Pandas
* Just like how Pandas has its own datatypes for numbers, Pandas has its own dates and times (to support more granularity)
* If you have a lot of dates, it is often useful to use the Pandas functions over the native Python functions
* Pandas is most powerful when you index by time using the `DatetimeIndex`
```
# Create a Series with a DateTime index
index = pd.DatetimeIndex(['2014-03-04', '2014-08-04',
'2015-04-04', '2015-09-04',
'2016-01-01', '2016-02-16'])
data = pd.Series([0, 1, 2, 3, 4, 5], index=index)
data
```
* Now that the index is made of DateTimes we can index using date strings
* Note, this only works on strings
```
# grab the value for a specific day
data["2015-04-04"]
# grab a slice between two dates
data['2014-08-01':'2016-01']
# give me everything from 2015
data['2015']
```
* Pandas has some functions to make parsing dates easy too
```
# use the to_datetime function instead of the parser function
date = pd.to_datetime("4th of July, 2017")
date
# use string format codes to get the weekday
date.strftime("%A")
# give me today's date
today = pd.to_datetime("today")
today
```
* That is the day, but also the exact time...
* Timestamps must always be a specific moment
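If you only want the date portion, one option is `normalize()`, which keeps the date and sets the time-of-day to midnight (a small sketch):

```python
import pandas as pd

now = pd.to_datetime("today")
midnight = now.normalize()  # same date, time set to 00:00:00
print(midnight)
print(midnight.strftime("%Y-%m-%d"))
```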
#### Exercise
* Use the [*string format codes*](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to print today's date in the "YYYY-MM-DD" format. HINT: You will have to combine multiple codes and dashes
```
# Replace the ??? with the right string format code
print(today.strftime("???"))
```
### Playing with time on real data
* Let's look at the [311 data for the city of Pittsburgh](https://data.wprdc.org/dataset/311-data) from the WPRDC
* Did you know, you can give the URL directly to Pandas!
```
# load the 311 data from a local CSV copy (a WPRDC URL would work here too)
pgh_311_data = pd.read_csv("311_data.csv")
pgh_311_data.head()
```
* Ok, now we have the data, but we need it to be indexed by date
* **What column has the date information?**
* **What format do you think that column is currently in?**
* **What function might we use to convert that column into dates?**
```
pgh_311_data['CREATED_ON'].head()
# convert the "CREATED_ON" column to dates
pd.to_datetime(pgh_311_data['CREATED_ON']).head()
```
* We can convert the "CREATED_ON" column to Pandas `datetime` objects
* Now we have to set that to the dataframe's index
```
# set the index of pgh_311_data to be the parsed dates in the "CREATED_ON" column
pgh_311_data.index = pd.to_datetime(pgh_311_data['CREATED_ON'])
pgh_311_data.head()
```
* D'oh, now we have CREATED_ON twice, which isn't very tidy
* We can also skip this extra conversion step entirely by specifying the index column and date parsing in `read_csv()` function call.
```
# load the 311 data directly from the WPRDC and parse dates directly
pgh_311_data = pd.read_csv("311_data.csv",
index_col="CREATED_ON",
parse_dates=True)
pgh_311_data.head()
pgh_311_data.info()
```
* Now that the dataframe has been indexed by time, we can select 311 complaints by time
```
# Select 311 complaints on January 1st, 2016
pgh_311_data['2016-01-01']
# Select the times just around the new years celebration
pgh_311_data["2015-12-31 20:00:00":"2016-01-01 02:00:00"]
```
* Someone clearly had a very rowdy New Year's
#### Exercise
* Using the timeseries index selection, select the complaints made today
* Bonus, try and write your code so it will work on any day you execute it
* *hint*: try `pd.to_datetime('today')`
* *Another hint*: Remember the DateTime gives you the exact time
* *Yet another hint*: Datetime indexing only works with string representations
```
# Write your code here
pgh_311_data[]
# create a Pandas datetime for today
today = pd.to_datetime("today")
formatted_today_string = today.strftime("%Y-%m-%d")
print(today)
print(formatted_today_string)
# use Pandas date string indexing to retrieve all rows for this today's date
todays_311s = pgh_311_data[formatted_today_string]
todays_311s
```
### Grouping time with the resample method
* Instead of using the `groupby()` method, you use the `resample()` method to *split* time into groups
* Then you can *apply* the regular aggregation functions
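The split/apply pattern is easiest to see on a tiny synthetic series (this is made-up data, not the 311 dataset; the `"M"` monthly alias matches the rest of this notebook, though newer pandas prefers `"ME"`):

```python
import pandas as pd

# sixty consecutive days of hypothetical daily counts, one event per day
idx = pd.date_range("2016-01-01", periods=60, freq="D")
complaints = pd.Series(1, index=idx)

# split into month-sized bins, then apply count to each bin
monthly = complaints.resample("M").count()
print(monthly)
```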
```
# compute the mean of complaints per quarter...note this doesn't make sense, but works anyway
pgh_311_data.resample("Q").mean()
# count the number of complaints per month
pgh_311_data.resample("M").count()
```
* Ok, these data are *begging* to be visualized, so I'm going to give you a teaser of next week
```
# load up the data visualization libraries
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set()
# Create a graph of the monthly complaint counts
pgh_311_data['REQUEST_ID'].resample("M").count().plot()
```
Try the code above, but re-sampling based upon different date periods. The strings for specifying an offset are located [here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases)
```
# Try a different resampling here
# Try yet another resampling here
```
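For instance, the weekly alias `"W"` groups the same kind of synthetic data into week-sized bins (again a self-contained sketch, not the 311 data):

```python
import pandas as pd

# three weeks of hypothetical daily events starting on a Friday
idx = pd.date_range("2016-01-01", periods=21, freq="D")
s = pd.Series(1, index=idx)

# "W" bins end on Sundays, so the first (partial) week holds only Fri-Sun
weekly = s.resample("W").count()
print(weekly)
```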
* OK, we've done some "fun" stuff with Time, but maybe we want to start doing deeper analysis
* To do that, we need to know what all these columns mean
* Fortunately, this dataset has a [data dictionary](https://data.wprdc.org/dataset/311-data/resource/d3e98904-4a86-45fb-9041-0826ab8d56d0), which provides a bit more information.
---
## Querying Data
* It is sometimes helpful to think of a Pandas Dataframe as a little database.
* There is data and information stored in the Pandas Dataframe (or Series) and you want to *retrieve* it.
* Pandas has multiple mechanisms for getting specific bits of data and information from its data structures. The most common is to use *masking* to select just the rows you want.
* Masking is a two stage process, first you create a sequence of boolean values based upon a conditional expression--which you can think of as a "query"--and then you index your dataframe using that boolean sequence.
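The two stages are easiest to see on a tiny made-up frame (hypothetical data, not the community center dataset used below):

```python
import pandas as pd

df = pd.DataFrame({"center_name": ["Brookline", "Magee", "Brookline"],
                   "attendance": [10, 20, 30]})

# stage 1: a boolean Series -- the "query"
mask = df["center_name"] == "Brookline"
# stage 2: index the dataframe with the boolean Series
brookline = df[mask]
print(brookline)
```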
```
center_attendance_pandas.head(10)
```
* What if we only wanted to see attendance for Brookline Community Center
```
query = center_attendance_pandas["center_name"] == "Brookline Community Center"
brookline_center_attendance = center_attendance_pandas[query]
brookline_center_attendance.head(10)
# create queries for brookline and greenfield
brookline_query = center_attendance_pandas["center_name"] == "Brookline Community Center"
greenfield_query = center_attendance_pandas["center_name"] == "Magee Community Center"
# use the boolean OR operator to select both community centers
center_attendance_pandas[brookline_query | greenfield_query]
```
---
### Exploring the 311 Data
* Now we can use what we have learned to do some exploratory data analysis on the 311 data
* First, lets use the `sample()` method to grab 10 random rows so we can get a feel for the data
```
# Sample 10 random rows from the dataframe
pgh_311_data.sample(10)
```
#### Exercise
* What are the possible *origins* of complaints?
* How many complaints are coming from each source?
*HINT*: Scroll back up to the Dataframes refresher at the top of the notebook.
```
pgh_311_data['REQUEST_ORIGIN'].value_counts()
```
#### Exercise
* *Group* the complaints *by* neighborhood and get the *size* of each group
```
pgh_311_data.groupby('NEIGHBORHOOD').size()
# Note, for just counting the groupby and value_counts are equivalent
# There is more than one way to skin the cat (or panda)
pgh_311_data['NEIGHBORHOOD'].value_counts()
```
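That equivalence is easy to check on a tiny synthetic frame (made-up rows, not the 311 data; note `value_counts()` sorts by frequency while `groupby().size()` sorts by label):

```python
import pandas as pd

df = pd.DataFrame({"NEIGHBORHOOD": ["A", "B", "A", "A"]})
by_group = df.groupby("NEIGHBORHOOD").size()
by_counts = df["NEIGHBORHOOD"].value_counts()

# same counts either way, just ordered differently
print(by_group.to_dict(), by_counts.to_dict())
```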
#### Exercise
* *Group* the complaints *by* type and get the *count* for each group
```
pgh_311_data.groupby("REQUEST_TYPE")['REQUEST_TYPE'].count()
```
This categorical data is far too granular.
Fortunately, if we look at the [311 Data](https://data.wprdc.org/dataset/311-data) we can see there is a [311 Issue and Category Codebook](https://data.wprdc.org/dataset/311-data/resource/40ddfbed-f225-4320-b4d2-7f1e09da72a4). Click on that link and check out the Google Sheets preview of that data.
What we need to do is download the CSV from Google Sheets directly into a Pandas dataframe, but this is actually a bit tricky because Google won't easily give us a link to the CSV file.
```
# I googled "pandas dataframe from google sheets"
# and found a solution on Stackoverflow
# https://stackoverflow.com/a/35246041
issue_category_mapping = pd.read_csv('https://docs.google.com/spreadsheets/d/' +
'1DTDBhwXj1xQG1GCBKPqivlzHQaLh2HLd0SjN1XBPUw0' +
'/export?gid=0&format=csv')
issue_category_mapping.head(5) # Same result as @TomAugspurger
```
#### Exercise
* Merge the `pgh_311_data` with the `issue_category_mapping` so we can count the number of complaints per category
* *HINT*: You will need to specify the `left_on` and `right_on` parameters
```
# create a new merged dataframe
merged_311_data = pd.merge(pgh_311_data,
issue_category_mapping,
left_on="REQUEST_TYPE",
right_on="Issue")
merged_311_data.head()
# get rid of redundant columns
merged_311_data.drop(['Definition','Department', 'Issue'],
axis=1,
inplace=True)
merged_311_data.head()
```
#### Exercise
* Now that we have category data, count the number of complaints by category
```
merged_311_data.groupby("Category")['Category'].count().sort_values(ascending=False)
merged_311_data.groupby("Category").size().sort_values(ascending=False)
```
* Selecting data in a Dataframe
```
# Select only rows where NEIGHBORHOOD equals "Greenfield" and then count how many complaints came from each source
merged_311_data[merged_311_data['NEIGHBORHOOD'] == 'Greenfield'].groupby('REQUEST_ORIGIN').size()
```
---
#### Challenge Exercise
Querying and subsetting with masks; time indexing; and visualizing the subset.
* Create a sub-dataset of just the potholes in a neighborhood
* Create a plot of the number of pothole complaints per month for that data subset
```
#Create a sub dataset of just the potholes in Highland Park
#Parse CREATED_ON on the full dataset first, so the subset inherits real datetimes
data_311['CREATED_ON'] = pd.to_datetime(data_311['CREATED_ON'])
potholes = data_311['REQUEST_TYPE'] == "Potholes"
highland_park = data_311['NEIGHBORHOOD'] == "Highland Park"
highlandpark_potholes = data_311[potholes & highland_park]
print(highlandpark_potholes.shape)
highlandpark_potholes.head()
#Plot the number of pothole complaints per month of that data set
#Use the parsed CREATED_ON column as the index so we can resample by month
highlandpark_potholes.index = highlandpark_potholes['CREATED_ON']
highlandpark_potholes['REQUEST_ID'].resample("M").count().plot(title="Highland Park Potholes", figsize=(10,6))
```
<a href="https://colab.research.google.com/github/lmiroslaw/DeOldify/blob/master/VideoColorizerColab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### **<font color='blue'> Video Colorizer </font>**
#◢ DeOldify - Colorize your own videos!
_FYI: This notebook is intended as a tool to colorize gifs and short videos; if you are trying to convert a longer video you may hit the limit on processing space. Running the Jupyter notebook on your own machine is recommended (and faster) for larger videos._
####**Credits:**
Big special thanks to:
Robert Bell for all his work on the video Colab notebook, and paving the way to video in DeOldify!
Dana Kelley for doing things, breaking stuff & having an opinion on everything.
---
#◢ Verify Correct Runtime Settings
**<font color='#FF000'> IMPORTANT </font>**
In the "Runtime" menu for the notebook window, select "Change runtime type." Ensure that the following are selected:
* Runtime Type = Python 3
* Hardware Accelerator = GPU
#◢ Git clone and install DeOldify
```
!git clone https://github.com/lmiroslaw/DeOldify.git DeOldify
cd DeOldify
```
#◢ Setup
```
#NOTE: This must be the first call in order to work properly!
from deoldify import device
from deoldify.device_id import DeviceId
#choices: CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)
import torch
if not torch.cuda.is_available():
print('GPU not available.')
from os import path
!pip install -r colab_requirements.txt
import fastai
from deoldify.visualize import *
from pathlib import Path
torch.backends.cudnn.benchmark=True
import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")
!mkdir 'models'
!wget https://data.deepai.org/deoldify/ColorizeVideo_gen.pth -O ./models/ColorizeVideo_gen.pth
!wget https://github.com/lmiroslaw/DeOldify/blob/4b5da3d12ee3d02eeda1155ebcc85025f1d7fb4b/resource_images/chckmk.png -O ./resource_images/watermark.png
colorizer = get_video_colorizer()
```
#◢ Instructions
### source_url
Type in a url hosting a video from YouTube, Imgur, Twitter, Reddit, Vimeo, etc. Many sources work! GIFs also work. Full list here: https://ytdl-org.github.io/youtube-dl/supportedsites.html NOTE: If you want to use your own video, upload it first to a site like YouTube.
### render_factor
The default value of 21 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the video is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality film in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality videos and inconsistencies (flashy render) will generally be reduced, but the colors may get slightly washed out.
### watermarked
Selected by default, this places a watermark icon of a palette at the bottom left corner of the image. This is intended to be a standard way to convey to others viewing the image that it is colorized by AI. We want to help promote this as a standard, especially as the technology continues to improve and the distinction between real and fake becomes harder to discern. This palette watermark practice was initiated and led by the company MyHeritage in the MyHeritage In Color feature (which uses a newer version of DeOldify than what you're using here).
### How to Download a Copy
Simply right click on the displayed video and click "Save video as..."!
## Pro Tips
1. If a video takes a long time to render and you're wondering how well the frames will actually be colorized, you can preview how well the frames will be rendered at each render_factor by using the code at the bottom. Just stop the video rendering by hitting the stop button on the cell, then run that bottom cell under "See how well render_factor values perform on a frame here". It's not perfect and you may still need to experiment a bit especially when it comes to figuring out how to reduce frame inconsistency. But it'll go a long way in narrowing down what actually works.
2. If videos are taking way too much time for your liking, running the Jupyter notebook VideoColorizer.ipynb on your own machine (with DeOldify installed) will generally be much faster (as long as you have the hardware for it).
3. Longer videos (running multiple minutes) are going to have a rough time on Colabs. You'll be much better off using a local install of DeOldify instead in this case.
## Troubleshooting
The video player may wind up not showing up; in that case, make sure to wait for the Jupyter cell to complete processing first (the play button will stop spinning), then follow these alternative download instructions:
1. In the menu to the left, click Files
2. If you don't see the 'DeOldify' folder, click "Refresh"
3. By default, rendered video will be in /DeOldify/video/result/
If a video you downloaded doesn't play, it's probably because the cell didn't complete processing and the video is in a half-finished state.
#◢ Colorize!!
```
source_url = '' #@param {type:"string"}
render_factor = 21 #@param {type: "slider", min: 5, max: 40}
watermarked = True #@param {type:"boolean"}
if source_url is not None and source_url !='':
video_path = colorizer.colorize_from_url(source_url, 'video.mp4', render_factor, watermarked=watermarked)
show_video_in_notebook(video_path)
else:
print('Provide a video url and try again.')
```
#◢ Download
```
from google.colab import files
files.download('video/result/video1.mp4')
```
## See how well render_factor values perform on a frame here
```
for i in range(10,40,2):
colorizer.vis.plot_transformed_image('video/bwframes/video/00001.jpg', render_factor=i, display_render_factor=True, figsize=(8,8))
```
---
#⚙ Recommended video and gif sources
* [Rendered result](./DeOldify/video/result/video.mp4)
* [/r/Nickelodeons/](https://www.reddit.com/r/Nickelodeons/)
* [r/silentmoviegifs](https://www.reddit.com/r/silentmoviegifs/)
* https://twitter.com/silentmoviegifs
# A Study of the Effect of $\lambda$ on CMA Performance
<link rel="stylesheet" href="http://yandex.st/highlightjs/6.2/styles/googlecode.min.css">
<script src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
<script src="http://yandex.st/highlightjs/6.2/highlight.min.js"></script>
<script>hljs.initHighlightingOnLoad();</script>
<script type="text/javascript">
$(document).ready(function(){
$("h2,h3,h4,h5,h6").each(function(i,item){
var tag = $(item).get(0).localName;
$(item).attr("id","wow"+i);
$("#category").append('<a class="new'+tag+'" href="#wow'+i+'">'+$(this).text()+'</a></br>');
$(".newh2").css("margin-left",0);
$(".newh3").css("margin-left",20);
$(".newh4").css("margin-left",40);
$(".newh5").css("margin-left",60);
$(".newh6").css("margin-left",80);
});
});
</script>
<div id="category"></div>
**Abstract**: The value of $\lambda$ affects the time per iteration. According to the documentation, a reasonable $\lambda$ lies in $[5, 2n+10]$, and Hansen's recommended default is $4+3\times \lfloor \ln(N) \rfloor$. In this notebook we fix mu=0.5 and sigma=0.3 and plot results for different functions across a range of $\lambda$ values.
### Phase 1 tests
* Functions: [rosen, bukin, griewank]
* Minima: [0, 6.82, 0]
* Dimensions: [130]
* $\lambda$:[5,18,20,50,80,110,140]
```
%pylab inline
import pandas as pd
from pandas import Series, DataFrame
import pickle
plt.rc('figure', figsize=(12, 8))
with open("data.tl", 'rb') as f:  # pickle files must be opened in binary mode in Python 3
    result_list = pickle.load(f)
def convertdic(result_list):
    res = [{}]
    for row in result_list:
        for i, d in enumerate(res):
            if row[-1] not in d.keys():
                d[row[-1]] = row[:-1]
                break
            if i == len(res) - 1:
                res.append({row[-1]: row[:-1]})
                break
    return res

def draw(title, tail):
    bs = [row[:tail] for row in result_list if row[tail] == title]
    bs = np.array(bs)
    lmax = max(bs[:, -1])
    bs = bs / bs.max(0)
    bs = bs * [1, 1, 1, 1, lmax]
    bs = convertdic(bs)
    df = DataFrame(bs[0], index=['countiter', 'countevals', 'result', 'time(s)'])
    df = df.stack().unstack(0)
    df.columns.name = 'values'
    df.index.name = 'lambda'
    df.plot(kind='bar', stacked=False, colormap='jet', alpha=0.9, title=title, figsize=(12, 8))
    df.plot(kind='area', stacked=False, colormap='jet', alpha=0.5, title=title, figsize=(12, 8), xticks=np.arange(5, lmax, 10))

def drawSigmaLines(t, xl):
    sigmas = [[row[-3], row[-1]] for row in result_list if row[-2] == t]
    ss = list(zip(*sigmas))[1]  # list() needed in Python 3, where zip is lazy
    M = max(map(len, ss))
    for s in sigmas:
        for i in range(M - len(s[1])):
            s[1].append(None)
    df1 = DataFrame({s[0]: s[1] for s in sigmas})
    df1.columns.name = 'sigma'
    df1.index.name = 'lambda'
    df1.plot(title=t, fontsize=10, linewidth=2, alpha=0.8, colormap='rainbow', xlim=(0, xl))
# bukin function
draw('bukin', -1)
# rosen function
draw('rosen', -1)
# griewank function
draw('griewank', -1)
```
### Phase 2 tests
* Functions: [sphere, cigar, elli]
* Minima: [0, 0, 0]
* Dimensions: [208]
* $\lambda$:[5,10,14,18,20,22,26,60,100,140,180,220]
```
with open("data1.tl", 'rb') as f:  # binary mode needed for pickle in Python 3
    result_list = pickle.load(f)
# sphere function
draw('sphere',-2)
drawSigmaLines('sphere',300)
# cigar function
draw('cigar',-2)
drawSigmaLines('cigar',300)
# elli function
draw('elli',-2)
drawSigmaLines('elli',300)
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_BayesianDecisions/W3D1_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Bonus Tutorial : Fitting to data
**Week 3, Day 1: Bayesian Decisions**
**By Neuromatch Academy**
__Content creators:__ Vincent Valton, Konrad Kording
__Content reviewers:__ Matt Krause, Jesse Livezey, Karolina Stosio, Saeed Salehi, Michael Waskom
##**Note: This is bonus material, included from NMA 2020. It has not been substantially revised for 2021.**
This means that the notation and standards are slightly different and some of the references to other days in NMA are outdated. We include it here because it covers fitting Bayesian models to data, which may be of interest to many students.
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial objectives
In the first two tutorials, we learned about Bayesian models and decisions more intuitively, using demos. In this notebook, we will dive into using math and code to fit Bayesian models to data.
We'll have a look at computing all the necessary steps to perform model inversion (i.e., estimating the model parameters, such as $p_{common}$, that could have generated data like a participant's). We will describe all the steps of the generative model first, and in the last exercise we will use these steps to estimate the parameter $p_{common}$ of a single participant using simulated data.
The generative model will be a Bayesian model we saw in Tutorial 2: a mixture of Gaussian prior and a Gaussian likelihood.
Steps:
* First, we'll create the prior, likelihood, posterior, etc in a form that will make it easier for us to visualise what is being computed and estimated at each step of the generative model:
1. Creating a mixture of Gaussian prior for multiple possible stimulus inputs
2. Generating the likelihood for multiple possible stimulus inputs
3. Estimating our posterior as a function of the stimulus input
4. Estimating a participant response given the posterior
* Next, we'll perform the model inversion/fitting:
 5. Create a distribution over the input as a function of possible inputs
6. Marginalization
7. Generate some data using the generative model provided
 8. Perform model inversion (model fitting) using the generated data and see if you recover the original parameters.
---
# Setup
Please execute the cell below to initialize the notebook environment
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from scipy.optimize import minimize
#@title Figure Settings
import ipywidgets as widgets
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle")
# @title Helper Functions
def my_gaussian(x_points, mu, sigma):
"""
Returns a Gaussian estimated at points `x_points`, with parameters: `mu` and `sigma`
Args :
x_points (numpy arrays of floats)- points at which the gaussian is evaluated
mu (scalar) - mean of the Gaussian
sigma (scalar) - std of the gaussian
Returns:
Gaussian evaluated at `x`
"""
p = np.exp(-(x_points-mu)**2/(2*sigma**2))
return p / sum(p)
def moments_myfunc(x_points, function):
"""
DO NOT EDIT THIS FUNCTION !!!
Returns the mean, median and mode of an arbitrary function
Args :
x_points (numpy array of floats) - x-axis values
function (numpy array of floats) - y-axis values of the function evaluated at `x_points`
Returns:
(tuple of 3 scalars): mean, median, mode
"""
# Calc mode of arbitrary function
mode = x_points[np.argmax(function)]
# Calc mean of arbitrary function
mean = np.sum(x_points * function)
# Calc median of arbitrary function
cdf_function = np.zeros_like(x_points)
accumulator = 0
for i in np.arange(x_points.shape[0]):
accumulator = accumulator + function[i]
cdf_function[i] = accumulator
idx = np.argmin(np.abs(cdf_function - 0.5))
median = x_points[idx]
return mean, median, mode
def plot_myarray(array, xlabel, ylabel, title):
""" Plot an array with labels.
Args :
array (numpy array of floats)
xlabel (string) - label of x-axis
ylabel (string) - label of y-axis
title (string) - title of plot
Returns:
None
"""
fig = plt.figure()
ax = fig.add_subplot(111)
colormap = ax.imshow(array, extent=[-10, 10, 8, -8])
cbar = plt.colorbar(colormap, ax=ax)
cbar.set_label('probability')
ax.invert_yaxis()
ax.set_xlabel(xlabel)
ax.set_title(title)
ax.set_ylabel(ylabel)
ax.set_aspect('auto')
return None
def plot_my_bayes_model(model) -> None:
"""Pretty-print a simple Bayes Model (ex 7), defined as a function:
Args:
- model: function that takes a single parameter value and returns
the negative log-likelihood of the model, given that parameter
Returns:
None, draws plot
"""
x = np.arange(-10,10,0.07)
# Plot neg-LogLikelihood for different values of alpha
alpha_tries = np.arange(0.01, 0.3, 0.01)
nll = np.zeros_like(alpha_tries)
for i_try in np.arange(alpha_tries.shape[0]):
nll[i_try] = model(np.array([alpha_tries[i_try]]))
plt.figure()
plt.plot(alpha_tries, nll)
plt.xlabel('p_independent value')
plt.ylabel('negative log-likelihood')
# Mark minima
ix = np.argmin(nll)
plt.scatter(alpha_tries[ix], nll[ix], c='r', s=144)
#plt.axvline(alpha_tries[np.argmin(nll)])
plt.title('Sample Output')
plt.show()
return None
def plot_simulated_behavior(true_stim, behaviour):
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(1,1,1)
ax.set_facecolor('xkcd:light grey')
plt.plot(true_stim, true_stim - behaviour, '-k', linewidth=2, label='data')
plt.axvline(0, ls='dashed', color='grey')
plt.axhline(0, ls='dashed', color='grey')
plt.legend()
plt.xlabel('Position of true visual stimulus (cm)')
plt.ylabel('Participant deviation from true stimulus (cm)')
plt.title('Participant behavior')
plt.show()
return None
```
---
# Introduction
```
# @title Video 1: Intro
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV13g4y1i7je", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YSKDhnbjKmA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```

Here is a graphical representation of the generative model:
1. We present a stimulus $x$ to participants.
2. The brain encodes this true stimulus $x$ noisily (this is the brain's representation of the true visual stimulus): $p(\tilde x|x)$.
3. The brain then combines this encoded stimulus (the likelihood: $p(\tilde x|x)$) with prior information (the prior: $p(x)$) to form the brain's estimated position of the true visual stimulus (the posterior: $p(x|\tilde x)$).
4. This estimated stimulus position, $p(x|\tilde x)$, is then used to make a response $\hat x$, which is the participant's noisy estimate of the stimulus position (the participant's percept).
Typically the response $\hat x$ also includes some motor noise (noise due to the hand/arm movement not being 100% accurate), but we'll ignore it in this tutorial and assume there is no motor noise.
We will use the same experimental setup as in [tutorial 2](https://colab.research.google.com/drive/15pbgrfGjSKbUQoX51RdcNe3UXb4R5RRx#scrollTo=tF5caxVGYURh) but with slightly different probabilities. This time, participants are told that they need to estimate the sound location of a puppet that is hidden behind a curtain. The participants are told to use auditory information and are also informed that the sound could come from 2 possible causes: a common cause (95% of the time it comes from the puppet hidden behind the curtain at position 0), or an independent cause (5% of the time the sound comes from loud-speakers at more distant locations).
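The four steps above can be sketched for a single trial. This is a minimal, self-contained illustration of the generative model, not the tutorial's exercise code; the grid spacing, the prior widths (0.5 and 10), and treating the encoded stimulus as noiseless are simplifying choices made here for brevity:

```python
import numpy as np

x = np.arange(-10, 10, 0.1)  # hypothesized stimulus positions

def gaussian(grid, mu, sigma):
    p = np.exp(-(grid - mu) ** 2 / (2 * sigma ** 2))
    return p / p.sum()

def posterior_mean_estimate(x_tilde, p_independent=0.05):
    # mixture-of-Gaussians prior: narrow "common" + wide "independent" component
    prior = (1 - p_independent) * gaussian(x, 0, 0.5) + p_independent * gaussian(x, 0, 10)
    prior /= prior.sum()
    likelihood = gaussian(x, x_tilde, 1.0)   # the brain's (noisy) encoding of the stimulus
    posterior = prior * likelihood           # Bayes rule, up to normalization
    posterior /= posterior.sum()
    return np.sum(x * posterior)             # posterior mean = response x_hat

x_hat = posterior_mean_estimate(2.5)
```

With the strong common prior around 0, the estimate `x_hat` is pulled well below the presented stimulus at 2.5, which is exactly the "central bias" the later sections quantify.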
---
# Section 1: Likelihood array
First, we want to create a likelihood, but for the sake of visualization (and to consider all possible brain encodings) we will create multiple likelihoods $f(x)=p(\tilde x|x)$ (one for each potential encoded stimulus: $\tilde x$). We will then be able to visualize the likelihood as a function of hypothesized true stimulus positions: $x$ on the x-axis and encoded position $\tilde x$ on the y-axis.
Using the equation for the `my_gaussian` and the values in `hypothetical_stim`:
* Create a Gaussian likelihood with mean varying from `hypothetical_stim`, keeping $\sigma_{likelihood}$ constant at 1.
* Each likelihood will have a different mean and will thus occupy a different row of your 2D array, such that you end up with a likelihood array made up of 1,000 row-Gaussians with different means. (_Hint_: `np.tile` won't work here. You may need a for-loop).
* Plot the array using the function `plot_myarray()` already pre-written and commented-out in your script
### Exercise 1. Implement the auditory likelihood as a function of true stimulus position
```
x = np.arange(-10, 10, 0.1)
hypothetical_stim = np.linspace(-8, 8, 1000)
def compute_likelihood_array(x_points, stim_array, sigma=1.):
# initializing likelihood_array
likelihood_array = np.zeros((len(stim_array), len(x_points)))
# looping over stimulus array
for i in range(len(stim_array)):
########################################################################
## Insert your code here to:
## - Generate a likelihood array using `my_gaussian` function,
## with std=1, and varying the mean using `stim_array` values.
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
########################################################################
likelihood_array[i, :] = ...
return likelihood_array
# Uncomment following lines to test your code
# likelihood_array = compute_likelihood_array(x, hypothetical_stim)
# plot_myarray(likelihood_array,
# '$x$ : Potential true stimulus $x$',
# 'Possible brain encoding $\~x$',
# 'Likelihood as a function of $\~x$ : $p(\~x | x)$')
# to_remove solution
x = np.arange(-10, 10, 0.1)
hypothetical_stim = np.linspace(-8, 8, 1000)
def compute_likelihood_array(x_points, stim_array, sigma=1.):
# initializing likelihood_array
likelihood_array = np.zeros((len(stim_array), len(x_points)))
# looping over stimulus array
for i in range(len(stim_array)):
likelihood_array[i, :] = my_gaussian(x_points, stim_array[i], sigma)
return likelihood_array
likelihood_array = compute_likelihood_array(x, hypothetical_stim)
with plt.xkcd():
plot_myarray(likelihood_array,
'$x$ : Potential true stimulus $x$',
'Possible brain encoding $\~x$',
'Likelihood as a function of $\~x$ : $p(\~x | x)$')
```
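Since each row is the same Gaussian with a shifted mean, the for-loop can also be replaced by a single broadcast. This is an equivalent sketch (a side note, not part of the exercise) using the same grids as above:

```python
import numpy as np

x = np.arange(-10, 10, 0.1)                 # 200 hypothesized positions
hypothetical_stim = np.linspace(-8, 8, 1000)
sigma = 1.0

# (1000, 1) minus (1, 200) broadcasts to (1000, 200): one row-Gaussian per encoding
diffs = x[None, :] - hypothetical_stim[:, None]
likelihood_array = np.exp(-diffs ** 2 / (2 * sigma ** 2))
likelihood_array /= likelihood_array.sum(axis=1, keepdims=True)  # normalize each row
```

The result matches the loop-based `compute_likelihood_array` row for row, and avoids 1,000 separate function calls.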
---
# Section 2: Causal mixture of Gaussian prior
```
# @title Video 2: Prior array
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1WA411e7gM", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="F0IYpUicXu4", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
As in Tutorial 2, we want to create a prior that describes the participants' prior knowledge that 95% of the time sounds come from a common position around the puppet, while the remaining 5% of the time they arise from another, independent position. We will embody this information in a prior using a mixture of Gaussians. For visualization purposes, we will create a prior with the same shape (form) as the likelihood array we created in the previous exercise. That is, we want to create a mixture-of-Gaussians prior as a function of the brain-encoded stimulus $\tilde x$. Since the prior does not change as a function of $\tilde x$, it will be identical for each row of the 2D prior array.
Using the equation for the Gaussian `my_gaussian`:
* Generate a Gaussian $Common$ with mean 0 and standard deviation 0.5
* Generate another Gaussian $Independent$ with mean 0 and standard deviation 10
* Combine the two Gaussians (Common + Independent) to make a new prior by mixing the two Gaussians with mixing parameter $p_{independent}$ = 0.05. Make it such that the peakier Gaussian has 95% of the weight (don't forget to normalize afterwards)
* This will be the first row of your prior 2D array
* Now repeat this for varying brain encodings $\tilde x$. Since the prior does not depend on $\tilde x$, you can simply repeat that row prior (hint: use `np.tile`) to make an array of 1,000 (i.e. `hypothetical_stim.shape[0]`) row-priors.
* Plot the matrix using the function `plot_myarray()` already pre-written and commented-out in your script
### Exercise 2: Implement the prior array
```
x = np.arange(-10, 10, 0.1)
def calculate_prior_array(x_points, stim_array, p_indep,
prior_mean_common=.0, prior_sigma_common=.5,
prior_mean_indep=.0, prior_sigma_indep=10):
"""
'common' stands for common
'indep' stands for independent
"""
prior_common = my_gaussian(x_points, prior_mean_common, prior_sigma_common)
prior_indep = my_gaussian(x_points, prior_mean_indep, prior_sigma_indep)
############################################################################
## Insert your code here to:
## - Create a mixture of gaussian priors from 'prior_common'
## and 'prior_indep' with mixing parameter 'p_indep'
## - normalize
## - repeat the prior array and reshape it to make a 2D array
## of 1000 rows of priors (Hint: use np.tile() and np.reshape())
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
############################################################################
prior_mixed = ...
prior_mixed /= ... # normalize
prior_array = np.tile(...).reshape(...)
return prior_array
p_independent=.05
# Uncomment following lines, once the task is complete.
# prior_array = calculate_prior_array(x, hypothetical_stim, p_independent)
# plot_myarray(prior_array,
# 'Hypothesized position $x$', 'Brain encoded position $\~x$',
# 'Prior as a fcn of $\~x$ : $p(x|\~x)$')
# to_remove solution
x = np.arange(-10, 10, 0.1)
def calculate_prior_array(x_points, stim_array, p_indep,
prior_mean_common=.0, prior_sigma_common=.5,
prior_mean_indep=.0, prior_sigma_indep=10):
"""
'common' stands for common
'indep' stands for independent
"""
prior_common = my_gaussian(x_points, prior_mean_common, prior_sigma_common)
prior_indep = my_gaussian(x_points, prior_mean_indep, prior_sigma_indep)
prior_mixed = (1 - p_indep) * prior_common + (p_indep * prior_indep)
prior_mixed /= np.sum(prior_mixed) # normalize
prior_array = np.tile(prior_mixed, len(stim_array)).reshape(len(stim_array), -1)
return prior_array
p_independent=.05
prior_array = calculate_prior_array(x, hypothetical_stim, p_independent)
with plt.xkcd():
plot_myarray(prior_array,
'Hypothesized position $x$', 'Brain encoded position $\~x$',
'Prior as a fcn of $\~x$ : $p(x|\~x)$')
```
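Since every row of the prior array is identical, `np.broadcast_to` offers a copy-free alternative to the `np.tile(...).reshape(...)` used in the solution. This is an optional side note, not required by the exercise; the small vector here is a made-up placeholder for `prior_mixed`:

```python
import numpy as np

prior_mixed = np.array([0.10, 0.70, 0.20])  # placeholder normalized prior row
n_rows = 4

# read-only broadcast view: np.tile copies the data n_rows times, broadcast_to does not
prior_array = np.broadcast_to(prior_mixed, (n_rows, prior_mixed.size))
```

The view is read-only, so this trick only applies when, as here, the repeated rows are never modified in place.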
---
# Section 3: Bayes rule and Posterior array
```
# @title Video 3: Posterior array
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV18K411H7Tc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="HpOzXZUKFJc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
We now want to calculate the posterior using *Bayes Rule*. Since we have already created a likelihood and a prior for each brain-encoded position $\tilde x$, all we need to do is multiply them row-wise. That is, each row of the posterior array will be the posterior resulting from the multiplication of the prior and likelihood of the equivalent row.
Mathematically:
\begin{eqnarray}
Posterior\left[i, :\right] \propto Likelihood\left[i, :\right] \odot Prior\left[i, :\right]
\end{eqnarray}
where $\odot$ represents the [Hadamard Product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (i.e., elementwise multiplication) of the corresponding prior and likelihood row vectors `i` from each matrix.
Follow these steps to build the posterior as a function of the brain encoded stimulus $\tilde x$:
* For each row of the prior and likelihood (i.e. each possible brain encoding $\tilde x$), fill in the posterior matrix so that every row of the posterior array represents the posterior density for a different brain encoding $\tilde x$.
* Plot the array using the function `plot_myarray()` already pre-written and commented-out in your script
Optional:
* Do you need to operate on one element--or even one row--at a time? NumPy operations can often process an entire matrix in a single "vectorized" operation. This approach is often much faster and much easier to read than an element-by-element calculation. Try to write a vectorized version that calculates the posterior without using any for-loops. _Hint_: look at `np.sum` and its keyword arguments.
### Exercise 3: Calculate the posterior as a function of the hypothetical stimulus x
```
def calculate_posterior_array(prior_array, likelihood_array):
############################################################################
## Insert your code here to:
## - calculate the 'posterior_array' from the given
## 'prior_array', 'likelihood_array'
## - normalize
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
############################################################################
posterior_array = ...
posterior_array /= ... # normalize each row separately
return posterior_array
# Uncomment following lines, once the task is complete.
# posterior_array = calculate_posterior_array(prior_array, likelihood_array)
# plot_myarray(posterior_array,
# 'Hypothesized Position $x$',
# 'Brain encoded Stimulus $\~x$',
# 'Posterior as a fcn of $\~x$ : $p(x | \~x)$')
# to_remove solution
def calculate_posterior_array(prior_array, likelihood_array):
posterior_array = prior_array * likelihood_array
posterior_array /= posterior_array.sum(axis=1, keepdims=True) # normalize each row separately
return posterior_array
posterior_array = calculate_posterior_array(prior_array, likelihood_array)
with plt.xkcd():
plot_myarray(posterior_array,
'Hypothesized Position $x$',
'Brain encoded Stimulus $\~x$',
'Posterior as a fcn of $\~x$ : $p(x | \~x)$')
```
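The `keepdims=True` in the solution's row-wise normalization is what makes the division broadcast correctly. A tiny sketch of why it matters (toy numbers, not the exercise arrays):

```python
import numpy as np

a = np.array([[1., 3.],
              [2., 2.]])

row_sums = a.sum(axis=1, keepdims=True)  # shape (2, 1): broadcasts row-wise against (2, 2)
normalized = a / row_sums                # each row now sums to 1

# Without keepdims, a.sum(axis=1) has shape (2,); for a non-square array the
# division would raise, and for a square one it would silently normalize columns.
```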
---
# Section 4: Estimating the position $\hat x$
```
# @title Video 4: Binary decision matrix
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1sZ4y1u74e", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="gy3GmlssHgQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Now that we have a posterior distribution for each possible brain encoding $\tilde x$, representing the brain's estimated stimulus position $p(x|\tilde x)$, we want to make an estimate (response) of the sound location, $\hat x$, using the posterior distribution. This represents the estimate the participant would make if their brain encoding (which is unobservable to us as experimentalists) took on each possible value.
This effectively encodes the *decision* that a participant would make for a given brain encoding $\tilde x$. In this exercise, we make the assumption that participants take the mean of the posterior (the decision rule) as their response estimate for the sound location (use the provided function `moments_myfunc()` to calculate the mean of the posterior).
Using this knowledge, we will now represent $\hat x$ as a function of the encoded stimulus $\tilde x$. This will result in a 2D binary decision array. To do so, we will scan the posterior matrix (i.e. row-wise), and set the array cell value to 1 at the mean of the row-wise posterior.
**Suggestions**
* For each brain encoding $\tilde x$ (row of the posterior array), calculate the mean of the posterior, and set the corresponding cell of the binary decision array to 1. (e.g., if the mean of the posterior is at position 0, then set the cell with x_column == 0 to 1).
* Plot the matrix using the function `plot_myarray()` already pre-written and commented-out in your script
### Exercise 4: Calculate the estimated response as a function of the hypothetical stimulus x
```
def calculate_binary_decision_array(x_points, posterior_array):
binary_decision_array = np.zeros_like(posterior_array)
for i in range(len(posterior_array)):
########################################################################
## Insert your code here to:
## - For each hypothetical stimulus x (row of posterior),
## calculate the mean of the posterior using the povided function
## `moments_myfunc()`, and set the corresponding cell of the
## Binary Decision array to 1.
## Hint: you can run 'help(moments_myfunc)' to see the docstring
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
########################################################################
# calculate mean of posterior using 'moments_myfunc'
mean, _, _ = ...
# find the postion of mean in x_points (closest position)
idx = ...
binary_decision_array[i, idx] = 1
return binary_decision_array
# Uncomment following lines, once the task is complete.
# binary_decision_array = calculate_binary_decision_array(x, posterior_array)
# plot_myarray(binary_decision_array,
# 'Chosen position $\hat x$', 'Brain-encoded Stimulus $\~ x$',
# 'Sample Binary Decision Array')
# to_remove solution
def calculate_binary_decision_array(x_points, posterior_array):
binary_decision_array = np.zeros_like(posterior_array)
for i in range(len(posterior_array)):
# calculate mean of posterior using 'moments_myfunc'
mean, _, _ = moments_myfunc(x_points, posterior_array[i])
# find the postion of mean in x_points (closest position)
idx = np.argmin(np.abs(x_points - mean))
binary_decision_array[i, idx] = 1
return binary_decision_array
binary_decision_array = calculate_binary_decision_array(x, posterior_array)
with plt.xkcd():
plot_myarray(binary_decision_array,
'Chosen position $\hat x$', 'Brain-encoded Stimulus $\~ x$',
'Sample Binary Decision Array')
```
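When the decision rule is the posterior mean, the row loop in `calculate_binary_decision_array` can also be vectorized. This is an equivalent sketch; the seeded random posteriors are placeholders standing in for the real `posterior_array`:

```python
import numpy as np

rng = np.random.default_rng(0)
x_points = np.arange(-10, 10, 0.1)

# placeholder posteriors: 5 rows, each normalized to sum to 1
posterior_array = rng.random((5, x_points.size))
posterior_array /= posterior_array.sum(axis=1, keepdims=True)

means = posterior_array @ x_points                               # row-wise posterior means
idx = np.abs(x_points[None, :] - means[:, None]).argmin(axis=1)  # nearest grid point per row
binary_decision_array = np.zeros_like(posterior_array)
binary_decision_array[np.arange(len(idx)), idx] = 1              # one 1 per row
```

The matrix-vector product `posterior_array @ x_points` computes all row means at once, replacing the per-row call to `moments_myfunc`.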
---
# Section 5: Probabilities of encoded stimuli
```
# @title Video 5: Input array
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1pT4y1E7wv", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="C1d1n_Si83o", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Because we as experimentalists cannot know the encoding $\tilde x$ of the stimulus $x$ (which we do know), we had to compute the binary decision array for each possible encoding.
First, however, we need to calculate how likely each possible encoding is given the true stimulus. That is, we will create a Gaussian centered on the true presented stimulus, with $\sigma = 1$, and repeat that Gaussian across all potentially encoded values $\tilde x$. In other words, we want to make a *column* Gaussian centered on the true presented stimulus, and repeat this *column* Gaussian across all hypothetical stimulus values $x$.
This effectively encodes the distribution of the brain-encoded stimulus (for the one stimulus that we as experimentalists know was presented) and enables us to link the true stimulus $x$ to potential encodings $\tilde x$.
**Suggestions**
For this exercise, we will assume the true stimulus is presented at position 2.5.
* Create a Gaussian likelihood with $\mu = 2.5$ and $\sigma = 1.0$
* Make this the first column of your array and repeat that *column* across all hypothetical stimulus locations $x$ to fill in the input array for the true presented stimulus.
* Plot the array using the function `plot_myarray()` already pre-written and commented-out in your script
### Exercise 5: Generate an input as a function of hypothetical stimulus x
```
def generate_input_array(x_points, stim_array, posterior_array,
mean=2.5, sigma=1.):
input_array = np.zeros_like(posterior_array)
########################################################################
## Insert your code here to:
## - Generate a gaussian centered on the true stimulus 2.5
## and sigma = 1. for each column
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
########################################################################
for i in range(len(x_points)):
input_array[:, i] = ...
return input_array
# Uncomment following lines, once the task is complete.
# input_array = generate_input_array(x, hypothetical_stim, posterior_array)
# plot_myarray(input_array,
# 'Hypothetical Stimulus $x$', '$\~x$',
# 'Sample Distribution over Encodings:\n $p(\~x | x = 2.5)$')
# to_remove solution
def generate_input_array(x_points, stim_array, posterior_array,
mean=2.5, sigma=1.):
input_array = np.zeros_like(posterior_array)
for i in range(len(x_points)):
input_array[:, i] = my_gaussian(stim_array, mean, sigma)
return input_array
input_array = generate_input_array(x, hypothetical_stim, posterior_array)
with plt.xkcd():
plot_myarray(input_array,
'Hypothetical Stimulus $x$', '$\~x$',
'Sample Distribution over Encodings:\n $p(\~x | x = 2.5)$')
```
---
# Section 6: Normalization and expected estimate distribution
```
# @title Video 6: Marginalization
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1qz4y1D71K", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5alwtNS4CGw", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Now that we have a true stimulus $x$ and a way to link it to potential encodings, we can calculate the distribution of encodings and, ultimately, of estimates. To integrate over all possible hypothetical values of $\tilde x$, we marginalize: we multiply the input array (the distribution over encodings given the true presented stimulus) element-wise with the binary decision array, and then sum over $\tilde x$.
Mathematically, this means that we want to compute:
\begin{eqnarray}
\text{Marginalization Array} = \text{Input Array} \odot \text{Binary Decision Array}
\end{eqnarray}
\begin{eqnarray}
\text{Marginal} = \int_{\tilde x} \text{Marginalization Array} \; d\tilde x
\end{eqnarray}
Since we are performing integration over discrete values using arrays for visualization purposes, the integration reduces to a simple sum over $\tilde x$.
**Suggestions**
* For each row of the input and binary decision arrays, calculate the element-wise product of the two and fill in the 2D marginalization array.
* Plot the result using the function `plot_myarray()` already pre-written and commented-out in your script
* Calculate and plot the marginal over `x` using the code snippet commented out in your script
- Note how the limitations of numerical integration create artifacts on your marginal
### Exercise 6: Implement the marginalization matrix
```
def my_marginalization(input_array, binary_decision_array):
############################################################################
## Insert your code here to:
## - Compute 'marginalization_array' by multiplying pointwise the Binary
## decision array over hypothetical stimuli and the Input array
## - Compute 'marginal' from the 'marginalization_array' by summing over x
## (hint: use np.sum() and only marginalize along the columns)
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
############################################################################
marginalization_array = ...
marginal = ... # note axis
marginal /= ... # normalize
return marginalization_array, marginal
# Uncomment following lines, once the task is complete.
# marginalization_array, marginal = my_marginalization(input_array, binary_decision_array)
# plot_myarray(marginalization_array, 'estimated $\hat x$', '$\~x$', 'Marginalization array: $p(\^x | \~x)$')
# plt.figure()
# plt.plot(x, marginal)
# plt.xlabel('$\^x$')
# plt.ylabel('probability')
# plt.show()
# to_remove solution
def my_marginalization(input_array, binary_decision_array):
marginalization_array = input_array * binary_decision_array
marginal = np.sum(marginalization_array, axis=0) # note axis
marginal /= marginal.sum() # normalize
return marginalization_array, marginal
marginalization_array, marginal = my_marginalization(input_array, binary_decision_array)
with plt.xkcd():
plot_myarray(marginalization_array, 'estimated $\hat x$', '$\~x$', 'Marginalization array: $p(\^x | \~x)$')
plt.figure()
plt.plot(x, marginal)
plt.xlabel('$\^x$')
plt.ylabel('probability')
plt.show()
```
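Because the input array simply repeats $p(\tilde x|x)$ across its columns, the multiply-then-sum in `my_marginalization` reduces to a matrix product. A toy illustration with made-up numbers (not the exercise arrays):

```python
import numpy as np

# toy sizes: 4 possible encodings x_tilde, 3 possible responses x_hat
p_encoding = np.array([0.1, 0.4, 0.4, 0.1])     # p(x_tilde | x): one entry per encoding
binary_decision = np.array([[1, 0, 0],
                            [0, 1, 0],
                            [0, 1, 0],
                            [0, 0, 1]], float)  # deterministic response per encoding

marginal = p_encoding @ binary_decision          # sum over x_tilde
# marginal is p(x_hat | x): each response's probability is the total
# probability of the encodings that map to it
```

Here the two middle encodings both map to the middle response, so their probabilities add up in the marginal.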
---
# Generate some data
We have seen how to calculate the posterior and marginalize to remove $\tilde x$ and get $p(\hat{x} \mid x)$. Next, we will generate some artificial data for a single participant using the `generate_data()` function provided, and mixing parameter $p_{independent} = 0.1$.
Our goal in the next exercise will be to recover that parameter. These parameter recovery experiments are a powerful method for planning and debugging Bayesian analyses--if you cannot recover the given parameters, something has gone wrong! Note that this value for $p_{independent}$ is not quite the same as our prior, which used $p_{independent} = 0.05.$ This lets us test out the complete model.
Please run the code below to generate some synthetic data. You do not need to edit anything, but check that the plot below matches what you would expect from the video.
```
#@title
#@markdown #### Run the 'generate_data' function (this cell)
def generate_data(x_stim, p_independent):
"""
DO NOT EDIT THIS FUNCTION !!!
Returns generated data using the mixture of Gaussian prior with mixture
parameter `p_independent`
Args :
x_stim (numpy array of floats) - x values at which stimuli are presented
p_independent (scalar) - mixture component for the Mixture of Gaussian prior
Returns:
(numpy array of floats): x_hat response of participant for each stimulus
"""
x = np.arange(-10,10,0.1)
x_hat = np.zeros_like(x_stim)
prior_mean = 0
prior_sigma1 = .5
prior_sigma2 = 3
prior1 = my_gaussian(x, prior_mean, prior_sigma1)
prior2 = my_gaussian(x, prior_mean, prior_sigma2)
prior_combined = (1-p_independent) * prior1 + (p_independent * prior2)
prior_combined = prior_combined / np.sum(prior_combined)
for i_stim in np.arange(x_stim.shape[0]):
likelihood_mean = x_stim[i_stim]
likelihood_sigma = 1
likelihood = my_gaussian(x, likelihood_mean, likelihood_sigma)
likelihood = likelihood / np.sum(likelihood)
posterior = np.multiply(prior_combined, likelihood)
posterior = posterior / np.sum(posterior)
# Assumes participant takes posterior mean as 'action'
x_hat[i_stim] = np.sum(x * posterior)
return x_hat
# Generate data for a single participant
true_stim = np.array([-8, -4, -3, -2.5, -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2,
2.5, 3, 4, 8])
behaviour = generate_data(true_stim, 0.10)
plot_simulated_behavior(true_stim, behaviour)
```
---
# Section 7: Model fitting
```
# @title Video 7: Log likelihood
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Yf4y1R7ST", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="jbYauFpyZhs", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Now that we have generated some data, we will attempt to recover the parameter $p_{independent}$ that was used to generate it.
We have provided you with an incomplete function called `my_Bayes_model_mse()` that needs to be completed to perform the same computations you performed in the previous exercises, but over all of the participant's trials rather than a single trial.
The likelihood has already been constructed; since it depends only on the hypothetical stimuli, it will not change. However, we will have to rebuild the prior matrix, since it depends on $p_{independent}$, and therefore recompute the posterior, the input array, and the marginal in order to get $p(\hat{x} \mid x)$.
Using $p(\hat{x} \mid x)$, we will then compute the negative log-likelihood for each trial and find the value of $p_{independent}$ that minimizes the negative log-likelihood (i.e., maximizes the log-likelihood; see the model fitting tutorial from W1D3 for a refresher).
In this experiment, we assume that trials are independent from one another. This is a common assumption--and it's often even true! It allows us to define negative log-likelihood as:
\begin{eqnarray}
-LL = - \sum_i \log p(\hat{x}_i \mid x_i)
\end{eqnarray}
where $\hat{x}_i$ is the participant's response for trial $i$, with presented stimulus $x_i$
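Because the trials are independent, the per-trial log probabilities simply add. A minimal numeric sketch (the probabilities below are made up for illustration, not taken from this experiment):

```python
import numpy as np

# Hypothetical per-trial likelihoods p(x_hat_i | x_i) for three trials
trial_probs = np.array([0.20, 0.05, 0.10])

# Negative log-likelihood: -LL = -sum_i log p(x_hat_i | x_i)
neg_ll = -np.sum(np.log(trial_probs))
print(neg_ll)  # ≈ 6.908
```

In the exercise below, each of these per-trial probabilities comes from evaluating the marginal at the participant's response.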
* Complete the function `my_Bayes_model_mse`; we've already pre-completed it to give you the prior, posterior, and input arrays on each trial
* Compute the marginalization array as well as the marginal on each trial
* Compute the negative log likelihood using the marginal and the participant's response
* Use the code snippet commented out in your script to loop over possible values of $p_{independent}$
### Exercise 7: Fitting a model to generated data
```
def my_Bayes_model_mse(params):
"""
Function fits the Bayesian model from Tutorial 4
Args :
params (list of positive floats): parameters used by the model
(params[0] = posterior scaling)
Returns :
(scalar) negative log-likelihood :sum of log probabilities
"""
# Create the prior array
p_independent=params[0]
prior_array = calculate_prior_array(x,
hypothetical_stim,
p_independent,
prior_sigma_indep= 3.)
# Create posterior array
posterior_array = calculate_posterior_array(prior_array, likelihood_array)
# Create Binary decision array
binary_decision_array = calculate_binary_decision_array(x, posterior_array)
# we will use trial_ll (trial log likelihood) to register each trial
trial_ll = np.zeros_like(true_stim)
# Loop over stimuli
for i_stim in range(len(true_stim)):
# create the input array with true_stim as mean
input_array = np.zeros_like(posterior_array)
for i in range(len(x)):
input_array[:, i] = my_gaussian(hypothetical_stim, true_stim[i_stim], 1)
input_array[:, i] = input_array[:, i] / np.sum(input_array[:, i])
# calculate the marginalizations
marginalization_array, marginal = my_marginalization(input_array,
binary_decision_array)
action = behaviour[i_stim]
idx = np.argmin(np.abs(x - action))
########################################################################
## Insert your code here to:
## - Compute the log likelihood of the participant
## remove the raise below to test your function
raise NotImplementedError("You need to complete the function!")
########################################################################
# Get the marginal likelihood corresponding to the action
marginal_nonzero = ... + np.finfo(float).eps # avoid log(0)
trial_ll[i_stim] = np.log(marginal_nonzero)
neg_ll = - trial_ll.sum()
return neg_ll
# Uncomment following lines, once the task is complete.
# plot_my_bayes_model(my_Bayes_model_mse)
# to_remove solution
def my_Bayes_model_mse(params):
"""
Function fits the Bayesian model from Tutorial 4
Args :
params (list of positive floats): parameters used by the model
(params[0] = posterior scaling)
Returns :
(scalar) negative log-likelihood :sum of log probabilities
"""
# Create the prior array
p_independent=params[0]
prior_array = calculate_prior_array(x,
hypothetical_stim,
p_independent,
prior_sigma_indep= 3.)
# Create posterior array
posterior_array = calculate_posterior_array(prior_array, likelihood_array)
# Create Binary decision array
binary_decision_array = calculate_binary_decision_array(x, posterior_array)
# we will use trial_ll (trial log likelihood) to register each trial
trial_ll = np.zeros_like(true_stim)
# Loop over stimuli
for i_stim in range(len(true_stim)):
# create the input array with true_stim as mean
input_array = np.zeros_like(posterior_array)
for i in range(len(x)):
input_array[:, i] = my_gaussian(hypothetical_stim, true_stim[i_stim], 1)
input_array[:, i] = input_array[:, i] / np.sum(input_array[:, i])
# calculate the marginalizations
marginalization_array, marginal = my_marginalization(input_array,
binary_decision_array)
action = behaviour[i_stim]
idx = np.argmin(np.abs(x - action))
# Get the marginal likelihood corresponding to the action
marginal_nonzero = marginal[idx] + np.finfo(float).eps # avoid log(0)
trial_ll[i_stim] = np.log(marginal_nonzero)
neg_ll = - trial_ll.sum()
return neg_ll
with plt.xkcd():
plot_my_bayes_model(my_Bayes_model_mse)
```
# Section 8: Summary
```
# @title Video 8: Outro
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Hz411v7hJ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="F5JfqJonz20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Congratulations! You found $p_{independent}$, the parameter that describes how much weight subjects assign to the same-cause vs. independent-cause origins of a sound. In the preceding notebooks, we went through the entire Bayesian analysis pipeline:
* developing a model
* simulating data, and
* using Bayes' Rule and marginalization to recover a hidden parameter from the data
This example was simple, but the same principles can be used to analyze datasets with many hidden variables and complex priors and likelihoods. Bayes' Rule will also play a crucial role in many of the other techniques you will see later this week.
---
If you're still intrigued as to why we decided to use the mean of the posterior as a decision rule for a response $\hat{x}$, we have an extra Bonus Tutorial 4 which goes through the most common decision rules and how these rules correspond to minimizing different cost functions.
This material should help clarify the ideas from the first meeting:
```
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
# now in a dict:
data={'name':names, 'age':ages, 'girl':woman,'born In':country, 'degree':education}
#now into a DF
import pandas as pd
friends=pd.DataFrame.from_dict(data)
# seeing it:
friends
```
The result is what you expected, but you need to be sure of what data structure you have:
```
#what is it?
type(friends)
#this is good
friends.age
#what is it?
type(friends.age)
#this is good
friends['age']
#what is it?
type(friends['age'])
#this is bad
friends.iloc[['age']]
#this is bad
friends.loc[['age']]
#this is bad
friends['age','born In']
#this is good
friends[['age','born In']]
# what is it?
type(friends[['age','born In']])
#this is bad
friends.'born In'
#this is good
friends.loc[:,['age','born In']]
type(friends.loc[:,['age','born In']])
#this is bad
friends.loc[:,['age':'born In']]
#this is bad
friends.iloc[:,['age','born In']]
# this is good (but different)
friends.iloc[:,1:4]
# what is it?
type(friends.iloc[:,1:4])
# this is good
friends.iloc[:,[1,3]]
#what is it?
type(friends.iloc[:,[1,3]])
```
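A quick way to summarize the pattern above: a single label (or attribute access) returns a `Series`, while a list of labels returns a `DataFrame`. A minimal self-contained check:

```python
import pandas as pd

df = pd.DataFrame({'age': [32, 33], 'born In': ['Chile', 'Senegal']})

# Single label -> Series; list of labels -> DataFrame
print(type(df['age']).__name__)               # Series
print(type(df[['age', 'born In']]).__name__)  # DataFrame
```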
```
friends[friends.age>30]
```
Some people like coding with the filter language:
```
#
filter1=friends.age>30
friends[filter1]
friends.where(filter1)
filter1a='age>30'
friends.query(filter1a)
isinstance(friends[filter1], pd.DataFrame), \
isinstance(friends.where(filter1), pd.DataFrame), \
isinstance(friends.query(filter1a), pd.DataFrame)
```
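Note the difference in shape: boolean indexing and `query` drop the non-matching rows, while `where` keeps every row and fills the non-matching ones with NaN. A small self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ana', 'Bo', 'Cy'], 'age': [33, 28, 32]})

picked = df[df.age > 30]        # non-matching rows are dropped
masked = df.where(df.age > 30)  # same shape, NaN where the condition is False

print(len(picked), len(masked))  # 2 3
```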
When you have Boolean values (True/False) you can simplify:
```
#from:
friends[friends.girl==False]
# to...
friends[~friends.girl]
```
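The equivalence is easy to verify on a tiny frame: `~` negates the Boolean Series, so both spellings select exactly the same rows.

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ana', 'Bo'], 'girl': [True, False]})

long_form = df[df.girl == False]  # explicit comparison
short_form = df[~df.girl]         # negation of the Boolean Series

print(long_form.equals(short_form))  # True
```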
You can have two filters:
```
# this will not work
friends[~friends.girl & friends.degree=='Bach']
# this will (with parentheses)
friends[(~friends.girl) & (friends.degree=='Bach')]
```
Other times you want a value once a filter has been applied:
```
# youngest male:
friends[(~friends.girl) & (friends.age.min())] # this is wrong!
friends[(~friends.girl) & (friends.age==friends.age.min())] # this is wrong too!
friends.age.min()
```
You got an empty answer because there is no man aged 27.
```
# this is correct
friends[~friends.girl].age.min()
```
Once you know the right age, you have to put it in the right place:
```
friends[friends.age==friends[~friends.girl].age.min()]
# or
friends.where(friends.age==friends[~friends.girl].age.min())
# or
friends.where(friends.age==friends[~friends.girl].age.min()).dropna()
```
The problem is that `friends` is never actually subset, so `age.min()` still returns the age of the youngest woman:
```
# bad:
friends.where(~friends.girl).where(friends.age==friends.age.min())
```
That's the advantage of **query**:
```
friends.query('~girl').query('age==age.min()')
#but
students=friends.copy()
students.where(~students.girl,inplace=True) #real subset
students.where(students.age==students.age.min())
```
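To recap the pitfall in one self-contained sketch (made-up data): computing `age.min()` on the full frame mixes in the girls, so either subset first or make the minimum itself refer to the male subset.

```python
import pandas as pd

df = pd.DataFrame({'name': ['Tom', 'Pam', 'Al'],
                   'girl': [False, True, False],
                   'age': [32, 27, 30]})

# Wrong: age.min() is 27 (the youngest girl), so no boy matches
wrong = df.where(~df.girl).where(df.age == df.age.min()).dropna()

# Right: take the minimum over the male subset first
right = df[df.age == df[~df.girl].age.min()]

print(len(wrong), right['name'].tolist())  # 0 ['Al']
```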
Let's vary the data a little:
```
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,28,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
# now in a dict:
data={'name':names, 'age':ages, 'girl':woman,'born In':country, 'degree':education}
#now into a DF
import pandas as pd
friends2=pd.DataFrame.from_dict(data)
# seeing it:
friends2
```
Now there is a girl with the same age as the youngest boy, so:
```
friends2.where(friends2.age==friends2[~friends2.girl].age.min()).dropna()
```
We need to add the gender condition explicitly:
```
# bad implementation:
friends2.where(friends2.age==friends2[~friends2.girl].age.min() & friends2.girl==False).dropna()
# bad implementation:
friends2.where(friends2.age==friends2[~friends2.girl].age.min() & ~friends2.girl).dropna()
# just parentheses to make it work!
friends2.where((friends2.age==friends2[~friends2.girl].age.min()) & (~friends2.girl)).dropna()
```
This one still works!
```
friends2.query('~girl').query('age==age.min()')
students2=friends2.copy()
students2.where(~students2.girl,inplace=True) #real subset
students2.where(students2.age==students2.age.min()).dropna()
```
# Graphing network packets
This notebook currently relies on HoloViews 1.9 or above. Run `conda install -c ioam/label/dev holoviews` to install it.
## Preparing data
The data source comes from a publicly available network forensics repository: http://www.netresec.com/?page=PcapFiles. The selected file is https://download.netresec.com/pcap/maccdc-2012/maccdc2012_00000.pcap.gz.
```
tcpdump -qns 0 -r maccdc2012_00000.pcap | grep tcp > maccdc2012_00000.txt
```
For example, here is a snapshot of the resulting output:
```
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.27.100.37877 > 192.168.204.45.41936: tcp 0
09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380
```
Given the directional nature of network traffic and the numerous ports per node, we will simplify the graph by treating traffic between nodes as undirected and ignoring the distinction between ports. Each graph edge will have a weight equal to the total number of bytes exchanged between its two nodes in either direction.
```
python pcap_to_parquet.py maccdc2012_00000.txt
```
The resulting output will be two Parquet dataframes, `maccdc2012_nodes.parq` and `maccdc2012_edges.parq`.
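The preprocessing script itself is not listed here, but a rough sketch of the undirected aggregation it performs might look as follows (the regular expression and the sample lines are illustrative assumptions, not the actual `pcap_to_parquet.py` code):

```python
import re
from collections import defaultdict

lines = [
    "09:30:07.780000 IP 192.168.202.68.8080 > 192.168.24.100.1038: tcp 1380",
    "09:30:07.780000 IP 192.168.24.100.1038 > 192.168.202.68.8080: tcp 0",
]

# Strip the port (the last dot-separated field) to get the node IP,
# then sum bytes over the unordered (undirected) node pair.
pattern = re.compile(r'IP (\S+)\.\d+ > (\S+)\.\d+: (\w+) (\d+)')
weights = defaultdict(int)
for line in lines:
    m = pattern.search(line)
    if m:
        src, dst, proto, nbytes = m.groups()
        edge = tuple(sorted((src, dst)))  # undirected: order does not matter
        weights[edge] += int(nbytes)

print(dict(weights))  # {('192.168.202.68', '192.168.24.100'): 1380}
```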
## Loading data
```
import holoviews as hv
import networkx as nx
import dask.dataframe as dd
from holoviews.operation.datashader import (
datashade, dynspread, directly_connect_edges, bundle_graph, stack
)
from holoviews.element.graphs import layout_nodes
from datashader.layout import random_layout
from colorcet import fire
hv.extension('bokeh')
%opts RGB Graph Nodes [bgcolor='black' width=800 height=800 xaxis=None yaxis=None]
edges_df = dd.read_parquet('../data/maccdc2012_full_edges.parq').compute()
edges_df = edges_df.reset_index(drop=True)
graph = hv.Graph(edges_df)
len(edges_df)
```
## Edge bundling & layouts
Datashader and HoloViews provide support for a number of different graph layouts, including circular, force-atlas and random layouts. Since large graphs with thousands of edges can become quite messy when plotted, datashader also provides functionality to bundle the edges.
#### Circular layout
By default the HoloViews Graph object lays out nodes using a circular layout. Once we have declared the ``Graph`` object we can simply apply the ``bundle_graph`` operation. We also overlay the datashaded graph with the nodes, letting us identify each node by hovering.
```
%%opts Nodes (size=5)
circular = bundle_graph(graph)
pad = dict(x=(-1.2, 1.2), y=(-1.2, 1.2))
datashade(circular, width=800, height=800) * circular.nodes.redim.range(**pad)
```
#### Force Atlas 2 layout
For other graph layouts you can use the ``layout_nodes`` operation, supplying a datashader or NetworkX layout function. Here we will use the ``nx.spring_layout`` function, based on the [Fruchterman-Reingold](https://en.wikipedia.org/wiki/Force-directed_graph_drawing) algorithm. Instead of bundling the edges we may also use the ``directly_connect_edges`` function:
```
%%opts Nodes (size=5)
forceatlas = directly_connect_edges(layout_nodes(graph, layout=nx.spring_layout))
pad = dict(x=(-.5, 1.3), y=(-.5, 1.3))
datashade(forceatlas, width=800, height=800) * forceatlas.nodes.redim.range(**pad)
```
#### Random layout
Datashader also provides a number of layout functions in case you don't want to depend on NetworkX:
```
%%opts Nodes (size=5)
random = bundle_graph(layout_nodes(graph, layout=random_layout))
pad = dict(x=(-.05, 1.05), y=(-0.05, 1.05))
datashade(random, width=800, height=800) * random.nodes.redim.range(**pad)
```
## Showing nodes with active traffic
To select just nodes with active traffic we will split the dataframe of bundled paths and then apply ``select`` on the new Graph to select just those edges with a weight of more than 10,000. By overlaying the sub-graph of high traffic edges we can take advantage of the interactive hover and tap features that bokeh provides while still revealing the full datashaded graph in the background.
```
%%opts Graph (edge_line_color='white' edge_hover_line_color='blue')
pad = dict(x=(-1.2, 1.2), y=(-1.2, 1.2))
datashade(circular, width=800, height=800) * circular.select(weight=(10000, None)).redim.range(**pad)
```
## Highlight TCP and UDP traffic
Using the same selection features we can highlight TCP and UDP connections separately, again overlaying them on top of the full datashaded graph. The edges can be revealed by hovering over the highlighted nodes, and by setting an alpha level we can also reveal nodes with both TCP (blue) and UDP (red) connections in purple.
```
%%opts Graph (edge_alpha=0 edge_hover_alpha=0.5 edge_nonselection_alpha=0 node_size=8 node_alpha=0.5) [color_index='weight' inspection_policy='edges']
udp_style = dict(edge_hover_line_color='red', node_hover_size=20, node_fill_color='red', edge_selection_line_color='red')
tcp_style = dict(edge_hover_line_color='blue', node_fill_color='blue', edge_selection_line_color='blue')
udp = forceatlas.select(protocol='udp', weight=(10000, None)).opts(style=udp_style)
tcp = forceatlas.select(protocol='tcp', weight=(10000, None)).opts(style=tcp_style)
datashade(forceatlas, width=800, height=800, normalization='log', cmap=['black', 'white']) * tcp * udp
```
## Coloring by protocol
As we have already seen we can easily apply selection to the ``Graph`` objects. We can use this functionality to select by protocol, datashade the subgraph for each protocol and assign each a different color and finally stack the resulting datashaded layers:
```
from bokeh.palettes import Blues9, Reds9, Greens9
ranges = dict(x_range=(-.5, 1.6), y_range=(-.5, 1.6), width=800, height=800)
protocols = [('tcp', Blues9), ('udp', Reds9), ('icmp', Greens9)]
shaded = hv.Overlay([datashade(forceatlas.select(protocol=p), cmap=cmap, **ranges)
for p, cmap in protocols]).collate()
stack(shaded * dynspread(datashade(forceatlas.nodes, cmap=['white'], **ranges)), link_inputs=True)
```
## Selecting the highest targets
With a bit of help from pandas we can also extract the twenty most targeted nodes and overlay them on top of the datashaded plot:
```
%%opts RGB [width=800 height=800] Nodes (size=8)
target_counts = list(edges_df.groupby('target').count().sort_values('weight').iloc[-20:].index.values)
(datashade(forceatlas, cmap=fire[128:]) * datashade(forceatlas.nodes, cmap=['cyan']) *
forceatlas.nodes.select(index=target_counts))
```
# Tutorial 1: Part 2
Objectives:
- Learn how to define a simple lattice and compute the TWISS functions using MAD-X.
- Thick vs thin lens approximation TWISS comparison for a lattice with only quadrupoles.
- Tune and $\beta$-function dependence on K1.
**My first accelerator: a FODO cell**
1. Make a simple lattice FODO cell with:
- $L_{cell}$= 100 m.
- Focusing and defocusing quadrupoles, each 5 m long ($L_{quad}$).
- Put the start of the first quadrupole at the start of the sequence.
- Each quadrupole has a focal length f = 200 m (HINT: k1 x $L_{quad}$ = 1/f).
2. Define a proton beam at $E_{tot}$= 2 GeV. Activate the sequence, try to find the periodic solution and plot the $\beta$-functions. If you found $\beta_{max}$= 460 m you succeeded.
3. Using the plot you obtained, can you estimate the phase advance of the cell? Compare with the tunes obtained with the TWISS.
**Matching the FODO cell using a parametric plot**
4. Try to twiss it, powering the quadrupoles to obtain a $\Delta \mu \approx 90^\circ$ in the cell using the thin lens approximation (HINT: use the figures from Tutorial 1: Part 1). What is the actual phase advance computed by MAD-X?
**BONUS:**
5. What is the $\beta_{max}$? Compare with the thin lens approximation (using the figures from Tutorial 1: Part 1).
6. Halve the focusing strength of the quadrupoles, what is the effect of it on the $\beta_{max}$ and $\beta_{min}$ and on the $\Delta \mu$? Compare with the parametric plots.
7. Compute the maximum beam size $\sigma$ assuming a normalized emittance of 3 mrad mm and $E_{tot}$= 7 TeV.
8. Try with $E_{tot}$ = 0.7 GeV: what is the MAD-X error message?
9. Try with f= 20 m: what is the MAD-X error message?
```
#Import the needed libraries
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from cpymad.madx import Madx
```
# Launching MAD-X
```
myMad = Madx(stdout=True)
```
1. Make a simple lattice FODO cell with:
- $L_{cell}$= 100 m.
- Focusing and defocusing quadrupoles, each 5 m long ($L_{quad}$).
- Put the start of the first quadrupole at the start of the sequence.
- Each quadrupole has a focal length f = 200 m (HINT: k1 x $L_{quad}$ = 1/f).
2. Define a proton beam at $E_{tot}$= 2 GeV. Activate the sequence, try to find the periodic solution and plot the $\beta$-functions. If you found $\beta_{max}$= 460 m you succeeded.
<div>
<img src="attachment:Imagen%201-3.png" width="400"/>
</div>
```
#myMad = Madx(stdout=True)
myMad = Madx()
myString='''
! *********************************************************************
! Definition of parameters
! *********************************************************************
l_cell=100;
quadrupoleLenght=5;
f=200;
myK:=1/f/quadrupoleLenght;// m^-2
! *********************************************************************
! Definition of magnets
! *********************************************************************
QF: quadrupole, L=quadrupoleLenght, K1:=myK;
QD: quadrupole, L=quadrupoleLenght, K1:=-myK;
! *********************************************************************
! Definition of sequence
! *********************************************************************
myCell:sequence, refer=entry, L=L_CELL;
quadrupole1: QF, at=0;
marker1: marker, at=25;
quadrupole2: QD, at=50;
marker2: marker, at=75;
endsequence;
! *********************************************************************
! Definition of beam
! *********************************************************************
beam, particle=proton, energy=2;
! *********************************************************************
! Use of the sequence
! *********************************************************************
use, sequence=myCell;
! *********************************************************************
! TWISS
! *********************************************************************
title, 'My first twiss';
twiss, file=MyfirstFODO.madx;
plot, haxis=s, vaxis=betx,bety,dx,colour=100, title="test",file=MyfirstFODO;
value myK;
'''
myMad.input(myString);
########################
# Using MAD-X commands #
########################
myString='''
value, table(SUMM,Q1);
value, table(SUMM,betymax);
'''
myMad.input(myString);
#########################
# Using python commands #
#########################
# SUMM table
myDF=myMad.table.summ.dframe()
myDF
myDF["q1"]
#########################
# Using python commands #
#########################
# TWISS table
myDF=myMad.table.twiss.dframe()
myDF[['name','s','betx','bety','alfx','alfy','mux','muy']]
%matplotlib notebook
plt.rcParams['savefig.dpi'] = 80
plt.rcParams['figure.dpi'] = 80
# Plot
plt.plot(myDF['s'],myDF['betx'],'.-b',label='$\\beta_x$')
plt.plot(myDF['s'],myDF['bety'],'.-r',label='$\\beta_y$')
plt.legend()
plt.grid()
plt.xlabel('s [m]')
plt.ylabel('[m]')
plt.title('My first FODO cell')
```
**If you found $\beta_{max}$= 463.6 m you succeeded!**
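As a cross-check, the standard thin-lens FODO formulas, $\sin(\mu/2) = L_{cell}/(4f)$ and $\beta_{max} = L_{cell}(1+\sin(\mu/2))/\sin\mu$, give a value close to the thick-lens MAD-X result (this is an analytic sketch, not MAD-X output):

```python
import numpy as np

L_cell = 100.0  # m
f = 200.0       # m, focal length of each quadrupole

sin_half_mu = L_cell / (4 * f)    # thin-lens FODO relation
mu = 2 * np.arcsin(sin_half_mu)   # phase advance per cell [rad]
beta_max = L_cell * (1 + sin_half_mu) / np.sin(mu)

print(np.degrees(mu), beta_max)  # ≈ 14.4 deg, ≈ 453.6 m
```

The thin-lens estimate of about 453.6 m is within roughly 2% of the 463.6 m obtained with the thick-lens TWISS.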
# Tune from the plot
3. Using the plot you obtained, can you estimate the phase advance of the cell? Compare with the tunes obtained with the TWISS.
For the phase advance one can consider the definition
\begin{equation}
\mu=\int\frac{1}{\beta(s)}ds.
\end{equation}
Remember that the unit of phase in MAD-X is [2$\pi$].
```
# A very basic approximation considering a constant beta
# The mean beta value is
1/417.*100/2/np.pi
# Computing the integral
np.trapz(1/myDF['betx'],myDF['s'])/2/np.pi
# Correct values from MAD-X TWISS commands
myDF.iloc[-1]['mux']
# Phase Advance in units of degrees
myDF.iloc[-1]['mux']*2*180
```
# **Matching the FODO cell using a parametric plot**
4. Try to twiss it powering the quadrupoles to obtain a $\Delta \mu \approx 90^\circ$ in the cell using the thin lens approximation (using the figures from Tutorial 1: Part 1). What is the actual phase advance computed by MAD-X?
```
myMad = Madx(stdout=True)
```
<div>
<img src="attachment:test.png" width="600"/>
</div>
```
# From the plot from Tutorial 1 - Part 1, for 90 degrees phase advance k*Lcell*lq=2.8
quadrupoleLenght=5
cellLength=100
myK=2.8/cellLength/quadrupoleLenght
print(myK)
```
Compared to the previous case, since we want to increase the phase advance we need to increase k while keeping the cell and quadrupole lengths constant. We move towards the right on the parametric plot of Tutorial 1 - Part 1.
```
myString='''
! *********************************************************************
! Definition of parameters
! *********************************************************************
l_cell=100;
quadrupoleLenght=5;
myK:=2.8/l_cell/quadrupoleLenght;// m^-2
! *********************************************************************
! Definition of magnets
! *********************************************************************
QF: quadrupole, L=quadrupoleLenght, K1:=myK;
QD: quadrupole, L=quadrupoleLenght, K1:=-myK;
! *********************************************************************
! Definition of sequence
! *********************************************************************
myCell:sequence, refer=entry, L=L_CELL;
quadrupole1: QF, at=0;
marker1: marker, at=25;
quadrupole2: QD, at=50;
marker2: marker, at=75;
endsequence;
! *********************************************************************
! Definition of beam
! *********************************************************************
beam, particle=proton, energy=2;
! *********************************************************************
! Use of the sequence
! *********************************************************************
use, sequence=myCell;
! *********************************************************************
! TWISS
! *********************************************************************
title, 'My first twiss';
twiss, file=MyfirstFODO.madx;
plot, haxis=s, vaxis=betx,bety,dx,colour=100, title="test",file=MyfirstFODO;
value myK;
'''
myMad.input(myString);
myDFTable=myMad.table.twiss.dframe()
myDFTable[["name", "keyword","betx","bety","alfx","alfy", "mux", "muy" ]]
myString='''
value, table(SUMM,Q1);
value, table(SUMM,Q2);
'''
myMad.input(myString);
#Phase advance computed by MADX in rad
0.236*2*np.pi
#Phase advance computed by MADX in degrees
1.4828317324943823*180/np.pi
```
**BONUS:**
5. What is the $\beta_{max}$? Compare with the thin lens approximation (using the figures from Tutorial 1: Part 1).
```
# From the MAD-X calculation
myDFTable['betx'].max()
myDFTable['bety'].max()
#From the parametric plot Figure 1
#K1*Lcell*Lq=
0.0056*100*5
```
<div>
<img src="attachment:test2.png" width="600"/>
</div>
```
#From the parametric plot Figure 2 to be compared with 160.6036545763343 m (with MAD-X)
1.697*100
```
6. Halve the focusing strength of the quadrupoles, what is the effect of it on the $\beta_{max}$ and $\beta_{min}$ and on the $\Delta \mu$? Compare with the parametric plots.
```
myString='''
cellLength=100;
quadrupoleLenght=5;
myK=1.4/cellLength/quadrupoleLenght;// m^-2
twiss, file=firstTwiss.txt;
'''
myMad.input(myString);
myString='''
value, table(SUMM,Q1);
value, table(SUMM,Q2);
value, table(SUMM,betxmax);
value, table(SUMM,betymax);
'''
myMad.input(myString);
myDFTable=myMad.table.twiss.dframe()
myDFTable
```
If we reduce k, $\beta_{max}$ increases and therefore so does the beam size.
```
# If compared with the thin lens approximation
# From the plot
# K1*Lcell*Lq
0.0028*100*5
#Max. betax
2.042*100
# Value from MADX
bmax=np.max(myDFTable["betx"])
bmax
```
Better agreement is observed as we move to the left on the parametric plot (smaller K or smaller Lq for a fixed value of the cell length) as the thin lens approximation condition is better satisfied.
7. Compute the maximum beam size $\sigma$ assuming a normalized emittance of 3 mrad mm and $E_{tot}= 7 TeV$.
One has to remember
\begin{equation} \sigma=\sqrt{\frac{\beta \epsilon_n}{ \gamma_r}} \end{equation}
```
import numpy as np
emittance_n=3e-6 #m*rad
beta_gamma=7000/.938 # this is an approximation
np.sqrt(myDFTable['betx'].max()*emittance_n/beta_gamma)
```
# Varying the energy
8. Try with $E_{tot}$ = 0.7 GeV: what is the MAD-X error message?
```
# With this simple wrapper, in case of error the code enters an infinite loop that you have to stop manually.
myString='''
beam, particle=proton, energy=0.7;
title, 'My third twiss';
twiss;
'''
myMad.input(myString);
```
There is an error because the total energy is lower than the proton rest mass (0.938 GeV).
# With f=20 m
9. Try with f= 20 m: what is the MAD-x error message?
```
myMad = Madx(stdout=True)
myString='''
! *********************************************************************
! Definition of parameters
! *********************************************************************
l_cell=100;
quadrupoleLenght=5;
f=20;
myK:=1/f/quadrupoleLenght;// m^-2
value myK;
! *********************************************************************
! Definition of magnet
! *********************************************************************
QF: quadrupole, L=quadrupoleLenght, K1:=myK;
QD: quadrupole, L=quadrupoleLenght, K1:=-myK;
! *********************************************************************
! Definition of sequence
! *********************************************************************
myCell:sequence, refer=entry, L=L_CELL;
quadrupole1: QF, at=0;
marker1: marker, at=25;
quadrupole2: QD, at=50;
endsequence;
! *********************************************************************
! Definition of beam
! *********************************************************************
beam, particle=proton, energy=2;
! *********************************************************************
! Use of the sequence
! *********************************************************************
use, sequence=myCell;
! *********************************************************************
! TWISS
! *********************************************************************
select, flag=twiss, clear;
select, flag=twiss, column=name, keyword, s, betx, alfx, mux, bety, alfy, muy, x, px, y, py, dx, dy, dx, dpx, dy, dpy;
twiss, file=f20.txt;
'''
myMad.input(myString);
```
**INTERPRETATION**: The cell is unstable because the focal length is too short. Please note the values of cosmux and cosmuy. **REMEMBER**: stability requires $|\mathrm{Tr}(M)| \le 2$, i.e. $-1 \le \cos\mu \le 1$.
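This criterion can be checked numerically with thin-lens transfer matrices. A sketch (thin-lens approximation, so the numbers are only indicative; in this form $\cos\mu = 1 - L_{cell}^2/(8f^2)$):

```python
import numpy as np

def fodo_half_trace(f, l_cell=100.0):
    """Half-trace (= cos mu when stable) of a thin-lens FODO cell matrix."""
    O = np.array([[1.0, l_cell / 2], [0.0, 1.0]])  # drift of half a cell
    QF = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])   # thin focusing quad
    QD = np.array([[1.0, 0.0], [1.0 / f, 1.0]])    # thin defocusing quad
    M = QF @ O @ QD @ O  # one full cell (starting point does not affect the trace)
    return np.trace(M) / 2

print(fodo_half_trace(200.0))  # 0.96875 -> stable, |cos mu| <= 1
print(fodo_half_trace(20.0))   # -2.125  -> unstable, |Tr(M)| > 2
```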
# EXTRA
# Adding markers
This is an example of adding markers to the sequence using a macro.
```
myMad=Madx(stdout=True)
myString='''
! *********************************************************************
! Definition of parameters
! *********************************************************************
option, echo=false, info=false, warn=false;
l_cell=100;
quadrupoleLenght=5;
f=200;
myK:=1/f/quadrupoleLenght;// m^-2
! *********************************************************************
! Definition of magnet
! *********************************************************************
QF: quadrupole, L=quadrupoleLenght, K1:=myK;
QD: quadrupole, L=quadrupoleLenght, K1:=-myK;
installMarkers(nn): macro ={
markernn: marker, at=nn;
!value,f;
};
N=6;
! *********************************************************************
! Definition of sequence
! *********************************************************************
myCell:sequence, REFER=centre, L=L_CELL;
quadrupole1: QF, at=2.5;
while (N<50) {
exec, installMarkers($N);
N=N+1;
}
quadrupole2: QD, at=52.5;
N=56;
while (N<100) {
exec, installMarkers($N);
N=N+1;
}
endsequence;
! *********************************************************************
! Definition of beam
! *********************************************************************
beam, particle=proton, energy=2;
! *********************************************************************
! Use of the sequence
! *********************************************************************
use, sequence=myCell;
! *********************************************************************
! TWISS
! *********************************************************************
select, flag=twiss, clear;
select, flag=twiss, column=name, keyword, L, s, betx, alfx, mux, bety, alfy, muy, x, px, y, py, dx, dpx, dy, dpy;
title, 'My fourth twiss';
twiss, file=WithMarkers.txt;
'''
myMad.input(myString);
myDF=myMad.table.twiss.dframe()
myDF.head()
%matplotlib notebook
plt.plot(myDF['s'],myDF['betx'],'.-b',label='$\\beta_x$')
plt.plot(myDF['s'],myDF['bety'],'.-r',label='$\\beta_y$')
plt.xlabel('s [m]')
plt.ylabel('[m]')
plt.legend(loc='best')
plt.grid()
np.trapz(1/myDF['betx'],myDF['s'])/2/np.pi
# Value from MADX
bmax=np.max(myDF["betx"])
bmax
```
# Thin lens approximation
```
myString='''
select, flag=makethin, class=quadrupole, slice=5;
makethin,sequence=myCell;
use,sequence=myCell;
twiss,sequence=myCell,file=thin.txt;
'''
myMad.input(myString);
myDF_thin=myMad.table.twiss.dframe()
myDF_thin
%matplotlib notebook
plt.plot(myDF_thin['s'],myDF_thin['betx'],'.-b',label='$\\beta_x$')
plt.plot(myDF_thin['s'],myDF_thin['bety'],'.-r',label='$\\beta_y$')
plt.xlabel('s [m]')
plt.ylabel('[m]')
plt.legend(loc='best')
plt.grid()
np.trapz(1/myDF_thin['betx'],myDF_thin['s'])/2/np.pi
mux=np.max(myDF_thin["mux"])
mux
# Value from MADX
bmax=np.max(myDF_thin["betx"])
bmax
```
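These numbers can be cross-checked against the standard thin-lens FODO formulas, sin(μ/2) = L_cell/(4f) and β_max/min = L_cell(1 ± sin(μ/2))/sin μ (a sketch using the parameters defined above, L_cell = 100 m and f = 200 m):

```python
import numpy as np

# Analytic thin-lens FODO cross-check of the MAD-X twiss output (sketch)
l_cell, f = 100.0, 200.0
mu = 2 * np.arcsin(l_cell / (4 * f))                   # phase advance per cell [rad]
beta_max = l_cell * (1 + np.sin(mu / 2)) / np.sin(mu)  # at the focusing quad
beta_min = l_cell * (1 - np.sin(mu / 2)) / np.sin(mu)  # at the defocusing quad
print(mu / (2 * np.pi), beta_max, beta_min)            # tune, beta extremes [m]
```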
# QuickSort
- Based on the Divide and Conquer technique
- The array is recursively divided into **partitions**
- The technique used to create the partitions is the **backbone** of this algorithm
### QuickSort - What it is
In Short:
- 1. Define a Pivot element (can be the first element, the last element, or any random element from the array)
- 2. Find the correct position of the Pivot element in the array
- 3. This correct position of the Pivot will partition the array into a
     Left partition (values less than the Pivot) and a
     Right partition (values greater than the Pivot)
- 4. Now take the Left Partition - Repeat steps 1-3
- 5. Take the Right Partition - Repeat steps 1-3
- 6. Repeat 4 and 5 until there is only one element left in the partition
<span style='color:red'>**Note:**</span> Steps 4 and 5 can be handled **recursively**
### High-level Steps
1. To create this partition, we choose a random element as **Pivot element** <br>
Ex, <span style='color:gray'>**| 10 | 15 | 1 | 2 | 9 | 16 | 11 |**</span> -- Choose **10** as Pivot Element<br>
2. Move elements **Less than the Pivot Element** to **Left of the Pivot Element**. This makes up <span style='color:blue'>**Left Partition**</span> & <br>
<span style='color:gray'>**| 1 | 2 | 9 |**</span><br>
3. Move elements **Greater than the Pivot Element** to **Right of the Pivot Element**. This makes up <span style='color:blue'>**Right Partition**</span> <br>
<span style='color:gray'>**| 15 | 16 | 11 |**</span><br>
4. Equal Elements can go in either of the partitions <br>
After one complete iteration we get:
<span style='color:maroon'>**|Left Partition | Pivot | Right Partition|**</span> <br>
<span style='color:gray'>**| 1 | 2 | 9 | <span style='color:blue'><u>10</u></span> | 15 | 16 | 11 |**</span> <br>
We see that, in this process, the Pivot element **10** gets into its **correct sorted position**
5. Repeat steps-1 to 4 each for the Left and Right subpartitions, Continue until there is only one element left
### Steps to Create the Partitions
Performed for each subpartition - at the beginning, the whole array is one partition
1. Select the Pivot Element - Say 10
2. Create a **Start Marker** that will start from left end and move right
3. Create a **End Marker** that will start from right end and move left
4. Move the **Start Marker** to right and compare that element with the Pivot element
-- Say 15 compare with 10<br>
Keep moving the **Start Marker** as long as the <span style='color:maroon'>element is **<=** Pivot Element</span> <br>
5. Move the **End Marker** to left and compare that element with the Pivot element
-- Say 11 compare with 10<br>
Keep moving the **End Marker** to left as long as the <span style='color:maroon'>element is **>** Pivot Element</span> <br>
6. When both the conditions of step-4 and step-5 are met --> **Swap** the elements at the start and end positions
7. After swapping continue with steps 4 and 5 from where the start and end stopped
8. When start marker and end marker crosses each other - **Swap** the Pivot element with the element at the End Marker
9. The **End Marker** thus brings the Pivot element to its correct position, and this divides the array into two partitions<br>
At this time - we get <br>
a. The Pivot element placed at its correct position<br>
b. creation of left partition<br>
c. creation of right partition<br>
<b>[Left Partition] [Pivot] [Right Partition]</b>
10. We then take the left partition - Repeat steps - 1 to 9
11. Next we take the right partition - Repeat steps - 1 to 9
12. Steps 10 and 11 are **Recursive**
```
def create_partition(numlist, start, end):
    """
    Partition numlist[start..end] around the pivot numlist[start].
    input: numlist - the partition to work on
           start - left marker
           end - right marker
    returns: the final index of the pivot
    """
    pivot = numlist[start]
    pivot_ind = start
    N = end
    while start < end:
        while numlist[start] <= pivot:
            if start == N:
                break
            start += 1
        while numlist[end] > pivot:
            if end == 0:
                break
            end -= 1
        if start < end:
            numlist[start], numlist[end] = numlist[end], numlist[start]
    numlist[end], numlist[pivot_ind] = numlist[pivot_ind], numlist[end]
    return end

# Note: though we are using recursion, no explicit base case is needed as we
# are sorting in place; the implicit base case is the condition start < end
# (recursion stops when a partition has at most one element)
def quicksort(numlist, start, end):
    if start < end:
        loc = create_partition(numlist, start, end)  # partition location
        quicksort(numlist, start, loc - 1)           # left partition
        quicksort(numlist, loc + 1, end)             # right partition
print()
numlist = [10, 15, 1, 2, 9, 16, 11]
print("Original :", numlist)
quicksort(numlist, 0, len(numlist) - 1)
print("Sorted   :", numlist)
print()
numlist = [7, 6, 10, 5, 9, 2, 1, 15, 7]
print("Original :", numlist)
quicksort(numlist, 0, len(numlist) - 1)
print("Sorted   :", numlist)
```
# Option 2: Using a For Loop instead of While Loops
```
def quicksort2(arr, start, end):
    if start < end:
        loc = create_partitions2(arr, start, end)
        quicksort2(arr, start, loc - 1)
        quicksort2(arr, loc + 1, end)

def create_partitions2(arr, start, end):
    # Lomuto-style partition with the pivot taken from the start of the range
    pivot = arr[start]
    i = start - 1
    for j in range(start, end + 1):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i], arr[start] = arr[start], arr[i]  # place the pivot at index i
    return i
print()
numlist = [7, 6, 10, 5, 9, 2, 1, 15, 7]
print("Original :", numlist)
quicksort2(numlist, 0, len(numlist) - 1)
print("Sorted   :", numlist)
```
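As a sanity check, the partition-based sort can be compared against Python's built-in `sorted` on random inputs (a self-contained sketch; `qsort` below is a compact restatement of the Lomuto-style `quicksort2` above):

```python
import random

def qsort(a, lo, hi):
    # Compact Lomuto-style quicksort, equivalent to quicksort2 above
    if lo < hi:
        pivot, i = a[lo], lo - 1
        for j in range(lo, hi + 1):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i], a[lo] = a[lo], a[i]   # place the pivot at its final index i
        qsort(a, lo, i - 1)
        qsort(a, i + 1, hi)

random.seed(0)
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
    ys = xs[:]
    qsort(ys, 0, len(ys) - 1)
    assert ys == sorted(xs)
print("all random tests passed")
```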
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Forecasting with a stateful RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c07_forecasting_with_stateful_rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c07_forecasting_with_stateful_rnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Setup
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
    plt.plot(time[start:end], series[start:end], format, label=label)
    plt.xlabel("Time")
    plt.ylabel("Value")
    if label:
        plt.legend(fontsize=14)
    plt.grid(True)

def trend(time, slope=0):
    return slope * time

def seasonal_pattern(season_time):
    """Just an arbitrary pattern, you can change it if you wish"""
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))

def seasonality(time, period, amplitude=1, phase=0):
    """Repeats the same pattern at each period"""
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)

def white_noise(time, noise_level=1, seed=None):
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
```
## Stateful RNN Forecasting
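The batching scheme used below can be illustrated with plain NumPy first (a sketch): the series is cut into consecutive, non-overlapping windows, and within each window the target is the input shifted one step ahead, which is exactly what a stateful RNN needs.

```python
import numpy as np

def sequential_windows(series, window_size):
    # Non-overlapping windows of window_size + 1 samples; inputs are the
    # first window_size samples, targets the same samples shifted by one.
    n = (len(series) - 1) // window_size
    X = np.stack([series[i * window_size : i * window_size + window_size]
                  for i in range(n)])
    Y = np.stack([series[i * window_size + 1 : i * window_size + window_size + 1]
                  for i in range(n)])
    return X, Y

X, Y = sequential_windows(np.arange(10), 3)
print(X)  # [[0 1 2] [3 4 5] [6 7 8]]
print(Y)  # [[1 2 3] [4 5 6] [7 8 9]]
```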
```
def sequential_window_dataset(series, window_size):
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True)
    ds = ds.flat_map(lambda window: window.batch(window_size + 1))
    ds = ds.map(lambda window: (window[:-1], window[1:]))
    return ds.batch(1).prefetch(1)

for X_batch, y_batch in sequential_window_dataset(tf.range(10), 3):
    print(X_batch.numpy(), y_batch.numpy())

class ResetStatesCallback(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self.model.reset_states()
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = sequential_window_dataset(x_train, window_size)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True, stateful=True,
batch_input_shape=[1, None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 30))
reset_states = ResetStatesCallback()
optimizer = keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100,
callbacks=[lr_schedule, reset_states])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = sequential_window_dataset(x_train, window_size)
valid_set = sequential_window_dataset(x_valid, window_size)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True, stateful=True,
batch_input_shape=[1, None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
reset_states = ResetStatesCallback()
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint.h5", save_best_only=True)
early_stopping = keras.callbacks.EarlyStopping(patience=50)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint, reset_states])
model = keras.models.load_model("my_checkpoint.h5")
model.reset_states()
rnn_forecast = model.predict(series[np.newaxis, :, np.newaxis])
rnn_forecast = rnn_forecast[0, split_time - 1:-1, 0]
rnn_forecast.shape
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Notebook authors: Kevin P. Murphy (murphyk@gmail.com)
# and Mahmoud Soliman (mjs@aucegypt.edu)
# This notebook reproduces figures for chapter 15 from the book
# "Probabilistic Machine Learning: An Introduction"
# by Kevin Murphy (MIT Press, 2021).
# Book pdf is available from http://probml.ai
```
<a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
<a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter15_neural_networks_for_sequences_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Figure 15.1:<a name='15.1'></a> <a name='rnn'></a>
Recurrent neural network (RNN) for generating a variable-length output sequence $\mathbf{y}_{1:T}$ given an optional fixed-length input vector $\mathbf{x}$.
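The unrolled recurrence behind the figure fits in a few lines of NumPy (a sketch with random placeholder weights and hypothetical dimensions; $h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b)$, with one output read off each hidden state):

```python
import numpy as np

# Minimal unrolled RNN (sketch): random placeholder weights, hypothetical sizes
rng = np.random.default_rng(0)
d_in, d_h, T = 4, 8, 5
Wxh = rng.normal(size=(d_h, d_in)) * 0.1
Whh = rng.normal(size=(d_h, d_h)) * 0.1
b = np.zeros(d_h)

x = rng.normal(size=(T, d_in))   # (optional) input at each step
h = np.zeros(d_h)
hs = []
for t in range(T):
    h = np.tanh(Wxh @ x[t] + Whh @ h + b)  # h_t depends on x_t and h_{t-1}
    hs.append(h)
hs = np.stack(hs)                # (T, d_h): one hidden state per output step
print(hs.shape)
```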
```
#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."  # one above current scripts directory
    import google.colab
    from google.colab.patches import cv2_imshow
    %reload_ext autoreload
    %autoreload 2
    def show_image(img_path, size=None, ratio=None):
        img = colab_utils.image_resize(img_path, size)
        cv2_imshow(img)
    print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.1.png" width="256"/>
## Figure 15.2:<a name='15.2'></a> <a name='rnnTimeMachine'></a>
Example output of length 500 generated from a character-level RNN when given the prefix "the". We use greedy decoding, in which the most likely character at each step is computed and then fed back into the model. The model is trained on the book *The Time Machine* by H. G. Wells.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks-d2l/rnn_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Figure 15.3:<a name='15.3'></a> <a name='imageCaptioning'></a>
Illustration of a CNN-RNN model for image captioning. The pink boxes labeled "LSTM" refer to a specific kind of RNN discussed in the LSTM section. The pink boxes labeled $W_{\text{emb}}$ refer to embedding matrices for the (sampled) one-hot tokens, so that the input to the model is a real-valued vector. From https://bit.ly/2FKnqHm . Used with kind permission of Yunjey Choi.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.3.pdf" width="256"/>
## Figure 15.4:<a name='15.4'></a> <a name='rnnBiPool'></a>
(a) RNN for sequence classification. (b) Bi-directional RNN for sequence classification.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.4_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.4_B.png" width="256"/>
## Figure 15.5:<a name='15.5'></a> <a name='biRNN'></a>
(a) RNN for transforming a sequence to another, aligned sequence. (b) Bi-directional RNN for the same task.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.5_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.5_B.png" width="256"/>
## Figure 15.6:<a name='15.6'></a> <a name='deepRNN'></a>
Illustration of a deep RNN. Adapted from Figure 9.3.1 of <a href='#dive'>[Zha+20]</a> .
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.6.png" width="256"/>
## Figure 15.7:<a name='15.7'></a> <a name='seq2seq'></a>
Encoder-decoder RNN architecture for mapping sequence $\mathbf{x}_{1:T}$ to sequence $\mathbf{y}_{1:T'}$.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.7.png" width="256"/>
## Figure 15.8:<a name='15.8'></a> <a name='NMT'></a>
(a) Illustration of a seq2seq model for translating English to French. The - character represents the end of a sentence. From Figure 2.4 of <a href='#Luong2016thesis'>[Luo16]</a> . Used with kind permission of Minh-Thang Luong. (b) Illustration of greedy decoding. The most likely French word at each step is highlighted in green, and then fed in as input to the next step of the decoder. From Figure 2.5 of <a href='#Luong2016thesis'>[Luo16]</a> . Used with kind permission of Minh-Thang Luong.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.8_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.8_B.png" width="256"/>
## Figure 15.9:<a name='15.9'></a> <a name='BPTT'></a>
An RNN unrolled (vertically) for 3 time steps, with the target output sequence and loss node shown explicitly. From Figure 8.7.2 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.9.png" width="256"/>
## Figure 15.10:<a name='15.10'></a> <a name='GRU'></a>
Illustration of a GRU. Adapted from Figure 9.1.3 of <a href='#dive'>[Zha+20]</a> .
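A single GRU update can be sketched in NumPy (following the d2l update convention $h_t = z \odot h_{t-1} + (1-z) \odot \tilde h_t$; the weights below are random placeholders with hypothetical sizes):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W):
    # One GRU step (sketch; d2l-style update h_t = z*h + (1-z)*h_tilde)
    z = sigmoid(W['xz'] @ x + W['hz'] @ h)              # update gate
    r = sigmoid(W['xr'] @ x + W['hr'] @ h)              # reset gate
    h_tilde = np.tanh(W['xh'] @ x + W['hh'] @ (r * h))  # candidate state
    return z * h + (1 - z) * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = {k: rng.normal(size=(d_h, d_in if k[0] == 'x' else d_h)) * 0.1
     for k in ['xz', 'hz', 'xr', 'hr', 'xh', 'hh']}
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), W)
print(h.shape)
```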
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.10.png" width="256"/>
## Figure 15.11:<a name='15.11'></a> <a name='LSTM'></a>
Illustration of an LSTM. Adapted from Figure 9.2.4 of <a href='#dive'>[Zha+20]</a> .
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.11.png" width="256"/>
## Figure 15.12:<a name='15.12'></a> <a name='stsProb'></a>
Conditional probabilities of generating each token at each step for two different sequences. From Figures 9.8.1--9.8.2 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.12_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.12_B.png" width="256"/>
## Figure 15.13:<a name='15.13'></a> <a name='beamSearch'></a>
Illustration of beam search using a beam of size $K=2$. The vocabulary is $\mathcal{Y} = \{A,B,C,D,E\}$, with size $V=5$. We assume the top 2 symbols at step 1 are A,C. At step 2, we evaluate $p(y_1=A,y_2=y)$ and $p(y_1=C,y_2=y)$ for each $y \in \mathcal{Y}$. This takes $O(K V)$ time. We then pick the top 2 partial paths, which are $(y_1=A,y_2=B)$ and $(y_1=C,y_2=E)$, and continue in the obvious way. Adapted from Figure 9.8.3 of <a href='#dive'>[Zha+20]</a> .
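The procedure in the caption can be sketched in a few lines (the conditional probabilities below are hypothetical, chosen so that the step-1 top-2 are A and C as in the figure; each step scores the $K \times V$ one-token extensions and keeps the best $K$):

```python
import math

# Hypothetical conditionals p(y_t | y_{<t}) over the vocabulary {A,...,E}
TABLE = {
    (): {'A': 0.4, 'B': 0.1, 'C': 0.3, 'D': 0.1, 'E': 0.1},
    ('A',): {'A': 0.1, 'B': 0.5, 'C': 0.2, 'D': 0.1, 'E': 0.1},
    ('C',): {'A': 0.1, 'B': 0.2, 'C': 0.1, 'D': 0.1, 'E': 0.5},
}

def log_p(prefix, token):
    probs = TABLE.get(prefix, {t: 0.2 for t in 'ABCDE'})
    return math.log(probs[token])

def beam_search(vocab, steps, K):
    beams = [((), 0.0)]  # (prefix, cumulative log-probability)
    for _ in range(steps):
        # Score all K*V one-token extensions, keep the K best partial paths
        cands = [(p + (t,), lp + log_p(p, t)) for p, lp in beams for t in vocab]
        beams = sorted(cands, key=lambda c: c[1], reverse=True)[:K]
    return beams

print(beam_search('ABCDE', steps=2, K=2))
# top-2 after step 2: ('A', 'B') with p = 0.2 and ('C', 'E') with p = 0.15
```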
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.13.png" width="256"/>
## Figure 15.14:<a name='15.14'></a> <a name='textCNN'></a>
Illustration of the TextCNN model for binary sentiment classification. Adapted from Figure 15.3.5 of <a href='#dive'>[Zha+20]</a> .
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.14.png" width="256"/>
## Figure 15.15:<a name='15.15'></a> <a name='wavenet'></a>
Illustration of the wavenet model using dilated (atrous) convolutions, with dilation factors of 1, 2, 4 and 8. From Figure 3 of <a href='#wavenet'>[Aar+16]</a> . Used with kind permission of Aaron van den Oord.
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.15.png" width="256"/>
## Figure 15.16:<a name='15.16'></a> <a name='attention'></a>
Attention computes a weighted average of a set of values, where the weights are derived by comparing the query vector to a set of keys. From Figure 10.3.1 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
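In code, the mechanism is a softmax over query-key similarity followed by a weighted average of the values (a sketch with made-up keys and values, using scaled dot-product scores):

```python
import numpy as np

def attend(q, keys, values):
    scores = keys @ q / np.sqrt(q.size)   # compare the query with each key
    w = np.exp(scores - scores.max())
    w /= w.sum()                          # softmax: weights sum to 1
    return w @ values, w

keys = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
values = np.array([10.0, 20.0, 30.0])
out, w = attend(np.array([5.0, 0.0]), keys, values)
print(w, out)  # most weight on the first key, so out is pulled toward 10
```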
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.16.pdf" width="256"/>
## Figure 15.17:<a name='15.17'></a> <a name='attenRegression'></a>
Kernel regression in 1d. (a) Kernel weight matrix. (b) Resulting predictions on a dense grid of test points.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks/kernel_regression_attention.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
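A miniature version of the figure is Nadaraya-Watson kernel regression (a sketch; Gaussian kernel with a hypothetical bandwidth): the row-normalized kernel weight matrix plays the role of the attention heatmap in panel (a), and the weighted averages give the predictions in panel (b).

```python
import numpy as np

def nw_predict(x_train, y_train, x_test, bandwidth=0.5):
    # Each prediction is an attention-weighted average of training targets
    d = x_test[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    w /= w.sum(axis=1, keepdims=True)     # rows sum to 1 (attention weights)
    return w @ y_train, w

x = np.linspace(0, 5, 50)
y = np.sin(x)
y_hat, attn = nw_predict(x, y, x)
print(attn.shape, float(np.abs(y_hat - y).mean()))  # (50, 50) weight matrix
```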
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.17_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.17_B.png" width="256"/>
## Figure 15.18:<a name='15.18'></a> <a name='seq2seqAttn'></a>
Illustration of seq2seq with attention for English to French translation. Used with kind permission of Minh-Thang Luong.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.18.png" width="256"/>
## Figure 15.19:<a name='15.19'></a> <a name='translationHeatmap'></a>
Illustration of the attention heatmaps generated while translating two sentences from Spanish to English. (a) Input is ``hace mucho frio aqui.'', output is ``it is very cold here.''. (b) Input is ``¿todavia estan en casa?'', output is ``are you still at home?''. Note that when generating the output token ``home'', the model should attend to the input token ``casa'', but instead it seems to attend to the input token ``?''.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.19_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.19_B.png" width="256"/>
## Figure 15.20:<a name='15.20'></a> <a name='EHR'></a>
Example of an electronic health record. In this example, 24h after admission to the hospital, the RNN classifier predicts the risk of death as 19.9%; the patient ultimately died 10 days after admission. The ``relevant'' keywords from the input clinical notes are shown in red, as identified by an attention mechanism. From Figure 3 of <a href='#Rajkomar2018'>[Alv+18]</a> . Used with kind permission of Alvin Rajkomar.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.20.png" width="256"/>
## Figure 15.21:<a name='15.21'></a> <a name='SNLI'></a>
Illustration of sentence pair entailment classification using an MLP with attention to align the premise (``I do need sleep'') with the hypothesis (``I am tired''). White squares denote active attention weights, blue squares are inactive. (We are assuming hard 0/1 attention for simplicity.) From Figure 15.5.2 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.21.png" width="256"/>
## Figure 15.22:<a name='15.22'></a> <a name='showAttendTell'></a>
Image captioning using attention. (a) Soft attention. Generates ``a woman is throwing a frisbee in a park''. (b) Hard attention. Generates ``a man and a woman playing frisbee in a field''. From Figure 6 of <a href='#showAttendTell'>[Kel+15]</a> . Used with kind permission of Kelvin Xu.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.22_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.22_B.png" width="256"/>
## Figure 15.23:<a name='15.23'></a> <a name='transformerTranslation'></a>
Illustration of how encoder self-attention for the word ``it'' differs depending on the input context. From https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html . Used with kind permission of Jakob Uszkoreit.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.23.png" width="256"/>
## Figure 15.24:<a name='15.24'></a> <a name='multiHeadAttn'></a>
Multi-head attention. Adapted from Figure 9.3.3 of <a href='#dive'>[Zha+20]</a> .
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.24.png" width="256"/>
## Figure 15.25:<a name='15.25'></a> <a name='positionalEncodingSinusoids'></a>
(a) Positional encoding matrix for a sequence of length $n=60$ and an embedding dimension of size $d=32$. (b) Basis functions for columns 6 to 9.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks-d2l/positional_encoding.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.25_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.25_B.png" width="256"/>
## Figure 15.26:<a name='15.26'></a> <a name='transformer'></a>
The transformer. From <a href='#Weng2018attention'>[Lil18]</a> . Used with kind permission of Lilian Weng. Adapted from Figures 1--2 of <a href='#Vaswani2017'>[Ash+17]</a> .
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.26.png" width="256"/>
## Figure 15.27:<a name='15.27'></a> <a name='attentionBakeoff'></a>
Comparison of (1d) CNNs, RNNs and self-attention models. From Figure 10.6.1 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.27.png" width="256"/>
## Figure 15.28:<a name='15.28'></a> <a name='VIT'></a>
The Vision Transformer (ViT) model. This treats an image as a set of input patches. The input is prepended with the special CLASS embedding vector (denoted by *) in location 0. The class label for the image is derived by applying softmax to the final ouput encoding at location 0. From Figure 1 of <a href='#ViT'>[Ale+21]</a> . Used with kind permission of Alexey Dosovitskiy
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.28.png" width="256"/>
## Figure 15.29:<a name='15.29'></a> <a name='transformers_taxonomy'></a>
Venn diagram presenting the taxonomy of different efficient transformer architectures. From <a href='#Tay2020transformers'>[Yi+20]</a> . Used with kind permission of Yi Tay.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.29.pdf" width="256"/>
## Figure 15.30:<a name='15.30'></a> <a name='rand_for_fast_atten'></a>
Attention matrix $\mathbf{A}$ rewritten as a product of two lower-rank matrices $\mathbf{Q}^\prime$ and $(\mathbf{K}^\prime)^{\mkern -1.5mu\mathsf{T}}$ with random feature maps $\boldsymbol{\phi}(\mathbf{q}_i) \in \mathbb{R}^M$ and $\boldsymbol{\phi}(\mathbf{k}_j) \in \mathbb{R}^M$ for the corresponding queries/keys stored in the rows/columns. Used with kind permission of Krzysztof Choromanski.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.30.png" width="256"/>
## Figure 15.31:<a name='15.31'></a> <a name='fatten'></a>
Decomposition of the attention matrix $\mathbf{A}$ can be leveraged to improve attention computation via the matrix associativity property. To compute $\mathbf{A}\mathbf{V}$, we first calculate $\mathbf{G} = (\mathbf{K}^\prime)^{\mkern -1.5mu\mathsf{T}}\mathbf{V}$ and then $\mathbf{Q}^\prime\mathbf{G}$, resulting in space and time complexity linear in $N$. Used with kind permission of Krzysztof Choromanski.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.31.png" width="256"/>
## Figure 15.32:<a name='15.32'></a> <a name='elmo'></a>
Illustration of the ELMo bidirectional language model. Here $y_t = x_{t+1}$ when acting as the target for the forwards LSTM, and $y_t = x_{t-1}$ for the backwards LSTM. (We add *bos* and *eos* sentinels to handle the edge cases.) From <a href='#Weng2019LM'>[Lil19]</a> . Used with kind permission of Lilian Weng.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.32.png" width="256"/>
## Figure 15.33:<a name='15.33'></a> <a name='GPT'></a>
Illustration of (a) BERT and (b) GPT. $E_t$ is the embedding vector for the input token at location $t$, and $T_t$ is the output target to be predicted. From Figure 3 of <a href='#bert'>[Jac+19]</a> . Used with kind permission of Ming-Wei Chang.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.33_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.33_B.png" width="256"/>
## Figure 15.34:<a name='15.34'></a> <a name='bertEmbedding'></a>
Illustration of how a pair of input sequences, denoted A and B, are encoded before feeding to BERT. From Figure 14.8.2 of <a href='#dive'>[Zha+20]</a> . Used with kind permission of Aston Zhang.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.34.png" width="256"/>
## Figure 15.35:<a name='15.35'></a> <a name='bert-tasks'></a>
Illustration of how BERT can be used for different kinds of supervised NLP tasks. (a) Single sentence classification (e.g., sentiment analysis); (b) Sentence-pair classification (e.g., textual entailment); (c) Single sentence tagging (e.g., shallow parsing); (d) Question answering. From Figure 4 of <a href='#bert'>[Jac+19]</a> . Used with kind permission of Ming-Wei Chang.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.35_A.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.35_B.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.35_C.png" width="256"/>
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.35_D.png" width="256"/>
## Figure 15.36:<a name='15.36'></a> <a name='T5'></a>
Illustration of how the T5 model (``Text-to-Text Transfer Transformer'') can be used to perform multiple NLP tasks, such as translating English to German; determining if a sentence is linguistically valid or not (CoLA stands for ``Corpus of Linguistic Acceptability''); determining the degree of semantic similarity (STSB stands for ``Semantic Textual Similarity Benchmark''); and abstractive summarization. From Figure 1 of <a href='#T5'>[Col+19]</a> . Used with kind permission of Colin Raffel.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
```
<img src="https://github.com/probml/pml-book/raw/main/pml1/figures/Figure_15.36.png" width="256"/>
## References:
<a name='wavenet'>[Aar+16]</a> A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior and K. Kavukcuoglu. "WaveNet: A Generative Model for Raw Audio". abs/1609.03499 (2016). arXiv: 1609.03499
<a name='ViT'>[Ale+21]</a> A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit and N. Houlsby. "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". (2021).
<a name='Rajkomar2018'>[Alv+18]</a> A. Rajkomar, E. Oren, K. Chen, et al. "Scalable and accurate deep learning with electronic health records". In: NPJ Digital Medicine (2018).
<a name='Vaswani2017'>[Ash+17]</a> A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin. "Attention Is All You Need". (2017).
<a name='T5'>[Col+19]</a> C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li and P. J. Liu. "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". abs/1910.10683 (2019). arXiv: 1910.10683
<a name='bert'>[Jac+19]</a> J. Devlin, M.-W. Chang, K. Lee and K. Toutanova. "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". (2019).
<a name='showAttendTell'>[Kel+15]</a> K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel and Y. Bengio. "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention". (2015).
<a name='Weng2018attention'>[Lil18]</a> L. Weng. "Attention? Attention!". In: lilianweng.github.io/lil-log (2018).
<a name='Weng2019LM'>[Lil19]</a> L. Weng. "Generalized Language Models". In: lilianweng.github.io/lil-log (2019).
<a name='Luong2016thesis'>[Luo16]</a> M.-T. Luong. "Neural Machine Translation". PhD thesis (2016).
<a name='Tay2020transformers'>[Yi+20]</a> Y. Tay, M. Dehghani, D. Bahri and D. Metzler. "Efficient Transformers: A Survey". abs/2009.06732 (2020). arXiv: 2009.06732
<a name='dive'>[Zha+20]</a> A. Zhang, Z. C. Lipton, M. Li and A. J. Smola. "Dive into Deep Learning". (2020).
```
import pandas as pd
df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
'num_wings': [2, 0, 0, 0],
'num_specimen_seen': [10, 2, 1, 8]},
index=['falcon', 'dog', 'spider', 'fish'])
from ipyaggrid import Grid
# Use Python booleans here: string values like 'false' would be serialized to
# JavaScript as non-empty strings, which are truthy, silently enabling the option.
grid_options_1 = {
    'enableSorting': False,
    'enableFilter': False,
    'enableColResize': False,
    'enableRowSelection': True,
    'rowSelection': 'multiple',
    'rowsAfterGroup': True,
}
buttons=[{'name':'Show selection in table', 'action':"""
var count = view.gridOptions.api.getDisplayedRowCount();
for (var i = 0; i<count; i++) {
if(i == view.model.get('user_params').selected_id){
var rowNode = view.gridOptions.api.getDisplayedRowAtIndex(i);
rowNode.setSelected(true, true);
view.gridOptions.api.ensureIndexVisible(view.model.get('user_params').selected_id,'top');
}
}"""}]
grid1 = Grid(grid_data=df,
grid_options=grid_options_1,
quick_filter=True,
export_csv=False,
menu = {'buttons':buttons},
export_excel=False,
#show_toggle_edit=True,
export_mode='auto',
index=True,
keep_multiindex=False,
theme='ag-theme-fresh',
user_params={'selected_id':0})
display(grid1)
grid1.user_params = {'selected_id': 2}
import os
import json
import pandas as pd
import numpy as np
import urllib.request as ur
from copy import deepcopy as copy
from ipyaggrid import Grid
url = 'https://raw.githubusercontent.com/bahamas10/css-color-names/master/css-color-names.json'
with ur.urlopen(url) as res:
cnames = json.loads(res.read().decode('utf-8'))
colors = []
for k in cnames.keys():
colors.append({'color':k, 'value':cnames[k]})
colors_ref = colors[:]
css_rules="""
.color-box{
float: left;
width: 10px;
height: 10px;
margin: 5px;
border: 1px solid rgba(0, 0, 0, .2);
}
"""
columnDefs = [
{'headerName': 'Color', 'field':'color',
'pinned': True, 'editable': True},
{'headerName': 'Code', 'field':'value', 'editable': False, 'cellRenderer': """
function(params){
return `<div><div style="background-color:${params.value}" class='color-box'></div><span>${params.value}</span></div>`
}"""}
]
gridOptions = {'columnDefs': columnDefs,
               'enableFilter': True,
               'enableSorting': True,
               'rowSelection': 'multiple',
               }
color_grid = Grid(width=400,
height=250,
css_rules=css_rules,
grid_data=colors,
grid_options=gridOptions,
sync_on_edit=True,
sync_grid=True, #default
)
color_grid.get_grid()
color_grid.get_selected_rows()
color_grid
color_grid.grid_data_out.get('grid')
gridOptions = {'columnDefs': columnDefs,
               'enableFilter': True,
               'enableColResize': True,  # the option name is 'enableColResize', not 'enableColumnResize'
               'enableSorting': True,
               }
color_grid2 = Grid(width=500,
height=250,
css_rules=css_rules,
quick_filter=True,
show_toggle_edit=True,
grid_data=colors_ref,
grid_options=gridOptions)
color_grid2
colors = colors_ref[:]
colors.append({'color':'jupyterorange', 'value':'#f37626'})
color_grid2.update_grid_data(copy(colors)) # New data set corresponding to the good columns
color_grid2.delete_selected_rows()
color_grid2.grid_data_out['grid']
```
# Lab 11 Download Census Data into Python
```
from urllib import request
import json
from pprint import pprint
census_api_key = 'f84452395038a4790772cc768cb13ecbe0e6a636' #get your key from https://api.census.gov/data/key_signup.html
url_str = 'https://api.census.gov/data/2019/acs/acs5?get=B01001_001E,NAME&for=county:*&in=state:51&key='+census_api_key # create the url of your census data
response = request.urlopen(url_str)         # send the request
html_str = response.read().decode("utf-8")  # decode the response body into a string
if html_str:
    json_data = json.loads(html_str)  # parse the string as JSON
    print(json_data[0])  # header row
    for v1, name, state, county in json_data[1:]:
        print(v1, name, state, county)
```
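Since the payload is a JSON array whose first row is the header, the same response can also be loaded straight into a pandas DataFrame. A convenience sketch (the counts below are made up for illustration; the lab itself keeps the plain loop):

```python
import pandas as pd

# Illustrative rows in the shape the Census API returns:
# the first row is the header, the rest are data rows of strings.
json_data = [
    ["B01001_001E", "NAME", "state", "county"],
    ["1000", "Fairfax County, Virginia", "51", "059"],
    ["50", "Highland County, Virginia", "51", "091"],
]
df = pd.DataFrame(json_data[1:], columns=json_data[0])
df["B01001_001E"] = df["B01001_001E"].astype(int)  # counts come back as strings
```

With the data in a DataFrame, questions like "which county has the highest population" become one-liners such as `df.loc[df["B01001_001E"].idxmax(), "NAME"]`.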
## 3.1 County with highest population
```
url_str = 'https://api.census.gov/data/2019/acs/acs5?get=B01001_001E,NAME&for=county:*&in=state:51&key='+census_api_key # create the url of your census data
response = request.urlopen(url_str)         # send the request
html_str = response.read().decode("utf-8")  # decode the response body into a string
max_p = 0
max_county = ''
if html_str:
    json_data = json.loads(html_str)  # parse the string as JSON
    for v1, name, state, county in json_data[1:]:
        if max_p < int(v1):
            max_p = int(v1)
            max_county = name
print("{} has the highest population {}.".format(max_county, max_p))
```
## 3.2 County with highest male population
```
url_str = 'https://api.census.gov/data/2019/acs/acs5?get=B01001_002E,NAME&for=county:*&in=state:51&key='+census_api_key # create the url of your census data
response = request.urlopen(url_str)         # send the request
html_str = response.read().decode("utf-8")  # decode the response body into a string
max_p = 0
max_county = ''
if html_str:
    json_data = json.loads(html_str)  # parse the string as JSON
    for v1, name, state, county in json_data[1:]:
        if max_p < int(v1):
            max_p = int(v1)
            max_county = name
print("{} has the highest male population {}.".format(max_county, max_p))
```
## 3.3 County with highest male:total population ratio
```
url_str = 'https://api.census.gov/data/2019/acs/acs5?get=B01001_001E,B01001_002E,NAME&for=county:*&in=state:51&key='+census_api_key # create the url of your census data
response = request.urlopen(url_str)         # send the request
html_str = response.read().decode("utf-8")  # decode the response body into a string
max_p = 0
max_county = ''
if html_str:
    json_data = json.loads(html_str)  # parse the string as JSON
    for v1, v2, name, state, county in json_data[1:]:
        ratio = int(v2) / int(v1)  # male population / total population
        if max_p < ratio:
            max_p = ratio
            max_county = name
print("{} has the highest male/total population ratio {}.".format(max_county, max_p))
```
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science:
## Homework 2: Linear and k-NN Regression
**Harvard University**<br/>
**Fall 2019**<br/>
**Instructors**: Pavlos Protopapas, Kevin Rader, Chris Tanner
<hr style="height:2pt">
```
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
### INSTRUCTIONS
- To submit your assignment follow the instructions given in Canvas.
- Restart the kernel and run the whole notebook again before you submit.
- If you submit individually and you have worked with someone, please include the name of your [one] partner below.
- As much as possible, try and stick to the hints and functions we import at the top of the homework, as those are the ideas and tools the class supports and is aiming to teach. If a problem specifies a particular library, you're required to use that library, and possibly others from the import list.
- Please use .head() when viewing data. Do not submit a notebook that is excessively long because output was not suppressed or otherwise limited.
<hr style="height:2pt">
```
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
from statsmodels.api import OLS
%matplotlib inline
```
# <div class="theme"> Predicting Taxi Pickups in NYC </div>
In this homework, we will explore k-nearest neighbor and linear regression methods for predicting a quantitative variable. Specifically, we will build regression models that can predict the number of taxi pickups in New York City at any given time of the day. These prediction models will be useful, for example, in monitoring traffic in the city.
The data set for this problem is given in the file `nyc_taxi.csv`. You will need to separate it into training and test sets. The first column contains the time of a day in minutes, and the second column contains the number of pickups observed at that time. The data set covers taxi pickups recorded in NYC during Jan 2015.
We will fit regression models that use the time of the day (in minutes) as a predictor and predict the average number of taxi pickups at that time. The models will be fitted to the training set and evaluated on the test set. The performance of the models will be evaluated using the $R^2$ metric.
## <div class="exercise"> <b> Question 1 [25 pts]</b> </div>
**1.1**. Use pandas to load the dataset from the csv file `nyc_taxi.csv` into a pandas data frame. Use the `train_test_split` method from `sklearn` with a `random_state` of 42 and a `test_size` of 0.2 to split the dataset into training and test sets. Store your train set data frame as `train_data` and your test set data frame as `test_data`.
**1.2**. Generate a scatter plot of the training data points with clear labels on the x and y axes to demonstrate how the number of taxi pickups is dependent on the time of the day. Always be sure to title your plot.
**1.3**. In a few sentences, describe the general pattern of taxi pickups over the course of the day and explain why this is a reasonable result.
**1.4**. You should see a *hole* in the scatter plot when `TimeMin` is 500-550 minutes and `PickupCount` is roughly 20-30 pickups. Briefly surmise why this is the case.
### Answers
**1.1 Use pandas to load the dataset from the csv file ...**
```
# read the file
# your code here
data = pd.read_csv('data/nyc_taxi.csv')
data.describe()
# split the data
# your code here
# Random_state makes sure same split each time this random process is run (takes out randomness)
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)
train_data.head()
# your code here
train_data.describe()
# Test size is indeed 20% of total
# your code here
print(test_data.shape)
test_data.describe()
```
**1.2 Generate a scatter plot of the training data points**
```
# Your code here
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(14,6))
axes[0].scatter(train_data['TimeMin'], train_data['PickupCount'])
axes[0].set_xlabel('Time of Day in Minutes')
axes[0].set_ylabel('# of Pickups')
# Hours might be a more readable format of displaying the x-axis; apply a scale transformation
axes[1].scatter(train_data['TimeMin']/60, train_data['PickupCount']);
axes[1].set_xlabel('Time of Day in Hours');
axes[1].set_ylabel('# of Pickups');
fig.suptitle("Taxi Pickups in NYC vs Time of Day");
```
**1.3 In a few sentences, describe the general pattern of taxi pickups over the course of the day and explain why this is a reasonable result.**
*your answer here*
The pattern of pickups bears out the social rhythms you'd expect in a major urban metropolis like New York. We see very high pickup counts between midnight and 5 a.m., when people take cabs home as bars close (in a city that never sleeps, as opposed to a quiet academic town like Boston). Pickups then start from a low point in the early morning (just after 5 a.m.), at the beginning of the morning commute when you'd expect very little socializing, and increase steadily toward the common social hours in the evening, when you'd expect people to congregate for dinner, shows, concerts, etc. There also appears to be a mid-morning surge from about 8 a.m. to 10:30 a.m., perhaps as some people travel to work by taxi.
**1.4 You should see a hole in the scatter plot when TimeMin is 500-550 minutes and PickupCount is roughly 20-30 pickups.**
*your answer here*
Weekends and weekdays likely have different behavior for the number of taxi pickups over the course of the day, especially during morning rush hour (8-9 am) when many folks are still in bed or at home on the weekends.
## <div class="exercise"> <b>Question 2 [25 pts]</b> </div>
In lecture we've seen k-Nearest Neighbors (k-NN) Regression, a non-parametric regression technique. In the following problems please use built in functionality from `sklearn` to run k-NN Regression.
**2.1**. Choose `TimeMin` as your feature variable and `PickupCount` as your response variable. Create a dictionary of `KNeighborsRegressor` objects and call it `KNNModels`. Let the key for your `KNNmodels` dictionary be the value of $k$ and the value be the corresponding `KNeighborsRegressor` object. For $k \in \{1, 10, 75, 250, 500, 750, 1000\}$, fit k-NN regressor models on the training set (`train_data`).
**2.2**. For each $k$, overlay a scatter plot of the actual values of `PickupCount` vs. `TimeMin` in the training set with a scatter plot of **predictions** for `PickupCount` vs `TimeMin`. Do the same for the test set. You should have one figure with 7 x 2 total subplots; for each $k$ the figure should have two subplots, one subplot for the training set and one for the test set.
**Hints**:
1. Each subplot should use different color and/or markers to distinguish k-NN regression prediction values from the actual data values.
2. Each subplot must have appropriate axis labels, title, and legend.
3. The overall figure should have a title.
**2.3**. Report the $R^2$ score for the fitted models on both the training and test sets for each $k$ (reporting the values in tabular form is encouraged).
**2.4**. Plot, in a single figure, the $R^2$ values from the model on the training and test set as a function of $k$.
**Hints**:
1. Again, the figure must have axis labels and a legend.
2. Differentiate $R^2$ plots on the training and test set by color and/or marker.
3. Make sure the $k$ values are sorted before making your plot.
**2.5**. Discuss the results:
1. If $n$ is the number of observations in the training set, what can you say about a k-NN regression model that uses $k = n$?
2. What does an $R^2$ score of $0$ mean?
3. What would a negative $R^2$ score mean? Are any of the calculated $R^2$ you observe negative?
4. Do the training and test $R^2$ plots exhibit different trends? Describe.
5. What is the best value of $k$? How did you come to choose this value? How do the corresponding training/test set $R^2$ values compare?
6. Use the plots of the predictions (in 2.2) to justify why your choice of the best $k$ makes sense (**Hint**: think Goldilocks).
### Answers
**2.1 Choose `TimeMin` as your feature variable and `PickupCount` as your response variable. Create a dictionary ...**
```
# your code here
# define k values
k_values = [1, 10, 75, 250, 500, 750, 1000]
# build a dictionary KNN models
KNNModels = {k: KNeighborsRegressor(n_neighbors=k) for k in k_values}
# fit each KNN model
for k_value in KNNModels:
KNNModels[k_value].fit(train_data[['TimeMin']], train_data[['PickupCount']])
```
**2.2 For each $k$ on the training set, overlay a scatter plot ...**
```
# your code here
# Generate predictions
knn_predicted_pickups_train = {k: KNNModels[k].predict(train_data[['TimeMin']]) for k in KNNModels}
knn_predicted_pickups_test = {k: KNNModels[k].predict(test_data[['TimeMin']]) for k in KNNModels}
# your code here
# Preferred to use a function if the process is identical and repeated with varying inputs
# Try to use functions in your homeworks to make things easier for yourself and more replicable
# Function to plot predicted vs actual for a given k and dataset
def plot_knn_prediction(ax, dataset, predictions, k, dataset_name= "Training"):
# scatter plot predictions
ax.plot(dataset['TimeMin'], predictions, '*', label='Predicted')
# scatter plot actual
ax.plot(dataset['TimeMin'], dataset['PickupCount'], '.', alpha=0.2, label='Actual')
# Set labels
ax.set_title("$k = {}$ on {} Set".format(str(k), dataset_name))
ax.set_xlabel('Time of Day in Minutes')
ax.set_ylabel('Pickup Count')
ax.legend()
# Plot predictions vs actual
# your code here
# Notice that nrows is set to the variable size. This makes the code more readable and adaptable
fig, axes = plt.subplots(nrows=len(k_values), ncols=2, figsize=(16,28))
fig.suptitle('Predictions vs Actuals', fontsize=14)
for i, k in enumerate(k_values):
plot_knn_prediction(axes[i][0], train_data, knn_predicted_pickups_train[k], k, "Training")
plot_knn_prediction(axes[i][1], test_data, knn_predicted_pickups_test[k], k, "Test")
fig.tight_layout(rect=[0,0.03,1,0.98])
```
**2.3 Report the $R^2$ score for the fitted models ...**
```
# your code here
knn_r2_train = {k : r2_score(train_data[['PickupCount']], knn_predicted_pickups_train[k]) for k in k_values}
knn_r2_test = { k : r2_score(test_data[['PickupCount']], knn_predicted_pickups_test[k]) for k in k_values}
# This format makes the display much more readable
knn_r2_df = pd.DataFrame(data = {"k" : tuple(knn_r2_train.keys()),
"Train R^2" : tuple(knn_r2_train.values()),
"Test R^2" : tuple(knn_r2_test.values())})
knn_r2_df
```
**2.4 Plot, in a single figure, the $R^2$ values from the model on the training and test set as a function of $k$**
```
# your code here
fig, axes = plt.subplots(figsize = (5,5))
axes.plot(knn_r2_df['k'], knn_r2_df['Train R^2'], 's-', label='Train $R^2$ Scores')
axes.plot(knn_r2_df['k'], knn_r2_df['Test R^2'], 's-', label='Test $R^2$ Scores')
axes.set_xlabel('k')
axes.set_ylabel('$R^2$ Scores')
# A generic title of this format (y vs x) is generally appropriate
axes.set_title("$R^2$ Scores vs k")
# Including a legend is very important
axes.legend();
```
**2.5**. Discuss the results:
1. If $n$ is the number of observations in the training set, what can you say about a k-NN regression model that uses $k = n$?
A k-NN regression model with $k = n$ is equivalent to a model that predicts the mean of the response variable over the entire training set for every query point.
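This equivalence can be checked directly with a hand-rolled k-NN regressor; a minimal sketch using made-up toy values (not the taxi data):

```python
# Toy check (values are made up, not from nyc_taxi.csv): with k = n, every
# query's neighbor set is the entire training set, so the k-NN prediction
# collapses to the training mean for any query point.
from statistics import mean

x_train = [0, 100, 200, 300, 400]   # times in minutes (toy values)
y_train = [5, 20, 35, 50, 30]       # pickup counts (toy values)
n = len(x_train)

def knn_predict(x_query, k):
    # average the responses of the k training points nearest to the query
    nearest = sorted(zip(x_train, y_train), key=lambda p: abs(p[0] - x_query))[:k]
    return mean(y for _, y in nearest)

# With k = n the prediction is the same constant everywhere: mean(y_train)
assert knn_predict(0, n) == knn_predict(720, n) == mean(y_train)
```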
2. What does an $R^2$ score of $0$ mean?
An $R^2$ score of 0 indicates a model that predicts no better than a constant prediction of the data's mean (and as such explains none of the variation around the mean). In k-NN regression, an example would be the model with $k = n$, or in this case $k = 1000$.
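This follows directly from the definition $R^2 = 1 - SSE/SST$; a quick hand computation on assumed toy values:

```python
# Toy computation (assumed values): predicting the mean for every observation
# makes SSE identical to SST, so R^2 = 1 - SSE/SST is exactly 0.
y_true = [5, 20, 35, 50, 30]
y_bar = sum(y_true) / len(y_true)

sse = sum((y - y_bar) ** 2 for y in y_true)   # error of the constant-mean model
sst = sum((y - y_bar) ** 2 for y in y_true)   # total variation around the mean
r2 = 1 - sse / sst
assert r2 == 0.0
```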
3. What would a negative $R^2$ score mean? Are any of the calculated $R^2$ you observe negative?
None of the calculated $R^2$ values on the training set are negative. We do see negative $R^2$ values for $k = 1$ and $k = 1000$ on the test set (although the test-set $R^2$ for $k = 1000$ is very close to 0). A negative $R^2$ value indicates a model making predictions less accurate than a constant prediction (for any configuration of features) of the mean of all response values. The highly negative $R^2$ for $k = 1$ on the test set means the predictive value of the 1-NN model is very poor: 1-NN is a worse model for our data than just predicting the average value (of the test set). For $k = 1000$, the difference between the observed test-set $R^2$ and 0 is due to stochasticity; 1000-NN has predictive power essentially equivalent to predicting the training-set average (in this particular case, 1000-NN is exactly the same model as using the average value of the training set as the prediction).
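Conversely, any model that fits worse than the constant-mean baseline pushes SSE above SST and $R^2$ below zero; a toy illustration with assumed values:

```python
# A constant prediction far from the mean has SSE > SST, hence negative R^2.
y_true = [5, 20, 35, 50, 30]
y_bar = sum(y_true) / len(y_true)
sst = sum((y - y_bar) ** 2 for y in y_true)

bad_constant = 100                                 # deliberately far from y_bar
sse = sum((y - bad_constant) ** 2 for y in y_true)
r2 = 1 - sse / sst
assert r2 < 0
```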
4. Do the training and test $R^2$ plots exhibit different trends? Describe.
The training and test plots of $R^2$ exhibit different trends, as for small $k$, the model overfits the data, so it achieves a very good $R^2$ on the training set and a very poor $R^2$ on the test data. At large $k$ values the model underfits. Although it performs equally well on the train and test data, it's not doing as well on either one as it did at a different value of $k$.
5. What is the best value of $k$? How did you come to choose this value? How do the corresponding training/test set $R^2$ values compare?
Based on test set $R^2$ scores, the best value of $k$ is 75 with a training set $R^2$ score of 0.445 and a test set score of 0.390. Note that *best* refers to performance on the test set, the set on which the model can be evaluated.
6. Use the plots of the predictions (in 2.2) to justify why your choice of the best $k$ makes sense (**Hint**: think Goldilocks).
A $k$ of 75 appears to be the most reasonable choice from these plots since the curve of fitted values describes the relationship in the scatter plots (both train and test) very well, but is not too jagged or jumpy (if $k$ is smaller) or too flattened out (if $k$ is larger). We are in the Goldilocks zone.
## <div class="exercise"> <b> Question 3 [25 pts] </b></div>
We next consider simple linear regression, which we know from lecture is a parametric approach for regression that assumes that the response variable has a linear relationship with the predictor. Use the `statsmodels` module for Linear Regression. This module has built-in functions to summarize the results of regression and to compute confidence intervals for estimated regression parameters.
**3.1**. Again choose `TimeMin` as your predictor and `PickupCount` as your response variable. Create an `OLS` class instance and use it to fit a Linear Regression model on the training set (`train_data`). Store your fitted model in the variable `OLSModel`.
**3.2**. Create a plot just like you did in 2.2 (but with fewer subplots): plot both the observed values and the predictions from `OLSModel` on the training and test set. You should have one figure with two subplots, one subplot for the training set and one for the test set.
**Hints**:
1. Each subplot should use different color and/or markers to distinguish Linear Regression prediction values from that of the actual data values.
2. Each subplot must have appropriate axis labels, title, and legend.
3. The overall figure should have a title.
**3.3**. Report the $R^2$ score for the fitted model on both the training and test sets.
**3.4**. Report the estimates for the slope and intercept for the fitted linear model.
**3.5**. Report the $95\%$ confidence intervals (CIs) for the slope and intercept.
**3.6**. Discuss the results:
1. How does the test $R^2$ score compare with the best test $R^2$ value obtained with k-NN regression? Describe why this is not surprising for these data.
2. What does the sign of the slope of the fitted linear model convey about the data?
3. Interpret the $95\%$ confidence intervals from 3.5. Based on these CIs is there evidence to suggest that the number of taxi pickups has a significant linear relationship with time of day? How do you know?
4. How would $99\%$ confidence intervals for the slope and intercept compare to the $95\%$ confidence intervals (in terms of midpoint and width)? Briefly explain your answer.
5. Based on the data structure, what restriction on the model would you put at the endpoints (at $x\approx0$ and $x\approx1440$)? What does this say about the appropriateness of a linear model?
### Answers
**3.1 Again choose `TimeMin` as your predictor and `PickupCount` as your response variable. Create a `OLS` class instance ...**
```
# your code here
# Look at these variables on their own - they format for both constant term and linear predictor
train_data_augmented = sm.add_constant(train_data['TimeMin'])
test_data_augmented = sm.add_constant(test_data['TimeMin'])
OLSModel = OLS(train_data['PickupCount'].values, train_data_augmented).fit()
# type(train_data['TimeMin'])
```
**3.2 Re-create your plot from 2.2 using the predictions from `OLSModel` on the training and test set ...**
```
# your code here
# OLS Linear Regression model training predictions
ols_predicted_pickups_train = OLSModel.predict(train_data_augmented)
# OLS Linear Regression model test predictions
ols_predicted_pickups_test = OLSModel.predict(test_data_augmented)
# your code here
# Function to plot predicted vs actual for a given k and dataset
def plot_ols_prediction(ax, dataset, predictions, dataset_name= "Training"):
# scatter plot predictions
ax.plot(dataset['TimeMin'], predictions, '*', label='Predicted')
# scatter plot actual
ax.plot(dataset['TimeMin'], dataset['PickupCount'], '.', alpha=0.2, label='Actual')
# Set labels
ax.set_title("{} Set".format(dataset_name))
ax.set_xlabel('Time of Day in Minutes')
ax.set_ylabel('Pickup Count')
ax.legend()
# your code here
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20,8))
fig.suptitle('Predictions vs Actuals', fontsize=14)
plot_ols_prediction(axes[0], train_data, ols_predicted_pickups_train, "Training")
plot_ols_prediction(axes[1], test_data, ols_predicted_pickups_test, "Test")
```
**3.3 Report the $R^2$ score for the fitted model on both the training and test sets.**
```
# your code here
r2_score_train = r2_score(train_data[['PickupCount']].values, ols_predicted_pickups_train)
r2_score_test = r2_score(test_data[['PickupCount']].values, ols_predicted_pickups_test)
print("R^2 score for training set: {:.4}".format(r2_score_train))
print("R^2 score for test set: {:.4}".format(r2_score_test))
```
**3.4 Report the slope and intercept values for the fitted linear model.**
```
## show summary
# your code here
OLSModel.summary()
# your code here
# Use label-based indexing on the params Series ('const' and 'TimeMin'),
# which is more robust than positional access
ols_intercept = OLSModel.params['const']
ols_slope = OLSModel.params['TimeMin']
print("Intercept: {:.3}".format(ols_intercept))
print("Slope: {:.3}".format(ols_slope))
```
**3.5 Report the $95\%$ confidence interval for the slope and intercept.**
```
# your code here
conf_int = OLSModel.conf_int()
# Doing it by hand would be something like 16.7506 +/- (1.96 * 1.058), and same process for slope
print("95% confidence interval for intercept: [{:.4}, {:.4}]".format(conf_int.iloc[0, 0], conf_int.iloc[0, 1]))
print("95% confidence interval for slope: [{:.4}, {:.4}]".format(conf_int.iloc[1, 0], conf_int.iloc[1, 1]))
```
**3.6 Discuss the results:**
*your answer here*
1. How does the test $R^2$ score compare with the best test $R^2$ value obtained with k-NN regression? Describe why this is not surprising for these data.
The test $R^2$ is lower for Linear Regression than for k-NN regression for all but the most suboptimal values of $k$ ($k \approx 0$ or $k \approx n$). This isn't surprising: the scatterplot of the data shows a curved pattern rather than a straight line, so a single global line is a poor fit for this particular choice of data and feature space.
2. What does the sign of the slope of the fitted linear model convey about the data?
The positive slope implies that the number of pickups increases throughout the day, on average. The slope is positive for all values within the confidence interval.
3. Interpret the $95\%$ confidence intervals from 3.5. Based on these CIs is there evidence to suggest that the number of taxi pickups has a significant linear relationship with time of day? How do you know?
As mentioned in the previous part, the confidence interval for the slope contains only positive values, which suggests that 'no association' (a slope of zero) is not plausible. The estimates for slope and intercept are also reasonably precise. The intercept is estimated to fall between roughly 14 and 18 on data that ranges from 0-100, which is reasonably tight though certainly far from perfect. The slope is estimated very precisely, between .020 and .025. In practical terms, the lower bound would predict 29 pickups (plus the intercept) at 11:59 pm and the upper bound would predict 36 pickups (plus the intercept) at 11:59 pm, which is a fairly narrow range. Our uncertainty in the value of the slope is small enough to only moderately impact our overall uncertainty, even at the extremes of the data.
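The 29 and 36 pickup figures above are just the slope bounds multiplied out at the last minute of the day; a quick arithmetic check (the bounds .020 and .025 are the approximate CI endpoints quoted above):

```python
# Slope-term contribution at 11:59 pm under each approximate CI endpoint.
minutes = 23 * 60 + 59            # 11:59 pm = 1439 minutes
low = 0.020 * minutes             # lower CI bound for the slope
high = 0.025 * minutes            # upper CI bound for the slope
assert round(low) == 29 and round(high) == 36
```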
4. How would $99\%$ confidence intervals for the slope and intercept compare to the $95\%$ confidence intervals (in terms of midpoint and width)? Briefly explain your answer.
We'd expect a 99% confidence interval to be wider as it should allow for an even wider possibility of values that are believable, or consistent with the data. With increased confidence level, even more values become plausible so the interval is lengthened on both sides. The 99\% CI would be centered at the same place as the 95\% CI.<br>
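This can be sketched numerically using the intercept estimate and standard error quoted earlier (16.7506 ± z · 1.058); the normal critical values here are an approximation to the exact t values statsmodels uses:

```python
# Raising the confidence level changes only the critical value z, so the
# interval keeps its midpoint and grows in width.
from statistics import NormalDist

est, se = 16.7506, 1.058           # intercept estimate and SE quoted earlier
z95 = NormalDist().inv_cdf(0.975)  # ~1.96
z99 = NormalDist().inv_cdf(0.995)  # ~2.576

ci95 = (est - z95 * se, est + z95 * se)
ci99 = (est - z99 * se, est + z99 * se)
assert abs(sum(ci95) / 2 - sum(ci99) / 2) < 1e-9     # same midpoint
assert (ci99[1] - ci99[0]) > (ci95[1] - ci95[0])     # wider interval
```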
5. Based on the data structure, what restriction on the model would you put at the endpoints (at $x\approx0$ and $x\approx1440$)? What does this say about the appropriateness of a linear model?
Looking at $x=0$ and $x=1440$, $y$ values should be the same because it’s only a minute difference in time. That’s not the case for the predicted $\hat{y}$ though. Since the line should be 'anchored' at the same place on the ends of the graph, only a line with zero slope is consistent with this situation.
# <div class="theme"> Outliers </div>
You may recall from lectures that OLS Linear Regression can be susceptible to outliers in the data. We're going to look at a dataset that includes some outliers and get a sense for how that affects modeling data with Linear Regression. **Note, this is an open-ended question, there is not one correct solution (or one correct definition of an outlier).**
## <div class="exercise"><b> Question 4 [25 pts] </b></div>
**4.1**. We've provided you with two files `outliers_train.csv` and `outliers_test.csv` corresponding to training set and test set data. What does a visual inspection of training set tell you about the existence of outliers in the data?
**4.2**. Choose `X` as your feature variable and `Y` as your response variable. Use `statsmodel` to create a Linear Regression model on the training set data. Store your model in the variable `OutlierOLSModel`.
**4.3**. You're given the knowledge ahead of time that there are 3 outliers in the training set data. The test set data doesn't have any outliers. You want to remove the 3 outliers in order to get the optimal intercept and slope. In the case that you're sure of the existence and number (3) of outliers ahead of time, one potential brute force method to outlier detection might be to find the best Linear Regression model on all possible subsets of the training set data with 3 points removed. Using this method, how many times will you have to calculate the Linear Regression coefficients on the training data?
**4.4** In CS109 we're strong believers that creating heuristic models is a great way to build intuition. In that spirit, construct an approximate algorithm to find the 3 outlier candidates in the training data by taking advantage of the Linear Regression residuals. Place your algorithm in the function `find_outliers_simple`. It should take the parameters `dataset_x` and `dataset_y`, and `num_outliers` representing your features, response variable values (make sure your response variable is stored as a numpy column vector), and the number of outliers to remove. The return value should be a list `outlier_indices` representing the indices of the `num_outliers` outliers in the original datasets you passed in. Run your algorithm and remove the outliers that your algorithm identified, use `statsmodels` to create a Linear Regression model on the remaining training set data, and store your model in the variable `OutlierFreeSimpleModel`.
**4.5** Create a figure with two subplots. The first is a scatterplot where the color of the points denotes the outliers from the non-outliers in the training set, and include two regression lines on this scatterplot: one fitted with the outliers included and one fitted with the outlier removed (all on the training set). The second plot should include a scatterplot of points from the test set with the same two regression lines fitted on the training set: with and without outliers. Visually which model fits the test set data more closely?
**4.6**. Calculate the $R^2$ score for the `OutlierOLSModel` and the `OutlierFreeSimpleModel` on the test set data. Which model produces a better $R^2$ score?
**4.7**. One potential problem with the brute force outlier detection approach in 4.3 and the heuristic algorithm you constructed 4.4 is that they assume prior knowledge of the number of outliers. In general you can't expect to know ahead of time the number of outliers in your dataset. Propose how you would alter and/or use the algorithm you constructed in 4.4 to create a more general heuristic (i.e. one which doesn't presuppose the number of outliers) for finding outliers in your dataset.
**Hints**:
1. Should outliers be removed one at a time or in batches?
2. What metric would you use and how would you use it to determine how many outliers to consider removing?
### Answers
**4.1 We've provided you with two files `outliers_train.csv` and `outliers_test.csv` corresponding to training set and test set data. What does a visual inspection of training set tell you about the existence of outliers in the data?**
```
# read the data
# your code here
outliers_train = pd.read_csv("data/outliers_train.csv")
outliers_test = pd.read_csv("data/outliers_test.csv")
outliers_train.describe()
# your code here
outliers_train.head()
# your code here
outliers_test.describe()
# your code here
outliers_test.head()
# scatter plot
# your code here
plt.scatter(outliers_train["X"],outliers_train["Y"]);
```
*your answer here*
The dataset seems to have a roughly linear trend with 3 really clear outliers: 2 in the upper-left and one in the lower-right of the scatterplot (they do not follow the pattern of the rest of the points).
**4.2 Choose `X` as your feature variable and `Y` as your response variable. Use `statsmodel` to create ...**
```
# your code here
# Reshape with -1 makes numpy figure out the correct number of rows
outliers_orig_train_X = outliers_train["X"].values.reshape(-1,1)
outliers_orig_train_Y = outliers_train["Y"].values.reshape(-1,1)
outliers_train_X = sm.add_constant(outliers_orig_train_X)
outliers_train_Y = outliers_orig_train_Y
outliers_orig_test_X = outliers_test["X"].values.reshape(-1,1)
outliers_orig_test_Y = outliers_test["Y"].values.reshape(-1,1)
outliers_test_X = sm.add_constant(outliers_orig_test_X)
outliers_test_Y = outliers_orig_test_Y
OutlierOLSModel = sm.OLS(outliers_train_Y, outliers_train_X).fit()
OutlierOLSModel.summary()
```
**4.3 One potential brute force method to outlier detection might be to find the best Linear Regression model on all possible subsets of the training set data with 3 points removed. Using this method, how many times will you have to calculate the Linear Regression coefficients on the training data?**
*your answer here*
There are 53 total observations in the training set. That means there are $\binom{53}{3}$ or 23,426 subsets in the training set with 3 points removed. We'll need to compute a Linear Regression model on each one and find the best one (presumably the one with the highest $R^2$ value on test).
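The count can be verified directly with `math.comb`:

```python
import math
# Number of ways to remove 3 of the 53 training observations
assert math.comb(53, 3) == 23426
```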
**4.4 Construct an approximate algorithm to find the 3 outlier candidates in the training data by taking advantage of the Linear Regression residuals ...**
```
def find_outliers_simple(dataset_x, dataset_y, num_outliers):
# your code here
# calculate absolute value residuals
y_pred = sm.OLS(dataset_y, dataset_x).fit().predict().reshape(-1,1)
residuals = np.abs(dataset_y - y_pred)
# use argsort to order the indices by absolute value of residuals
    # take the indices of the num_outliers largest residuals
outlier_indices = np.argsort(residuals, axis=0).flatten()[-num_outliers:]
return list(outlier_indices)
# get outliers
# your code here
simple_outlier_indices = find_outliers_simple(outliers_train_X, outliers_orig_train_Y, 3)
print("Outlier indices: {} ".format(simple_outlier_indices))
# get outliers
simple_outliers_x = outliers_orig_train_X[simple_outlier_indices]
simple_outliers_y= outliers_orig_train_Y[simple_outlier_indices]
# new_dataset_indices are the complements of our outlier indices in the original set
new_dataset_indices = list(set(range(len(outliers_orig_train_X))) - set(simple_outlier_indices))
new_dataset_indices.sort()
# get outliers free dataset
simple_outliers_free_x = outliers_train_X[new_dataset_indices]
simple_outliers_free_y = outliers_train_Y[new_dataset_indices]
print("Outlier X values: {} ".format(simple_outliers_x))
print("Outlier Y values: {} ".format(simple_outliers_y))
# calculate outlier model
# your code here
OutlierFreeSimpleModel = sm.OLS(simple_outliers_free_y, simple_outliers_free_x).fit()
OutlierFreeSimpleModel.summary()
```
**4.5 Create a figure with two subplots...**
```
# plot
# your code here
fig, axs = plt.subplots(1,2, figsize=(14,6))
ticks = np.linspace(-2.5,2.5, 100)
regression_line_no = ticks*OutlierOLSModel.params[1] + OutlierOLSModel.params[0]
regression_line = ticks*OutlierFreeSimpleModel.params[1] + OutlierFreeSimpleModel.params[0]
axs[0].scatter(outliers_train["X"],outliers_train["Y"], label="actual values")
axs[0].scatter(simple_outliers_x, simple_outliers_y, color='orange', marker='o', s=52, label="outliers")
axs[0].plot(ticks, regression_line_no, color='orange', label="model prediction with outliers")
axs[0].plot(ticks, regression_line, label="model prediction without outliers")
axs[0].set_title('Comparison of model with and without outliers in train set')
axs[0].set_xlabel("X")
axs[0].set_ylabel("Y")
axs[0].set_ylim((-500,500))
axs[0].legend()
axs[1].scatter(outliers_test["X"],outliers_test["Y"], label="actual values")
axs[1].plot(ticks, regression_line_no, color='orange', label="model prediction with outliers")
axs[1].plot(ticks, regression_line, label="model prediction without outliers")
axs[1].set_title('Comparison of model with and without outliers in test set')
axs[1].set_xlabel("X")
axs[1].set_ylabel("Y")
axs[1].set_ylim((-500,500))
axs[1].legend();
```
*your answer here*
The model with outliers removed fits the test data more closely: the orange line looks to have some systematic bias in predictions at very low or very high values of X.
**4.6 Calculate the $R^2$ score for the `OutlierOLSModel` and the `OutlierFreeSimpleModel` on the test set data. Which model produces a better $R^2$ score?**
```
# your code here
# outliers_test_X already includes the constant column from sm.add_constant
r2_with_outliers = r2_score(outliers_test_Y, OutlierOLSModel.predict(outliers_test_X))
r2_wo_outliers = r2_score(outliers_test_Y, OutlierFreeSimpleModel.predict(outliers_test_X))
print("R^2 score with outliers: {:.4}".format(r2_with_outliers))
print("R^2 score with outliers removed: {:.4}".format(r2_wo_outliers))
```
The version with outliers removed is better.
**4.7 Propose how you would alter and/or use the algorithm you constructed in 4.4 to create a more general heuristic (i.e. one which doesn't presuppose the number of outliers) for finding outliers in your dataset.**
Find outliers one at a time: remove the single point whose removal improves the test-set $R^2$ the most, and keep removing points until the improvement to test $R^2$ is negligible (less than some tolerance threshold). If you compared $R^2$ on the train set instead, you could remove a genuine outlier and still see a large, misleading swing in $R^2$ simply because that point was the main driving force in MSM (the variance explained by the model) or MST (the original variance ignoring the model). It would also be wise to require a minimum number of points to remain (no more than half the observations should be considered outliers).
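A minimal sketch of that heuristic, hand-rolled so it is self-contained (the function names, the tolerance value, and the toy data are all illustrative, not part of the assignment's API):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x, computed by hand
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    return y_bar - b * x_bar, b

def r_squared(xs, ys, a, b):
    y_bar = sum(ys) / len(ys)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - y_bar) ** 2 for y in ys)
    return 1 - sse / sst

def remove_outliers_iteratively(xs, ys, test_x, test_y, tol=0.005):
    xs, ys = list(xs), list(ys)
    min_keep = len(xs) // 2                     # never drop more than half
    a, b = fit_line(xs, ys)
    score = r_squared(test_x, test_y, a, b)
    while len(xs) > min_keep:
        # candidate: the training point with the largest absolute residual
        i = max(range(len(xs)), key=lambda j: abs(ys[j] - (a + b * xs[j])))
        trial_x, trial_y = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a2, b2 = fit_line(trial_x, trial_y)
        new_score = r_squared(test_x, test_y, a2, b2)
        if new_score - score <= tol:            # negligible improvement: stop
            break
        xs, ys, a, b, score = trial_x, trial_y, a2, b2, new_score
    return xs, ys

# Toy data: a clean y = 2x line with one planted outlier at the end
train_x = [0, 1, 2, 3, 4, 5, 6, 7]
train_y = [0, 2, 4, 6, 8, 10, 12, 300]
test_x, test_y = [0.5, 1.5, 2.5, 3.5], [1, 3, 5, 7]
clean_x, clean_y = remove_outliers_iteratively(train_x, train_y, test_x, test_y)
assert 300 not in clean_y            # the planted outlier was removed
```

The stopping rule is evaluated on held-out data precisely so that removing a point is only "rewarded" when it genuinely improves out-of-sample fit.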