# Classification & Regression with Trees
**Aim**: The aim of this notebook is to provide code-based examples of implementing tree-based algorithms using scikit-learn.
## Table of contents
1. Decision Tree Classifier
2. Random Forest Classifier
3. AdaBoost Classifier
4. Decision Tree Regressor
5. Random Forest Regressor
6. Gradient Boosted Trees Regressor
7. Ensemble Classifier
## Package Requirements
```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import GridSearchCV
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn versions
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
from sklearn import tree
```
## Decision Tree Classifier
**Reading in the dataset**
```
df = pd.read_csv('fraud_prediction.csv')
df = df.drop(['Unnamed: 0'], axis = 1)
```
**Splitting the data into training & test sets**
```
#Creating the features
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42,
stratify = target)
```
**Building the initial decision tree classifier**
```
#Initializing the DT classifier
dt = DecisionTreeClassifier(criterion = 'gini', random_state = 50)
#Fitting on the training data
dt.fit(X_train, y_train)
#Testing accuracy on the test data
dt.score(X_test, y_test)
```
**Hyper-parameter Optimization**
```
#Creating a grid of different hyper-parameters
grid_params = {
'max_depth': [1,2,3,4,5,6],
'min_samples_leaf': [0.02,0.04, 0.06, 0.08]
}
#Building a 10 fold Cross Validated GridSearchCV object
grid_object = GridSearchCV(estimator = dt, param_grid = grid_params, scoring = 'accuracy', cv = 10, n_jobs = -1)
#Fitting the grid to the training data
grid_object.fit(X_train, y_train)
#Extracting the best parameters
grid_object.best_params_
#Extracting the best model
dt = grid_object.best_estimator_
```
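Beyond ``best_params_`` and ``best_estimator_``, the fitted grid object exposes the full cross-validation results, which is useful for seeing how close the runner-up settings were. A sketch on a synthetic toy dataset (illustrative only, not the fraud data):

```python
#Inspecting GridSearchCV results on synthetic data (sketch)
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=42)
grid_object = GridSearchCV(DecisionTreeClassifier(random_state=50),
                           {'max_depth': [1, 2, 3]}, scoring='accuracy', cv=3)
grid_object.fit(X, y)

#cv_results_ holds one row per hyper-parameter combination
results = pd.DataFrame(grid_object.cv_results_)
print(results[['param_max_depth', 'mean_test_score', 'rank_test_score']])
print(grid_object.best_score_)  # mean CV accuracy of the best combination
```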
**Visualizing the decision tree**
```
#Reading in the data
df = pd.read_csv('fraud_prediction.csv')
df = df.drop(['Unnamed: 0'], axis = 1)
#Creating the features
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
#Initializing the DT classifier
dt = DecisionTreeClassifier(criterion = 'gini', random_state = 50, max_depth= 5)
#Fitting the classifier on the data
dt.fit(features, target)
#Extracting the feature names
feature_names = df.drop('isFraud', axis = 1)
#Creating the tree visualization
data = tree.export_graphviz(dt, out_file=None, feature_names= feature_names.columns.values, proportion= True)
graph = pydotplus.graph_from_dot_data(data)
# Show graph
Image(graph.create_png())
```
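If installing Graphviz and pydotplus is inconvenient, newer scikit-learn versions (0.21+) ship ``sklearn.tree.plot_tree``, which renders with Matplotlib alone. A sketch on the Iris toy dataset (illustrative, not the fraud data):

```python
#Graphviz-free tree visualization with sklearn.tree.plot_tree (sketch)
import matplotlib
matplotlib.use('Agg')  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
dt = DecisionTreeClassifier(max_depth=3, random_state=50).fit(iris.data, iris.target)

fig, ax = plt.subplots(figsize=(12, 6))
plot_tree(dt, feature_names=iris.feature_names, filled=True, ax=ax)
fig.savefig('tree.png')
```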
## Random Forest Classifier
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
```
**Splitting the data into training and test sets**
```
#Creating the features
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42,
stratify = target)
#Initializing a Random Forest Classifier with default parameters
rf_classifier = RandomForestClassifier(random_state = 50)
#Fitting the classifier on the training data
rf_classifier.fit(X_train, y_train)
#Extracting the scores
rf_classifier.score(X_test, y_test)
```
**Hyper-parameter tuning**
```
#Creating a grid of different hyper-parameters
grid_params = {
'n_estimators': [300,400,500],
'max_depth': [1,2,3],
'min_samples_leaf': [0.05, 0.1, 0.2]
}
#Building a 3 fold Cross-Validated GridSearchCV object
grid_object = GridSearchCV(estimator = rf_classifier, param_grid = grid_params, scoring = 'accuracy',
cv = 3, n_jobs = -1)
#Fitting the grid to the training data
grid_object.fit(X_train, y_train)
#Extracting the best parameters
grid_object.best_params_
#Extracting the best model
rf_best = grid_object.best_estimator_
```
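A fitted random forest also exposes ``feature_importances_``, which helps show which columns drive the predictions. A sketch on synthetic data (with the fraud dataset you would pass the tuned model and the column names of ``df.drop('isFraud', axis=1)``):

```python
#Ranking features by importance with a fitted random forest (sketch on synthetic data)
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=42)
rf = RandomForestClassifier(n_estimators=50, random_state=50).fit(X, y)

#Importances sum to 1; higher means the feature contributed more to the splits
importances = pd.Series(rf.feature_importances_,
                        index=[f'feature_{i}' for i in range(X.shape[1])])
print(importances.sort_values(ascending=False))
```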
## AdaBoost Classifier
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
```
**Splitting the data into training & testing sets**
```
#Creating the features
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42,
stratify = target)
```
**Building the AdaBoost Classifier**
```
#Initialize a tree (Decision Tree with max depth = 1)
dt = DecisionTreeClassifier(max_depth=1, random_state = 42)
#Initialize an AdaBoost classifier with the tree as the base estimator
ada_boost = AdaBoostClassifier(base_estimator = dt, n_estimators=100)
#Fitting the AdaBoost classifier to the training set
ada_boost.fit(X_train, y_train)
#Extracting the accuracy scores from the classifier
ada_boost.score(X_test, y_test)
```
**Hyper-parameter tuning**
```
#Creating a grid of hyper-parameters
grid_params = {
'n_estimators': [100,200,300]
}
#Building a 3 fold CV GridSearchCV object
grid_object = GridSearchCV(estimator = ada_boost, param_grid = grid_params, scoring = 'accuracy', cv = 3, n_jobs = -1)
#Fitting the grid to the training data
grid_object.fit(X_train, y_train)
#Extracting the best parameters
grid_object.best_params_
#Extracting the best model
ada_best = grid_object.best_estimator_
```
## Decision Tree Regressor
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
#Creating the features
features = df.drop('amount', axis = 1).values
target = df['amount'].values
#Splitting the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42)
#Building the decision tree regressor
dt_reg = DecisionTreeRegressor(max_depth = 10, min_samples_leaf = 0.2, random_state= 50)
#Fitting the tree to the training data
dt_reg.fit(X_train, y_train)
```
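For regressors, ``.score()`` returns the R² coefficient rather than accuracy; the mean absolute error is often easier to interpret because it is in the target's own units. A sketch on synthetic data (illustrative only):

```python
#Scoring a fitted decision tree regressor with R^2 and MAE (sketch on synthetic data)
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

dt_reg = DecisionTreeRegressor(max_depth=10, min_samples_leaf=0.2, random_state=50)
dt_reg.fit(X_train, y_train)

print('R^2:', dt_reg.score(X_test, y_test))
print('MAE:', mean_absolute_error(y_test, dt_reg.predict(X_test)))
```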
**Visualizing the decision tree**
```
#Extracting the feature names
feature_names = df.drop('amount', axis = 1)
#Creating the tree visualization
data = tree.export_graphviz(dt_reg, out_file=None, feature_names= feature_names.columns.values, proportion= True)
graph = pydotplus.graph_from_dot_data(data)
# Show graph
Image(graph.create_png())
```
## Random Forest Regressor
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
#Creating the features
features = df.drop('amount', axis = 1).values
target = df['amount'].values
#Splitting the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42)
#Initializing a Random Forest Regressor
rf_reg = RandomForestRegressor(max_depth = 10, min_samples_leaf = 0.2, random_state = 50)
#Fitting the Regressor on the training data
rf_reg.fit(X_train, y_train)
```
## Gradient Boosted Trees for regression
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
#Creating the features
features = df.drop('amount', axis = 1).values
target = df['amount'].values
#Splitting the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42)
```
**Building the Gradient Boosted Regressor**
```
#Initializing a Gradient Boosting Regressor
gb_reg = GradientBoostingRegressor(max_depth = 5, n_estimators = 100, learning_rate = 0.1, random_state = 50)
#Fitting the regressor on the training data
gb_reg.fit(X_train, y_train)
#Re-creating the classification features and the train/test split (these are reused by the ensemble classifier section below)
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42)
```
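One convenient property of gradient boosting is ``staged_predict``, which yields predictions after every boosting stage, so you can see how the test error evolves with the number of trees. A sketch on synthetic data (illustrative only):

```python
#Tracking test error per boosting stage with staged_predict (sketch on synthetic data)
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

gb = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, random_state=50)
gb.fit(X_train, y_train)

#One MSE value per boosting stage
stage_errors = [mean_squared_error(y_test, pred)
                for pred in gb.staged_predict(X_test)]
print('best stage:', int(np.argmin(stage_errors)) + 1)
```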
## Ensemble Classifier
```
#Reading in the dataset
df = pd.read_csv('fraud_prediction.csv')
#Dropping the index
df = df.drop(['Unnamed: 0'], axis = 1)
#Creating the features & target
features = df.drop('isFraud', axis = 1).values
target = df['isFraud'].values
#Splitting the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size = 0.3, random_state = 42, stratify = target)
```
**Building the DT & RF classifier to include in the Voting Classifier**
```
#Initializing the DT classifier
dt = DecisionTreeClassifier(criterion = 'gini', random_state = 50)
#Fitting on the training data
dt.fit(X_train, y_train)
#Initializing a Random Forest Classifier with default parameters
rf_classifier = RandomForestClassifier(random_state = 50)
#Fitting the classifier on the training data
rf_classifier.fit(X_train, y_train)
#Creating a list of models
models = [('Decision Tree', dt), ('Random Forest', rf_classifier)]
#Initialize a voting classifier
voting_model = VotingClassifier(estimators = models)
#Fitting the model to the training data
voting_model.fit(X_train, y_train)
#Evaluating the accuracy on the test data
voting_model.score(X_test, y_test)
```
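By default, ``VotingClassifier`` uses hard (majority) voting. A sketch on synthetic data (illustrative, not the fraud data) of the ``voting='soft'`` variant, which averages predicted class probabilities instead and often helps when the base models produce well-calibrated probabilities:

```python
#Soft-voting ensemble sketch on synthetic data
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = [('Decision Tree', DecisionTreeClassifier(random_state=50)),
          ('Random Forest', RandomForestClassifier(random_state=50))]
#voting='soft' requires every base model to implement predict_proba
soft_voter = VotingClassifier(estimators=models, voting='soft')
soft_voter.fit(X_train, y_train)
print(soft_voter.score(X_test, y_test))
```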
# Import the Required Libraries
```
import pandas as pd
import numpy as np
import ast
from collections import defaultdict, OrderedDict
import matplotlib.pyplot as plt
pd.set_option('display.max_colwidth', 50)
```
# Import the Dataset
```
dataset = pd.read_csv("./jupyterTestFrame.csv")
#Full dataset
#dataset = pd.read_csv("./frame_v3.csv")
# Show the first 5 rows of the dataset
dataset.head() # Alternative: print(dataset.head())
```
# General Statistics
### Size of the dataset (number of rows)
```
len(dataset.index)
```
### Unique entries per column
```
plotdata = [len(dataset.patent_application_id.unique()),
len(dataset.patent_citation_id.unique()),
len(dataset.original_cited_patent_dnum.unique()),
len(dataset.original_cited_patent_docNumber.unique()),
# len(dataset.original_cited_patent_country.unique()), # all identical; note: this column is not very informative, since exact enumerations necessarily repeat
# len(dataset.application_claim_text.unique()), # almost all unique; note: this column is not very informative, since exact enumerations necessarily repeat
len(dataset.application_claim_number.unique()),
len(dataset.extracted_paragraph_column_of_citation.unique()),
len(dataset.actual_used_patent_dnum_application_number_for_paragraph_extraction.unique()),
len(dataset.novelty_reducing_paragraphs.unique())] # Note: this column is not very informative, since exact enumerations necessarily repeat
# all_labels = ['patent_application_id', 'patent_citation_id', 'original_cited_patent_dnum','original_cited_patent_docNumber','original_cited_patent_country','application_claim_text','application_claim_number','extracted_paragraph_column_of_citation','actual_used_patent_dnum_application_number_for_paragraph_extraction','novelty_reducing_paragraphs']
labels = ['patent_application_id', 'patent_citation_id', 'original_cited_patent_dnum','original_cited_patent_docNumber','application_claim_number','extracted_paragraph_column_of_citation','actual_used_patent_dnum_application_number_for_paragraph_extraction','novelty_reducing_paragraphs']
data = pd.DataFrame(plotdata, index=labels, columns=['Unique Values'])
ax = data.plot(kind="bar",use_index=True, rot=90, xticks=range(0,max(plotdata), int(max(plotdata)/10)),figsize=(15,5))
# Additionally annotate the bars with their respective values as labels
for i, label in enumerate(list(data.index)):
    count = data.loc[label]['Unique Values']
    ax.annotate(str(count), (i-0.09, count+1.5))
ax.plot(figsize=(10,50))
```
### Number of novelty-reducing paragraphs in 'novelty_reducing_paragraphs' vs. number of extracted paragraphs in 'extracted_paragraph_column_of_citation'
```
# Count the novelty-reducing paragraphs per list entry
counting_novelty_reducing_paragraphs=defaultdict(int)
for element in dataset.novelty_reducing_paragraphs.apply(ast.literal_eval):
    counting_novelty_reducing_paragraphs[len(element)] += 1
#print(counting_novelty_reducing_paragraphs)
sorted_counting_novelty_reducing_paragraphs=[(k,counting_novelty_reducing_paragraphs[k]) for k in sorted(counting_novelty_reducing_paragraphs)]
#print(sorted_counting_novelty_reducing_paragraphs)
# NOTE: removing an erroneous outlier value here, since it would wreck the bar chart
sorted_counting_novelty_reducing_paragraphs.remove((404520,6))
# Count the number of actually extracted paragraphs
counting_extracted_paragraph_column_of_citation=defaultdict(int)
for row in dataset.extracted_paragraph_column_of_citation:
    row_count = row.count("<p id")
    counting_extracted_paragraph_column_of_citation[row_count] += 1
#print(counting_extracted_paragraph_column_of_citation)
#print([(k, counting_extracted_paragraph_column_of_citation[k]) for k in sorted(counting_extracted_paragraph_column_of_citation, key=counting_extracted_paragraph_column_of_citation.get, reverse=False)])
sorted_counting_extracted_paragraph_column_of_citation=[(k,counting_extracted_paragraph_column_of_citation[k]) for k in sorted(counting_extracted_paragraph_column_of_citation)]
#print(sorted_counting_extracted_paragraph_column_of_citation)
######################## Plot 1 for novelty-reducing paragraphs ########################
labels = [i[0] for i in sorted_counting_novelty_reducing_paragraphs]
plotdata = [i[1] for i in sorted_counting_novelty_reducing_paragraphs]
labels2 = [i[0] for i in sorted_counting_extracted_paragraph_column_of_citation]
plotdata2 = [i[1] for i in sorted_counting_extracted_paragraph_column_of_citation]
data = pd.DataFrame(plotdata, index=labels, columns=['Novelty Reducing Paragraphs'])
#ax = data.plot(kind="bar",use_index=True, rot=90, xticks=range(0,max(plotdata), int(max(plotdata)/10)),figsize=(15,5))
data2 = pd.DataFrame(plotdata2, index=labels2, columns=['Actually Extracted Paragraphs'])
#ax2 = data.plot(kind="bar",use_index=True, rot=90, xticks=range(0,max(plotdata2), int(max(plotdata2)/10)),figsize=(15,5))
# Additionally annotate the bars with their respective values as labels
#for i, label in enumerate(list(data.index)):
#    count = data.loc[label]['Novelty Reducing Paragraphs']
#    ax.annotate(str(count), (i-1, count+1))
fig = plt.figure()
ax = fig.add_subplot(2,1,1)
data.plot(ax=ax,kind="bar",use_index=True, rot=90, xticks=range(0,max(plotdata), int(max(plotdata)/10)),figsize=(20,10))
#plt.title('A tale of 2 subplots')
#plt.ylabel('Damped oscillation')
ax2 = fig.add_subplot(2,1,2)
data2.plot(ax=ax2,kind="bar",use_index=True, rot=90, xticks=range(0,max(plotdata2), int(max(plotdata2)/10)),figsize=(20,10))
#plt.xlabel('time (s)')
#plt.ylabel('Undamped')
```
# Function Definition
```
def groupByQuery(eintrag, eintrag_spalte):
    return dataset.groupby(eintrag_spalte).get_group(eintrag)
```
# Examples of Data Exploration
## Return all entries for one patent application (or another entry)
#### Concretely: return all columns that belong to the patent application '2500486A120120919' (patent_application_id).
```
# groupByQuery(eintrag, eintrag_spalte) is a function that groups our dataset by the column whose name is
# passed in "eintrag_spalte". It then returns only those rows whose value in that column equals the variable "eintrag".
# An example:
result = groupByQuery("2500486A120120919", "patent_application_id")
print(result)
# With "result = ..." we store the result in an object that we can reuse,
# for example to print it with print(result).
# If we do not want all columns in our output, we can narrow down the result.
my_selected_columns = result[["patent_application_id","patent_citation_id","extracted_paragraph_column_of_citation"]]
print(my_selected_columns)
# This naturally also works on the original dataset
another_selected_columns = result[["patent_application_id","application_claim_text","extracted_paragraph_column_of_citation","novelty_reducing_paragraphs"]]
print(another_selected_columns)
# Since values such as the extracted paragraphs are often too long for a readable display, they are truncated.
# This can be turned off with the following option (None means no limit; -1 is deprecated in newer pandas)
pd.set_option('display.max_colwidth', None)
# If we now display the last output again:
print(another_selected_columns)
```
## Selecting rows by row number (index)
```
# Since this is still quite a lot, we now want only the first row of the result
print(another_selected_columns.iloc[0])
# Or, for example, the second through fourth rows using the iloc[start:end] syntax. Note that counting starts at 0
# and that rows up to end-1 are returned. So given rows 0,1,2,3,4, [1:4] returns the
# rows with index 1,2,3 (colloquially, the second through fourth rows).
auswahl = another_selected_columns.iloc[1:4]
# For a better overview, we switch the truncated view back on
pd.set_option('display.max_colwidth', 50)
# And display the output once more
print(auswahl)
# We can also search within individual cells and return the matching rows
# Syntax: dataobject[dataobject[column to search in].str.contains(search phrase)]
# Return all rows from our selection that contain a paragraph number 30 in the column 'novelty_reducing_paragraphs'
# as novelty-reducing
print(auswahl[auswahl['novelty_reducing_paragraphs'].str.contains("30")])
# Return all rows from our selection whose column 'application_claim_text' contains the text "to any"
pd.set_option('display.max_colwidth', None) # Full width for reading the contents
print(auswahl[auswahl['application_claim_text'].str.contains("to any")])
pd.set_option('display.max_colwidth', 50) # Reset the option
```
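One caveat with the `str.contains("30")` query above: it performs substring (or regex) matching, so it also matches "130" or "304". A small sketch on a toy frame (hypothetical values, not the real dataset) showing how a word-boundary pattern avoids such false positives:

```python
#Substring vs. word-boundary matching with str.contains (sketch on toy data)
import pandas as pd

toy = pd.DataFrame({'novelty_reducing_paragraphs': ["['30']", "['130']", "['4', '30']"]})
print(toy[toy['novelty_reducing_paragraphs'].str.contains("30")])       # matches all three rows
print(toy[toy['novelty_reducing_paragraphs'].str.contains(r"\b30\b")])  # only exact paragraph 30
```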
# Exporting to Other File Formats
### Now that we have made a potentially interesting selection, we want to export it to another output format in order to reuse or store it.
## CSV Export
```
# Syntax: objectName.to_csv(filepath+name, index=False)
# After running this cell, the file is created in the same directory as this notebook.
# No additional confirmation is printed.
# The truncated display only affects what is shown on screen; complete data is always written to files.
auswahl.to_csv("./data_out.csv",index=False)
```
## HTML Export
```
# Syntax: objectName.to_html(filepath+name, index=False)
auswahl.to_html('./data_out.html', index=False)
```
```
import numpy as np
import json
import matplotlib.pyplot as plt
from collections import OrderedDict
from pprint import pprint
import matplotlib
file_name='./rgbjpg/skeletons2D.txt'
file_mobilenet='./Données/rgb_transformed.txt'
text=open(file_name,'r')
with open(file_mobilenet) as f2:
    dataMobilenet = json.load(f2, object_pairs_hook=OrderedDict)
dataKinect={}
positions={}
body_Parts={2:'mShoulder',3:'Head',4:'lShoulder',5:'lElbow',6:'lWrist',8:'rShoulder',9:'rElbow',10:'rWrist',12:'lHip',13:'lKnee',14:'lAnkle',16:'rHip',17:'rKnee',18:'rAnkle'}
lines = text.readlines()
frame_number=1
for line in lines:
    list_line = line.split(' ')
    positions[str(frame_number)] = {}
    for bPart in body_Parts.keys():
        x = float(list_line[2*bPart])/1000
        y = -float(list_line[2*bPart+1])/1000
        positions[str(frame_number)][body_Parts[bPart]] = [0, x, y]
    frame_number += 1
dataKinect['positions']=positions
with open("./Données/rgbKinect.txt", 'w') as outfile:
    json.dump(dataKinect, outfile, sort_keys = True, indent = 4,
              ensure_ascii = False)
common_body_parts=['Head', 'lAnkle', 'lElbow', 'lHip', 'lKnee', 'lShoulder', 'lWrist', 'mShoulder', 'rAnkle', 'rElbow', 'rHip', 'rKnee', 'rShoulder', 'rWrist']
mobilenet_maze={'Head':['mShoulder'],'mShoulder':['rShoulder','lShoulder','Head'],'rShoulder':['mShoulder','rElbow'],
'rElbow':['rShoulder','rWrist'],'rWrist':['rElbow'],'lShoulder':['mShoulder','lElbow'],
'lElbow':['lShoulder','lWrist'],'lWrist':['lElbow'],'rHip':['mShoulder','rKnee'],
'rKnee':['rHip','rAnkle'],'rAnkle':['rKnee'],'lHip':['mShoulder','lKnee'],
'lKnee':['lHip','lAnkle'],'lAnkle':['lKnee']}
Kinect_maze={'Head':['mShoulder'], 'lAnkle':['lKnee'], 'lWrist':['lElbow'],
'lKnee':['lAnkle','lHip'], 'lShoulder':['lElbow','mShoulder'],
'lElbow':['lShoulder','lWrist'], 'lHip':['lKnee','mShoulder'],
'rAnkle':['rKnee'], 'rWrist':['rElbow'], 'rKnee':['rAnkle','rHip'],
'rShoulder':['rElbow','mShoulder'], 'rElbow':['rWrist','rShoulder'],
'rHip':['rKnee','mShoulder'], 'mShoulder':['Head','rShoulder','lShoulder']}
positions3=dataMobilenet['positions']
i=40
first_frame=positions[str(sorted([int(pos) for pos in list(positions.keys())])[i])]
mobilenet_pos=positions3[str(sorted([float(pos) for pos in list(positions3.keys())])[i])]
x1=[]
x3=[]
y1=[]
y3=[]
z1=[]
for bPart in common_body_parts:
    x1.append(first_frame[bPart][1])
    y1.append(first_frame[bPart][2])
    z1.append(first_frame[bPart][0])
for bPart in mobilenet_pos.keys():
    x3.append(mobilenet_pos[bPart][0])
    y3.append(mobilenet_pos[bPart][1])
#Kinect
plt.plot(x1,y1,'ro')
for point in Kinect_maze:
    connected_points = Kinect_maze[point]
    for p in connected_points:
        plt.plot([first_frame[p][1],first_frame[point][1]],[first_frame[p][2],first_frame[point][2]],color='red')
#Mobilenet
plt.plot(x3,y3,'bo')
for point in mobilenet_maze:
    if point in mobilenet_pos:
        connected_points = mobilenet_maze[point]
        for p in connected_points:
            if p in mobilenet_pos:
                plt.plot([mobilenet_pos[p][0],mobilenet_pos[point][0]],[mobilenet_pos[p][1],mobilenet_pos[point][1]],color='blue')
#plt.plot(mobilenet_pos['rWrist'][0],mobilenet_pos['rWrist'][1],'go')
#plt.plot(first_frame['rWrist'][1],first_frame['rWrist'][2],'bo')
#plt.plot([mobilenet_pos['lWrist'][0],first_frame['lWrist'][1]],[mobilenet_pos['lWrist'][1],first_frame['lWrist'][2]])
plt.show()
positions_Kinect=dataKinect['positions']
positions_Mobilenet=dataMobilenet['positions']
#Mapping the Mobilenet body parts to the Kinect ones (left and right are mirrored)
body_parts_Mobilenet={"Head":"Head","lAnkle":"rAnkle",'lElbow':'rElbow', 'lHip':'rHip', 'lKnee':'rKnee',
'lShoulder':'rShoulder', 'lWrist':'rWrist', 'mShoulder':'mShoulder', 'rAnkle':'lAnkle',
'rElbow':'lElbow', 'rHip':'lHip', 'rKnee':'lKnee', 'rShoulder':'lShoulder',
'rWrist':'lWrist'}
#Body Parts of the Kinect
bPartsKinect=list(list(positions_Kinect.values())[0].keys())
common_body_parts=['Head', 'lAnkle', 'lElbow', 'lHip', 'lKnee', 'lShoulder', 'lWrist', 'mShoulder', 'rAnkle',
'rElbow', 'rHip', 'rKnee', 'rShoulder', 'rWrist']
#Filling the variance dictionary: for each body part and each time step, the variance between the skeletons
Variances={}
#For each time step we create a new dictionary containing all body parts and their variances
for time in positions_Kinect.keys():
    Variances[str(float(time))] = {}
    #Filling for each body part
    for bPart in common_body_parts:
        if body_parts_Mobilenet[bPart] in positions_Mobilenet[str(float(time))].keys():
            var = np.var((positions_Kinect[time][bPart][1:], positions_Mobilenet[str(float(time))][body_parts_Mobilenet[bPart]]))
            Variances[str(float(time))][bPart] = var
        else:
            Variances[str(float(time))][bPart] = 0
#Plot of the evolution of the variance over time for a body part
body_part='rWrist'
bPart=body_parts_Mobilenet[body_part]
Times=list(Variances.keys())
Times_float=[]
for time in Times:
    Times_float.append(float(time))
Times_float=sorted(Times_float)
Var_rWrist=[]
Var_rElbow=[]
Var_lWrist=[]
Var_lElbow=[]
Var_rShoulder=[]
Var_lShoulder=[]
for time in Times_float:
    Var_rWrist.append(Variances[str(time)][body_parts_Mobilenet['rWrist']])
    Var_rElbow.append(Variances[str(time)][body_parts_Mobilenet['rElbow']])
    Var_lWrist.append(Variances[str(time)][body_parts_Mobilenet['lWrist']])
    Var_lElbow.append(Variances[str(time)][body_parts_Mobilenet['lElbow']])
    Var_rShoulder.append(Variances[str(time)][body_parts_Mobilenet['rShoulder']])
    Var_lShoulder.append(Variances[str(time)][body_parts_Mobilenet['lShoulder']])
#plt.plot(Times_float,Var_rWrist,label='rWrist')
#plt.plot(Times_float,Var_rElbow,color='red',label='rElbow')
plt.plot(Times_float,Var_lWrist,color='green',label='lWrist')
#plt.plot(Times_float,Var_lElbow,label='lElbow')
#plt.plot(Times_float,Var_rShoulder,label='rShoulder')
#plt.plot(Times_float,Var_lShoulder,label='lShoulder',color='black')
plt.title('Variance of Mobilenet, Kinect and Xsens')
plt.legend()
fig = matplotlib.pyplot.gcf()
fig.savefig('./Données/Courbes/Variance pour %s.jpg'%body_part)
plt.show()
bPart=body_parts_Mobilenet[body_part]
x_bPart_valuesK=[]
y_bPart_valuesK=[]
x_bPart_valuesM=[]
y_bPart_valuesM=[]
Times_float=[]
Times=list(Variances.keys())
Times_float=[]
for time in Times:
    Times_float.append(float(time))
Times_float=sorted(Times_float)
MbPart=body_parts_Mobilenet[bPart]
for time in Times_float:
    xK = positions_Kinect[str(int(time))][bPart][1]
    yK = positions_Kinect[str(int(time))][bPart][2]
    x_bPart_valuesK.append(xK)
    y_bPart_valuesK.append(yK)
    if body_parts_Mobilenet[bPart] in positions_Mobilenet[str(float(time))].keys():
        xM = positions_Mobilenet[str(time)][body_parts_Mobilenet[bPart]][0]
        yM = positions_Mobilenet[str(time)][body_parts_Mobilenet[bPart]][1]
    else:
        xM = -1
        yM = -1
    x_bPart_valuesM.append(xM)
    y_bPart_valuesM.append(yM)
#plt.plot(Times_float,y_bPart_valuesX,'green',label='Xsens')
#plt.plot(Times_float,y_bPart_valuesM,'blue',label='Mobilenet')
#plt.plot(Times_float,y_bPart_valuesK,'red',label='Kinect')
#plt.legend()
#plt.title("y values after interpolation %s"%body_part)
#plt.show()
plt.plot(Times_float,y_bPart_valuesM,'blue',label='Mobilenet')
plt.plot(Times_float,y_bPart_valuesK,'red',label='Kinect')
plt.legend()
plt.title("y values %s"%body_part)
plt.show()
plt.plot(Times_float,x_bPart_valuesM,'blue',label='Mobilenet')
plt.plot(Times_float,x_bPart_valuesK,'red',label='Kinect')
plt.legend()
plt.title("x values %s"%body_part)
plt.show()
```
# Road Following - Live demo
In this notebook, we will use the model we trained to move the JetBot smoothly along the track.
### Load Trained Model
We will assume that you have already downloaded ``best_steering_model_xy.pth`` to your workstation as instructed in the "train_model.ipynb" notebook. Now, you should upload the model file to the JetBot into this notebook's directory. Once that's finished, there should be a file named ``best_steering_model_xy.pth`` in this notebook's directory.
> Please make sure the file has uploaded fully before calling the next cell
Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
```
import torchvision
import torch
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
```
Next, load the trained weights from the ``best_steering_model_xy.pth`` file that you uploaded.
```
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
```
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
```
device = torch.device('cuda')
model = model.to(device)
model = model.eval().half()
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera. To fix that, we need to do some pre-processing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
    image = PIL.Image.fromarray(image)
    image = transforms.functional.to_tensor(image).to(device).half()
    image.sub_(mean[:, None, None]).div_(std[:, None, None])
    return image[None, ...]
```
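To make the layout and normalization steps explicit, here is the same pipeline sketched in plain NumPy (illustrative only; the real function above stays on the GPU in half precision):

```python
#NumPy sketch of the camera-frame pre-processing pipeline
import numpy as np

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess_np(image):
    x = image.astype(np.float32) / 255.0       # scale [0, 255] -> [0, 1]
    x = np.transpose(x, (2, 0, 1))             # HWC -> CHW
    x = (x - mean[:, None, None]) / std[:, None, None]  # per-channel normalization
    return x[None, ...]                        # add batch dimension

frame = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # fake camera frame
batch = preprocess_np(frame)
print(batch.shape)  # (1, 3, 224, 224)
```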
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import ipywidgets.widgets as widgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
image_widget = widgets.Image(format='jpeg', width=224, height=224)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot(driver_board = "dfrobot")
```
Now, we will define sliders to control JetBot
> Note: We have initialized the slider values to the best known configuration; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment
1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider``
2. Steering Gain Control (steering_gain_slider): If you see the JetBot wobbling, reduce ``steering_gain_slider`` until it is smooth
3. Steering Bias Control (steering_bias_slider): If you see the JetBot biased toward the extreme right or extreme left side of the track, adjust this slider until the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should play with the above-mentioned sliders at low speed to get smooth JetBot road-following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
slider_box = ipywidgets.VBox([speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider])
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
y_box = ipywidgets.HBox([y_slider, speed_slider])
xy_box = ipywidgets.VBox([y_box,x_slider, steering_slider])
final_box = ipywidgets.HBox([xy_box,slider_box])
display(final_box)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH,640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT,480)
try:
    while True:
        ret, frame = cap.read()
        new_color_image = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_CUBIC)
        xy = model(preprocess(new_color_image)).detach().float().cpu().numpy().flatten()
        x = xy[0]
        y = (0.5 - xy[1]) / 2.0
        x_slider.value = x
        y_slider.value = y
        speed_slider.value = speed_gain_slider.value
        angle = np.arctan2(x, y)
        pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
        angle_last = angle
        steering_slider.value = pid + steering_bias_slider.value
        robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
        robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
        #image_widget.value = cv2.imencode('.jpg', new_color_image)[1].tobytes()
finally:
    cap.release()
    robot.stop()
```
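For reference, the proportional-derivative steering step inside the loop can be isolated as a small helper. This is a sketch only; the default gains mirror the slider defaults assumed above, and the steering bias is left out:

```python
import numpy as np

def pd_steering(x, y, angle_last, kp=0.2, kd=0.0):
    """One PD step: heading angle toward the predicted (x, y) target,
    plus a derivative term on the change in angle."""
    angle = np.arctan2(x, y)
    steering = angle * kp + (angle - angle_last) * kd
    return steering, angle  # return the angle so the caller can keep it as state

# A target straight ahead (x = 0) gives zero steering
steering, angle = pd_steering(0.0, 0.25, angle_last=0.0)
print(steering)  # 0.0
```

Keeping `angle_last` outside the helper is what lets the derivative term react to how fast the heading error changes between frames.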
Cool! We've created our neural network execution loop. In the original JetBot notebook, this logic is wrapped in a function and attached to the camera with the observe function; here, the OpenCV capture loop above drives it directly.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and that it is on the Lego track (or whichever track) you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly along the track, following the road!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios, and the JetBot should get even better :)
# Language modeling approaches
Language Models (LMs) estimate the probability of different linguistic units: symbols, tokens, token sequences.
We see language models in action every day - look at some examples. Usually models in large commercial services are a bit more complicated than the ones we will discuss today, but the idea is the same: if we can estimate probabilities of words/sentences/etc, we can use them in various, sometimes even unexpected, ways.
We, humans, already have some feeling of "probability" when it comes to natural language. For example, when we talk, usually we understand each other quite well (at least, what's being said). We disambiguate between different options which sound similar without even realizing it!
But how is a machine supposed to understand this? A machine needs a language model that estimates the probabilities of sentences. If a language model is good, it will assign a larger probability to a correct option.
Read [this](https://lena-voita.github.io/nlp_course/language_modeling.html) article to understand the concept of `language models` in depth.
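As a toy illustration of what "estimating probabilities of words" means, a minimal unigram model simply counts relative word frequencies in a corpus. This is a sketch with a made-up corpus, not a model you would deploy:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat slept".split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_prob(word):
    """P(word) estimated as count(word) / corpus size."""
    return counts[word] / total

print(unigram_prob("the"))  # 3/9, since "the" occurs 3 times in 9 tokens
```

Real language models condition on context rather than counting isolated words, but the estimation principle (relative frequency of events) is the same starting point.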
## **Masked language modeling**
**Masked language modeling** is the task of training a model on an input (a sentence with some masked tokens) and obtaining as output the whole sentence with the masked tokens filled in. But how and why does it help a model obtain better results on downstream tasks such as classification? The answer is simple: if the model can do a cloze test (a linguistic test for evaluating language understanding by filling in blanks), then it has a general understanding of the language itself. Having been pretrained in this way (by language modeling), it will perform better on other tasks.
<p><center><img src='_images/US780867_5.png'></center></p>
Here's an example of a cloze test:
*George Washington was the first President of the ___ States.*
It is expected that *United* should fill in the blank. For a masked language model, the same task is applied, and it is required to fill in the masked tokens. However, masked tokens are selected randomly from a sentence.
In BERT4Rec, the authors used the Cloze task technique (also known as "Masked Language Model") to train the bi-directional model. Here, we randomly mask some items (i.e., replace them with a special token [mask]) in the input sequences, and then predict the ids of those masked items based on their surrounding context.
$$
\begin{aligned}
\text{Input: } &[v_1, v_2, v_3, v_4, v_5] \xrightarrow{\text{randomly mask}} [v_1, [mask]_1, v_3, [mask]_2, v_5]\\
\text{Labels: } &[mask]_1 = v_2,\ [mask]_2 = v_4
\end{aligned}
$$
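The random-masking step described above can be sketched in a few lines of Python. This is an illustration only; the mask rate, mask token name, and seed handling are assumptions, not the BERT4Rec defaults:

```python
import random

def mask_sequence(items, mask_rate=0.4, mask_token="[mask]", seed=0):
    """Randomly replace items with a mask token; return the masked
    sequence and a dict mapping masked positions to the original items."""
    rng = random.Random(seed)
    masked, labels = [], {}
    for i, item in enumerate(items):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            labels[i] = item  # remember the original item as the prediction target
        else:
            masked.append(item)
    return masked, labels

masked, labels = mask_sequence(["v1", "v2", "v3", "v4", "v5"])
print(masked, labels)
```

The `labels` dict is what plays the role of the training targets: the loss is computed only on the masked positions.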
Let's take another example:
*In Autumn the ______ fall from the trees.*
Do you know the answer? Most likely you do, and you do because you have considered the context of the sentence.
We see the words *fall* and *trees* — we know that the missing word is something that *falls from trees*.
A lot of things fall from trees: acorns, branches, leaves — but we have another condition, *in Autumn* — and that narrows down our search: the most probable thing to fall from a tree in Autumn is *leaves*.
As humans, we use a mix of general world knowledge, and linguistic understanding to come to that conclusion. For BERT, this guess will come from reading *a lot* — and learning linguistic patterns incredibly well.
BERT may not know what Autumn, trees, and leaves are — but it does know that given linguistic patterns, and the context of these words, the answer is most likely to be *leaves*.
The outcome of this process — for BERT — is an improved comprehension of the style of language being used.
## Causal language modeling
Causal language modeling is the task of predicting the token that follows a sequence of tokens. In this setting, the model only attends to the left context (tokens to the left of the mask). This kind of training is particularly interesting for generation tasks.
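As a toy example of a model that only uses left context, a bigram counter estimates the probability of the next token given just the previous one. This is a sketch of the idea, not of a transformer:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies conditioned on the previous token."""
    following = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        following[prev][nxt] += 1
    return following

def next_token_prob(model, prev, nxt):
    """P(nxt | prev) as a relative frequency; 0 for unseen contexts."""
    total = sum(model[prev].values())
    return model[prev][nxt] / total if total else 0.0

model = train_bigram("the cat sat on the mat".split())
print(next_token_prob(model, "the", "cat"))  # 0.5: "the" is followed by "cat" and "mat"
```

A causal transformer generalizes this by conditioning on the entire left context rather than one previous token, but the left-to-right factorization is the same.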
:::{admonition} Note
:class: tip
I think it's because pre-BERT, causal language modeling was actually just called language modeling. When the BERT paper arrived they coined the task of predicting random masked tokens as masked language modeling, which led to subsequent papers presenting transformer-like models for translation or generation to use the term causal language modeling for clarity. ~ [https://www.reddit.com/user/keramitas/](https://www.reddit.com/user/keramitas/).
:::
## Permutation language modeling
PLM is the idea of capturing bidirectional context by training an autoregressive model on all possible permutations of the words in a sentence. Instead of a fixed left-to-right or right-to-left order, XLNet maximizes the expected log likelihood over all possible permutations of the sequence. In expectation, each position learns to utilize contextual information from all positions, thereby capturing bidirectional context. No [MASK] token is needed, and the input data need not be corrupted.
<p><center><img src='_images/US780867_6.png'></center></p>
The above diagram illustrates PLM. Suppose we are learning x3 (the token at the 3rd position in the sentence). PLM trains an autoregressive model on various permutations of the tokens in the sentence, so that, across all such permutations, x3 is learned given all the other words in the sentence. In the illustration, we can see that the next layer takes as inputs only the tokens preceding x3 in the permutation sequence. This way, autoregression is preserved as well.
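To make the permutation idea concrete, the following sketch enumerates factorization orders of a 4-token sequence and collects which positions end up in x3's visible context. Positions are 1-based to match the figure; this is an illustration, not XLNet code:

```python
from itertools import permutations

def visible_context(order, target):
    """Positions that precede `target` in a given factorization order."""
    idx = order.index(target)
    return set(order[:idx])

orders = list(permutations([1, 2, 3, 4]))

# Across all 24 orders, every other position appears in x3's context
# in some order, so in expectation position 3 sees bidirectional context.
seen = set()
for order in orders:
    seen |= visible_context(order, 3)
print(seen)  # {1, 2, 4}
```

In an order where 3 comes first, its context is empty; in an order where 3 comes last, it sees every other position. Averaging the loss over orders is what gives each position bidirectional context in expectation.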
# CRUD Operations with Cassandra
Cassandra can be installed from the [Datastax](https://downloads.datastax.com/#ddac) distribution, which provides an archive you can place anywhere in your folder hierarchy. Alternatively, Cassandra can be downloaded directly from [Apache.org](https://cassandra.apache.org/download/). On MacOS we will simply use `brew`. Once the ```/path/to/cassandra/bin``` path has been added to your ```PATH``` environment variable, you can start the server instance by simply typing ```cassandra```. After the server has started, you can open the interactive shell by typing ```cqlsh```.
## Shell Commands
Once connected to the server through ```cqlsh```, several shell commands are available:
```
DESCRIBE -- describes the overall state of the system as well as individual keyspaces, tables, or columns
SHOW -- shows information about Cassandra and the host you are connected to
CLEAR -- clears the shell
CONSISTENCY -- reports or sets the consistency level
SOURCE 'file' -- executes a file of CQL commands
COPY table TO 'file' -- copies a table to a csv file
COPY table FROM 'file' -- imports an existing table from a csv file
EXIT -- exit
LOGIN username [pass] -- log in as a registered user
```
## CQL Commands
The equivalent of a database is a keyspace, which must be created and selected before creating the column families.
```
CREATE KEYSPACE [IF NOT EXISTS] keyspace_name
WITH REPLICATION = {
'class' : 'SimpleStrategy', 'replication_factor' : N
| 'class' : 'NetworkTopologyStrategy',
'dc1_name' : N [, ...]
}
[AND DURABLE_WRITES = true|false] ;
USE keyspace_name;
```
Creating a column family requires specifying the [CQL type](https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html) of each column.
```
CREATE TABLE [IF NOT EXISTS] [keyspace_name.]table_name (
column_definition [, ...]
PRIMARY KEY (column_name [, column_name ...])
[WITH table_options
| CLUSTERING ORDER BY (clustering_column_name order])
| ID = 'table_hash_tag'
| COMPACT STORAGE]
```
Let's create the Titanic keyspace with the passengers table.
```sql
CREATE KEYSPACE titanic WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
USE titanic;
CREATE TABLE passengers (PassengerId int PRIMARY KEY, Survived int, Pclass int ,
Name text , Sex text , Age float , SibSp int, Parch int, Ticket text,
Fare float, Cabin text, Embarked text);
COPY passengers (passengerid , survived , pclass , name , sex , age , sibsp , parch , ticket , fare , cabin , embarked )
FROM './Data/titanic.csv'
WITH SKIPROWS = 1;
```
Queries on tables are performed with the ```SELECT``` construct, and indexes can be created on columns for use in ```WHERE``` clauses.
```
CREATE INDEX IF NOT EXISTS index_name
ON keyspace_name.table_name ( KEYS ( column_name ) )
SELECT * | select_expression | DISTINCT partition
FROM [keyspace_name.] table_name
[WHERE partition_value
[AND clustering_filters
[AND static_filters]]]
[ORDER BY PK_column_name ASC|DESC]
[LIMIT N]
[ALLOW FILTERING]
```
Let's select all the rows.
Let's count the third-class passengers.
Let's select the name and cabin of the first- and second-class passengers.
```sql
CREATE INDEX IF NOT EXISTS class ON passengers (pclass);
SELECT COUNT(*) FROM passengers WHERE pclass = 3;
SELECT name, cabin FROM passengers WHERE pclass < 3 ALLOW FILTERING ;
```
Alterations are performed with ```ALTER```.
```
ALTER TABLE [keyspace_name.] table_name
[ALTER column_name TYPE cql_type]
[ADD (column_definition_list)]
[DROP column_list | COMPACT STORAGE ]
[RENAME column_name TO column_name]
[WITH table_properties];
ALTER KEYSPACE keyspace_name
WITH REPLICATION = {
'class' : 'SimpleStrategy', 'replication_factor' : N
| 'class' : 'NetworkTopologyStrategy', 'dc1_name' : N [, ...]
}
[AND DURABLE_WRITES = true|false] ;
```
Let's drop a column:
```sql
ALTER TABLE passengers DROP sibsp;
```
Data insertion is performed with ```INSERT```.
```
INSERT INTO [keyspace_name.] table_name (column_list)
VALUES (column_values)
[IF NOT EXISTS]
[USING TTL seconds | TIMESTAMP epoch_in_microseconds]
```
Deletion is performed with ```DELETE```.
```
DELETE [column_name (term)][, ...]
FROM [keyspace_name.] table_name
[USING TIMESTAMP timestamp_value]
WHERE PK_column_conditions
[IF EXISTS | IF static_column_conditions]
```
Let's insert a new passenger and then remove them.
```sql
INSERT INTO passengers (passengerid , age , cabin , embarked , fare , name , parch , pclass , sex , survived , ticket )
VALUES ( 892, 45.0, 'Z76', 'S', 456.789, 'John Smith', 2, 1, 'male', 1, 'TicketOne');
DELETE name, sex FROM passengers WHERE passengerid = 891;
SELECT * FROM passengers WHERE passengerid = 891; -- the deleted values are null
DELETE FROM passengers WHERE passengerid = 891;
```
Updates are performed with ```UPDATE```.
```
UPDATE [keyspace_name.] table_name
[USING TTL time_value | USING TIMESTAMP timestamp_value]
SET assignment [, assignment] . . .
WHERE row_specification
[IF EXISTS | IF condition [AND condition] . . .] ;
```
Let's change the ticket type of passenger no. 892.
```sql
UPDATE passengers SET ticket = 'VivaTicket' WHERE passengerid = 892;
```
Removal is performed with the ```DROP``` command.
```
DROP TABLE [IF EXISTS] keyspace_name.table_name
DROP KEYSPACE [IF EXISTS] keyspace_name
DROP INDEX [IF EXISTS] [keyspace.]index_name
```
Let's remove the passengers table.
```sql
DROP TABLE passengers;
```
## Aggregation
User-defined functions and aggregates can be created using Java or Javascript code, but ```MAX, MIN, COUNT, AVG``` are already available. To handle complex queries with aggregations and groupings properly, you need to create the right primary keys, which define the so-called _partition key_.
To make search and aggregation queries efficient, it is best to create a table with a composite primary key structured along the lines of the following example.
```sql
CREATE TABLE example (
partitionKey1 text,
partitionKey2 text,
clusterKey1 text,
clusterKey2 text,
normalField1 text,
normalField2 text,
PRIMARY KEY (
(partitionKey1, partitionKey2),
clusterKey1, clusterKey2
)
);
```
This key is composite and contains, in its first part, exactly the fields that will be needed in the ```WHERE``` and ```GROUP BY``` clauses of the query (the only ones allowed there, for that matter).
The fields belonging to the portion of the primary key used for row clustering are the ones that actually distinguish individual rows, and therefore make it possible to group multiple distinct rows under the same partition-key values. These rows form the basis for performing aggregations efficiently.
Let's recreate the structure of the passengers table and compute the number and average age of the surviving passengers by class and by gender.
```sql
CREATE TABLE passengers (
survived int,
class int,
sex text,
id int,
age float,
cabin text,
embarked text,
fare float,
name text,
parch int,
sibsp int,
ticket text,
PRIMARY KEY ((survived, class, sex), id)
) WITH CLUSTERING ORDER BY (id ASC);
COPY passengers (id , survived , class , name , sex , age , sibsp , parch , ticket , fare , cabin , embarked ) FROM './src/github repositories/Big-Data/Data/titanic.csv' WITH SKIPROWS = 1;
SELECT DISTINCT survived, class, sex FROM passengers ;
SELECT COUNT(*) AS num_of_pass, AVG(age) as average_age, sex, class FROM passengers WHERE survived=1 AND class < 4 GROUP BY class, sex ALLOW FILTERING;
```
## Using the Python Driver for Cassandra
The driver can be installed using ```conda```:
```sh
$ conda install -c conda-forge cassandra-driver
```
or with `pip`:
```sh
$ pip install cassandra-driver
```
```
"""
import cassandra.cluster as ccl
import cassandra.policies as pol
# connecting to a cluster of three nodes,
# listening on port 9042, with a specific
# round-robin load-balancing policy
cluster = ccl.Cluster(
['10.1.1.3', '10.1.1.4', '10.1.1.5'],
load_balancing_policy=pol.DCAwareRoundRobinPolicy(local_dc='US_EAST'),
port=9042)
"""
import cassandra.cluster as ccl
# connecting to the default cluster
cluster = ccl.Cluster()
session = cluster.connect()
# create and use the keyspace
session.execute("CREATE KEYSPACE titanic WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}")
session.set_keyspace('titanic')
# session.execute('USE titanic')
# Create the table
query = "CREATE TABLE titanic.passengers (\
survived int,\
class int,\
sex text,\
id int,\
age float,\
cabin text,\
embarked text,\
fare float,\
name text,\
parch int,\
sibsp int,\
ticket text,\
PRIMARY KEY ((survived, class, sex), id)\
) WITH CLUSTERING ORDER BY (id ASC)"
session.execute(query)
# import the data from the csv dataset
# COPY is a cqlsh command, not a CQL command,
# so we execute it by invoking cqlsh and passing
# the command to run
import os
os.system("cqlsh -k titanic -e \"COPY passengers (id , survived , class , name , sex , age , sibsp , parch , ticket , fare , cabin , embarked ) FROM 'Data/titanic.csv' WITH SKIPROWS = 1\"")
# Run an insert query
session.execute("""
INSERT INTO titanic.passengers (
survived,
class,
sex,
id,
age,
cabin,
embarked,
fare,
name,
parch,
sibsp,
ticket)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
""",
( 1, 1, 'male', 892, 45.0, 'Z76', 'S', 456.789, 'John Smith', 2, 1, 'TicketOne'))
# Run a select query
rows = session.execute('SELECT name, age, cabin, fare FROM titanic.passengers WHERE id > 880 ALLOW FILTERING')
for row in rows:
    print(row.name, row.age, row.cabin, row.fare, sep='\n', end='\n-------------\n')
# Run a query by explicitly invoking the SimpleStatement class, which
# defines CQL commands in the driver API
import cassandra
query = cassandra.query.SimpleStatement(
"INSERT INTO titanic.passengers (survived, class, sex, id, name, age) VALUES (%s, %s, %s, %s, %s, %s)",
consistency_level=cassandra.ConsistencyLevel.ONE) # redundant, since there is only one node in our cluster
session.execute(query, (0, 3, 'female', 893, 'Mrs. Melania De Avilland', 42))
# Queries can also be prepared and executed parametrically
# with the PreparedStatement class, whose consistency level can be set
survived_per_class_stmt = session.prepare("SELECT * FROM titanic.passengers WHERE survived = 1 AND class = ? ALLOW FILTERING")
survived_per_class_stmt.consistency_level = cassandra.ConsistencyLevel.ONE
survived = []
for pclass in [1, 2, 3]:
    passengers = session.execute(survived_per_class_stmt, [pclass])
    survived.append(passengers)

for passenger in survived[0]:
    print(passenger.name, passenger.sex, sep='\n', end='\n---------------\n')
```
### Asynchronous Execution
Asynchronous execution lets you submit a query, continue execution, and then collect the results later. You can also define callbacks to handle the success or failure of the query.
The following example handles concurrent queries.
```python
# This is the query timeout exception
from cassandra import ReadTimeout
# list of future results
futures = []
query = "SELECT * FROM users WHERE user_id=%s"
for user_id in ids_to_fetch:
    futures.append(session.execute_async(query, [user_id]))
# the queries are executed asynchronously and concurrently,
# and this code waits for them to finish
for future in futures:
    rows = future.result()
    print(rows[0].name)
```
An example of defining callback functions.
```python
# define the two callbacks
def handle_success(rows):
    user = rows[0]
    try:
        process_user(user.name, user.age, user.id)
    except Exception:
        log.error("Failed to process user %s", user.id)

def handle_error(exception):
    log.error("Failed to fetch user info: %s", exception)
# launch the asynchronous query and attach the two callback functions
future = session.execute_async(query)
future.add_callbacks(handle_success, handle_error)
```
### Using Object Mappers
```
# Drop the table
session.execute('DROP TABLE titanic.passengers')
# The API for generating Object Mappers
# is in the cassandra.cqlengine module
import cassandra
from cassandra.cqlengine import columns
from cassandra.cqlengine import connection
from datetime import datetime
from cassandra.cqlengine.management import sync_table
from cassandra.cqlengine.models import Model
# connection setup
connection.setup(['127.0.0.1'],'titanic',consistency=cassandra.ConsistencyLevel.ONE)
# Create the data model,
# i.e. the table, as a class derived from Model
class Passengers(Model):
    survived = columns.Integer(primary_key=True, partition_key=True)
    pclass = columns.Integer(primary_key=True, partition_key=True)
    sex = columns.Text(primary_key=True, partition_key=True)
    pid = columns.Integer(primary_key=True, clustering_order='asc')
    age = columns.Float()
    cabin = columns.Text()
    embarked = columns.Text()
    fare = columns.Float()
    name = columns.Text()
    parch = columns.Integer()
    sibsp = columns.Integer()
    ticket = columns.Text()
# A table with the same name as the model is created
# inside the connection's default keyspace
sync_table(Passengers)
# import the data by invoking cqlsh from the system
import os
os.system("cqlsh -k titanic -e \"COPY passengers (pid , survived , pclass , name , sex , age , sibsp , parch , ticket , fare , cabin , embarked ) FROM 'Data/titanic.csv' WITH SKIPROWS = 1\"")
# perform insert operations
Passengers.create(survived=1,pclass=1,sex='male',pid=892,age=45.0,cabin='Z76',embarked='S',fare=123.456,name='John Smith',parch=2,sibsp=1,ticket='TicketOne')
# manipulate the table's objects through the model class,
# which allows queries and updates
Passengers.objects.count()
# update operations must select the entire primary key
Passengers.objects(survived=1,pclass=1,sex='male',pid=892).update(ticket='VivaTicket')
q = Passengers.objects.filter(survived=1,pclass__in=[1,2],sex='female')
for passenger in q:
    print(passenger.name, passenger.age, sep='\n', end='\n---------------\n')
# this does not work because cabin needs to be flagged as an index,
# so we create the index on the table first
os.system("cqlsh -k titanic -e \"CREATE INDEX cabin_idx ON passengers(cabin)\"")
q = Passengers.objects.filter(survived=1,pclass__in=[1,2],sex='female',cabin__like='C%')
for passenger in q:
    print(passenger.name, passenger.age, sep='\n', end='\n---------------\n')
# aggregation with average values -- outside the driver
# survival rate
import numpy as np
import pandas as pd
data_set=[]
females_1_class = Passengers.objects(survived=1,pclass=1,sex='female')
females_embarked_1_class = Passengers.objects.filter(pclass=1,sex='female')
males_1_class = Passengers.objects(survived=1,pclass=1,sex='male')
males_embarked_1_class = Passengers.objects.filter(pclass=1,sex='male')
females_2_class = Passengers.objects(survived=1,pclass=2,sex='female')
females_embarked_2_class = Passengers.objects.filter(pclass=2,sex='female')
males_2_class = Passengers.objects(survived=1,pclass=2,sex='male')
males_embarked_2_class = Passengers.objects.filter(pclass=2,sex='male')
females_3_class = Passengers.objects(survived=1,pclass=3,sex='female')
females_embarked_3_class = Passengers.objects.filter(pclass=3,sex='female')
males_3_class = Passengers.objects(survived=1,pclass=3,sex='male')
males_embarked_3_class = Passengers.objects.filter(pclass=3,sex='male')
def avg_age(query):
    age_array = np.array([item.age for item in query])
    return age_array[age_array != None].mean()
data_set = [\
{'class': 1,\
'female_%': females_1_class.count()/females_embarked_1_class.allow_filtering().count(),\
'f_avg_age': avg_age(females_1_class),\
'male_%': males_1_class.count()/males_embarked_1_class.allow_filtering().count(),\
'm_avg_age': avg_age(males_1_class)},\
{'class': 2,\
'female_%': females_2_class.count()/females_embarked_2_class.allow_filtering().count(),\
'f_avg_age': avg_age(females_2_class),\
'male_%': males_2_class.count()/males_embarked_2_class.allow_filtering().count(),\
'm_avg_age': avg_age(males_2_class)},\
{'class': 3,\
'female_%': females_3_class.count()/females_embarked_3_class.allow_filtering().count(),\
'f_avg_age': avg_age(females_3_class),\
'male_%': males_3_class.count()/males_embarked_3_class.allow_filtering().count(),\
'm_avg_age': avg_age(males_3_class)}\
]
dataframe = pd.DataFrame(data_set)
dataframe.set_index('class')
```
# Dataset D1 - WGS E.coli
## Importing libraries
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as tkr
```
## Data reading and cleaning
```
data = pd.read_csv('../summary_data/D1_WGS_E.coli_summary.csv')
data['total_corrections'] = data['Base - TP']+ data['Base - FP']
```
## Defining color dictionary
```
color_dict=dict({'Bfc':'purple','Bless':'orange','Coral':'brown','Fiona':'gray','Lighter':'pink','Musket':'blue','Pollux':'yellow','Racer':'green','Reckoner':'red','Sga':'black'})
```
## Selecting best kmer for each tool
```
data_best = data.loc[data.groupby(["Tool","Coverage"])["Base Gain"].idxmax()]
```
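The `groupby(...).idxmax()` pattern above picks, for each (Tool, Coverage) pair, the row whose "Base Gain" is highest, which is how one k-mer size per tool is kept. A minimal sketch with made-up numbers (the tool names and values here are illustrative, not from the summary csv):

```python
import pandas as pd

df = pd.DataFrame({
    "Tool":      ["Bfc", "Bfc", "Bfc", "Musket", "Musket"],
    "Coverage":  [32,    32,    64,    32,       32],
    "Kmer":      [15,    21,    15,    15,       21],
    "Base Gain": [0.80,  0.91,  0.75,  0.60,     0.55],
})

# idxmax returns, per group, the index label of the max row;
# .loc then pulls those full rows back out of the frame
best = df.loc[df.groupby(["Tool", "Coverage"])["Base Gain"].idxmax()]
print(best[["Tool", "Coverage", "Kmer"]])
# keeps k=21 for Bfc@32x, k=15 for Bfc@64x and Musket@32x
```

This keeps every column of the winning row, which is why `data_best` can later be pivoted on any metric, not just the one used for selection.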
<br>
<br>
# Figure 2g
Heatmap depicting the gain across various coverage settings.<br>
Each row corresponds to an error correction tool, and each column corresponds to a dataset with a given coverage.
<br>For each tool, the best k-mer size was selected.
```
result = data_best.pivot(index='Tool', columns='Coverage', values='Base Gain') \
.sort_values(by="Tool", ascending=False)
sns.set_style("white")
sns.set_context("talk")
g=sns.heatmap(result,
annot=True,
cmap='coolwarm',
center=0,
linewidths=.5,
annot_kws={'size':17},
fmt=".2f",
vmin=-1,
vmax=1)
colorbar = g.collections[0].colorbar
colorbar.set_ticks([-1.0, -0.5, 0, 0.5, 1.0])
g.set(xlabel='Coverage', ylabel='Gain')
g.set_ylim(0, 10)
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2g_heatmap_ecoli_gain.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2g_heatmap_ecoli_gain.pdf", bbox_inches="tight")
```
<br>
<br>
# Figure 2h
Heatmap depicting the precision across various coverage settings. <br>
Each row corresponds to an error correction tool, and each column corresponds to a dataset with a given coverage.
<br>For each tool, the best k-mer size was selected.
```
result = data_best.pivot(index='Tool', columns='Coverage', values='Base Precision') \
.sort_values(by="Tool", ascending=False)
sns.set_style("white")
sns.set_context("talk")
g=sns.heatmap(result,
annot=True,
cmap='coolwarm',
center=0,
linewidths=.5,
annot_kws={'size':17},
fmt=".2f",
vmax=1.0)
colorbar = g.collections[0].colorbar
colorbar.set_ticks([0, 0.2, 0.4, 0.6, 0.8, 1.0])
g.set(xlabel='Coverage', ylabel='Precision')
g.set_ylim(0, 10)
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2h_heatmap_ecoli_precision.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2h_heatmap_ecoli_precision.pdf", bbox_inches="tight")
```
<br>
<br>
# Figure 2i
Heatmap depicting the sensitivity across various coverage settings.<br>
Each row corresponds to an error correction tool, and each column corresponds to a dataset with a given coverage.
<br>For each tool, the best k-mer size was selected.
```
result = data_best.pivot(index='Tool', columns='Coverage', values='Base Sensitivity')\
.sort_values(by="Tool", ascending=False)
sns.set_style("white")
sns.set_context("talk")
g=sns.heatmap(result,
annot=True,
cmap='coolwarm',
center=0,
linewidths=.5,
annot_kws={'size':17},
fmt=".2f",
vmax=1.0)
g.set(xlabel='Coverage', ylabel='Sensitivity')
g.set_ylim(0, 10)
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2i_heatmap_ecoli_sensitivity.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2i_heatmap_ecoli_sensitivity.pdf", bbox_inches="tight")
```
<br>
<br>
# Figure 2j
Scatter plot depicting the number of TP corrections (x-axis) and FP corrections (y-axis) for datasets with 32x coverage.
<br>For each tool, the best k-mer size was selected.
```
sns.set_style("white")
sns.set_context("talk")
g=sns.lmplot(data=data_best[(data_best['Coverage'] == 32)],
x='Base - TP',
y='Base - FP',
hue='Tool',
fit_reg=False,
aspect=1.3,
scatter_kws={"s": 200},
palette=color_dict)
for ax in g.axes.flatten():
    ax.set_xticklabels(ax.get_xticklabels(), fontsize=17)
    ax.xaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: "{:.0f}k".format(x/1000)))
    ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: "{:.0f}k".format(x/1000)))
    ax.yaxis.set_major_locator(tkr.MultipleLocator(10000))
    ax.xaxis.set_major_locator(tkr.MultipleLocator(8000))
g.set(xlabel='TP', ylabel='FP')
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2j_TP_vs_FP.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2j_TP_vs_FP.pdf", bbox_inches="tight")
```
<br>
<br>
# Figure 2k
Scatter plot depicting the number of FP corrections (x-axis) and FN corrections (y-axis) for datasets with 32x
coverage.
<br>For each tool, the best k-mer size was selected.
```
sns.set_style("white")
sns.set_context("talk")
g=sns.lmplot(data=data_best[(data_best['Coverage'] == 32)],
x='Base - FP',
y='Base - FN',
hue='Tool',
fit_reg=False,
aspect=1.3,
scatter_kws={"s": 200},
palette=color_dict)
for ax in g.axes.flatten():
    ax.set_xticklabels(ax.get_xticklabels(), fontsize=17)
    ax.xaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: "{:.0f}k".format(x/1000)))
    ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: "{:.0f}k".format(x/1000)))
    ax.yaxis.set_major_locator(tkr.MultipleLocator(7500))
    ax.xaxis.set_major_locator(tkr.MultipleLocator(10000))
g.set(xlabel='FP', ylabel='FN')
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2k_FP_vs_FN.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2k_FP_vs_FN.pdf", bbox_inches="tight")
```
<br>
<br>
# Figure 2l
Scatter plot depicting the sensitivity (x-axis) and precision (y-axis) for datasets with 32x coverage.
<br>For each tool, the best k-mer size was selected.
```
sns.set_style("white")
sns.set_context("talk")
g=sns.lmplot(data=data_best[(data_best['Coverage'] == 32)],
x='Base Precision',
y='Base Sensitivity',
hue='Tool',
fit_reg=False,
aspect=1.3,
scatter_kws={"s": 200},
palette=color_dict)
g.set(xlabel='Precision', ylabel='Sensitivity')
plt.ylim(-0.05, 1.05)
plt.xlim(-0.05, 1.05)
sns.despine()
plt.savefig("../figures/D1_WGS_E.coli/Fig2l_Precision_vs_Sensitivity.png", bbox_inches="tight")
plt.savefig("../figures/D1_WGS_E.coli/Fig2l_Precision_vs_Sensitivity.pdf", bbox_inches="tight")
```
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + math.exp(-x))
    ### END CODE HERE ###
    return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
In practice, we rarely use the "math" library in deep learning because its functions only accept real numbers (scalars) as inputs, whereas in deep learning we mostly work with matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
#basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
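The reciprocal case mentioned above works the same way; this quick check (illustrative, not part of the graded cells) shows the element-wise result:

```python
import numpy as np

# 1/x is applied element-wise, producing a vector of the same size as x
x = np.array([1., 2., 4.])
print(1 / x)  # [1.   0.5  0.25]
```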
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
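Outside the notebook's `?` syntax, the same docstrings are available programmatically; a small sketch:

```python
import numpy as np

# np.info prints the documentation of a numpy object to stdout,
# similar to typing `np.exp?` in a notebook cell
np.info(np.exp)
```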
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300px;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
(length, height, depth) = image.shape[0], image.shape[1], image.shape[2]
tot_size = length * height * depth
v = image.reshape((tot_size, 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
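To preview that broadcasting remark with the same matrix $x$ (a quick illustration, separate from the graded exercise):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])
# norms has shape (2, 1); dividing a (2, 3) matrix by it
# broadcasts each row's norm across that row's three columns
norms = np.linalg.norm(x, axis=1, keepdims=True)
print(x / norms)  # rows [0, 3/5, 4/5] and [2, 6, 4]/sqrt(56), as in equation (5)
```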
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
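A short sketch of why `keepdims=True` matters here: with it, x_norm keeps a trailing axis of length 1 and broadcasts row by row; without it, numpy tries to align the norms with the columns and fails.

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [1., 6., 4.]])
with_keepdims = np.linalg.norm(x, axis=1, keepdims=True)  # shape (2, 1)
without = np.linalg.norm(x, axis=1)                       # shape (2,)
print(with_keepdims.shape, without.shape)
print((x / with_keepdims).shape)  # (2, 3): broadcasts as intended
try:
    x / without  # (2, 3) vs (2,): trailing axes are 3 vs 2, incompatible
except ValueError as e:
    print("broadcast error:", e)
```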
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (equivalent to `.*` in Matlab/Octave), which perform an element-wise multiplication.
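A minimal illustration of that distinction:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))       # 32: inner product, 1*4 + 2*5 + 3*6
print(np.multiply(a, b))  # [ 4 10 18]: element-wise product
print(a * b)              # [ 4 10 18]: `*` is the same as np.multiply
```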
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=1}^{m}|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=1}^{n} x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=1}^{m}(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
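As a quick sanity check of the `np.dot(x, x)` identity above (illustrative only):

```python
import numpy as np

x = np.array([1, 2, 3])
print(np.dot(x, x))          # 14 = 1 + 4 + 9
print(np.sum(np.square(x)))  # 14: the same sum of squares
```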
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.square(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
# Introduction to Brian part 3: Simulations
If you haven’t yet read parts 1 and 2 on Neurons and Synapses, go read them first.
This tutorial is about managing the slightly more complicated tasks that crop up in research problems, rather than the toy examples we've been looking at so far. So we cover things like inputting sensory data, modelling experimental conditions, etc.
As before we start by importing the Brian package and setting up matplotlib for IPython:
```
from brian2 import *
%matplotlib inline
```
## Multiple runs
Let's start by looking at a very common task: doing multiple runs of a simulation with some parameter that changes. Let's start off with something very simple, how does the firing rate of a leaky integrate-and-fire neuron driven by Poisson spiking neurons change depending on its membrane time constant? Let's set that up.
```
# remember, this is here for running separate simulations in the same notebook
start_scope()
# Parameters
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
# Range of time constants
tau_range = linspace(1, 10, 30)*ms
# Use this list to store output rates
output_rates = []
# Iterate over range of time constants
for tau in tau_range:
# Construct the network each time
P = PoissonGroup(num_inputs, rates=input_rate)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Run it and store the output firing rate in the list
run(1*second)
output_rates.append(M.num_spikes/second)
# And plot it
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
Now if you're running the notebook, you'll see that this was a little slow to run. The reason is that for each loop, you're recreating the objects from scratch. We can improve that by setting up the network just once. We store a copy of the state of the network before the loop, and restore it at the beginning of each iteration.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Construct the network just once
P = PoissonGroup(num_inputs, rates=input_rate)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Store the current state of the network
store()
for tau in tau_range:
# Restore the original state of the network
restore()
# Run it with the new value of tau
run(1*second)
output_rates.append(M.num_spikes/second)
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
That's a very simple example of using store and restore, but you can use it in much more complicated situations. For example, you might want to run a long training run, and then run multiple test runs afterwards. Simply put a store after the long training run, and a restore before each testing run.
You can also see that the output curve is very noisy and doesn't increase monotonically like we'd expect. The noise is coming from the fact that we run the Poisson group afresh each time. If we only wanted to see the effect of the time constant, we could make sure that the spikes were the same each time (although note that really, you ought to do multiple runs and take an average). We do this by running just the Poisson group once, recording its spikes, and then creating a new `SpikeGeneratorGroup` that will output those recorded spikes each time.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Construct the Poisson spikes just once
P = PoissonGroup(num_inputs, rates=input_rate)
MP = SpikeMonitor(P)
# We use a Network object because later on we don't
# want to include these objects
net = Network(P, MP)
net.run(1*second)
# And keep a copy of those spikes
spikes_i = MP.i
spikes_t = MP.t
# Now construct the network that we run each time
# SpikeGeneratorGroup gets the spikes that we created before
SGG = SpikeGeneratorGroup(num_inputs, spikes_i, spikes_t)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(SGG, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Store the current state of the network
net = Network(SGG, G, S, M)
net.store()
for tau in tau_range:
# Restore the original state of the network
net.restore()
# Run it with the new value of tau
net.run(1*second)
output_rates.append(M.num_spikes/second)
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
You can see that now there is much less noise and it increases monotonically because the input spikes are the same each time, meaning we're seeing the effect of the time constant, not the random spikes.
Note that in the code above, we created `Network` objects. The reason is that in the loop, if we just called `run` it would try to simulate all the objects, including the Poisson neurons ``P``, and we only want to run that once. We use `Network` to specify explicitly which objects we want to include.
The techniques we've looked at so far are the conceptually most simple way to do multiple runs, but not always the most efficient. Since there's only a single output neuron in the model above, we can simply duplicate that output neuron and make the time constant a parameter of the group.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
num_tau = len(tau_range)
P = PoissonGroup(num_inputs, rates=input_rate)
# We make tau a parameter of the group
eqs = '''
dv/dt = -v/tau : 1
tau : second
'''
# And we have num_tau output neurons, each with a different tau
G = NeuronGroup(num_tau, eqs, threshold='v>1', reset='v=0', method='exact')
G.tau = tau_range
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Now we can just run once with no loop
run(1*second)
output_rates = M.count/second # firing rate is count/duration
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
You can see that this is much faster again! It's a little bit more complicated conceptually, and it's not always possible to do this trick, but it can be much more efficient if it's possible.
Let's finish with this example by having a quick look at how the mean and standard deviation of the interspike intervals depends on the time constant.
```
trains = M.spike_trains()
isi_mu = full(num_tau, nan)*second
isi_std = full(num_tau, nan)*second
for idx in range(num_tau):
train = diff(trains[idx])
if len(train)>1:
isi_mu[idx] = mean(train)
isi_std[idx] = std(train)
errorbar(tau_range/ms, isi_mu/ms, yerr=isi_std/ms)
xlabel(r'$\tau$ (ms)')
ylabel('Interspike interval (ms)');
```
Notice that we used the ``spike_trains()`` method of `SpikeMonitor`. This is a dictionary with keys being the indices of the neurons and values being the array of spike times for that neuron.
## Changing things during a run
Imagine an experiment where you inject current into a neuron, and change the amplitude randomly every 10 ms. Let's see if we can model that using a Hodgkin-Huxley type neuron.
```
start_scope()
# Parameters
area = 20000*umetre**2
Cm = 1*ufarad*cm**-2 * area
gl = 5e-5*siemens*cm**-2 * area
El = -65*mV
EK = -90*mV
ENa = 50*mV
g_na = 100*msiemens*cm**-2 * area
g_kd = 30*msiemens*cm**-2 * area
VT = -63*mV
# The model
eqs_HH = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
'''
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
figure(figsize=(9, 4))
for l in range(5):
group.I = rand()*50*nA
run(10*ms)
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
In the code above, we used a loop over multiple runs to achieve this. That's fine, but it's not the most efficient way to do it because each time we call ``run`` we have to do a lot of initialisation work that slows everything down. It also won't work as well with the more efficient standalone mode of Brian. Here's another way.
```
start_scope()
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
# we replace the loop with a run_regularly
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
# we keep the loop just to draw the vertical lines
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
We've replaced the loop that had multiple ``run`` calls with a ``run_regularly``. This makes the specified block of code run every ``dt=10*ms``. The ``run_regularly`` lets you run code specific to a single `NeuronGroup`, but sometimes you might need more flexibility. For this, you can use `network_operation` which lets you run arbitrary Python code (but won't work with the standalone mode).
```
start_scope()
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
# we replace the loop with a network_operation
@network_operation(dt=10*ms)
def change_I():
group.I = rand()*50*nA
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
Now let's extend this example to run on multiple neurons, each with a different capacitance to see how that affects the behaviour of the cell.
```
start_scope()
N = 3
eqs_HH_2 = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/C : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
C : farad
'''
group = NeuronGroup(N, eqs_HH_2,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
# initialise with some different capacitances
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area
statemon = StateMonitor(group, variables=True, record=True)
# we go back to run_regularly
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v.T/mV, '-')
xlabel('Time (ms)')
ylabel('v (mV)');
```
So that runs, but something looks wrong! The injected currents look like they're different for all the different neurons! Let's check:
```
plot(statemon.t/ms, statemon.I.T/nA, '-')
xlabel('Time (ms)')
ylabel('I (nA)');
```
Sure enough, it's different each time. But why? We wrote ``group.run_regularly('I = rand()*50*nA', dt=10*ms)`` which seems like it should give the same value of I for each neuron. But, like threshold and reset statements, ``run_regularly`` code is interpreted as being run separately for each neuron, and because I is a parameter, it can be different for each neuron. We can fix this by making I into a *shared* variable, meaning it has the same value for each neuron.
```
start_scope()
N = 3
eqs_HH_3 = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/C : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp (shared) # everything is the same except we've added this shared
C : farad
'''
group = NeuronGroup(N, eqs_HH_3,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area
statemon = StateMonitor(group, 'v', record=True)
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v.T/mV, '-')
xlabel('Time (ms)')
ylabel('v (mV)');
```
Ahh, that's more like it!
## Adding input
Now let's think about a neuron being driven by a sinusoidal input. Let's go back to a leaky integrate-and-fire to simplify the equations a bit.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
eqs = '''
dv/dt = (I-v)/tau : 1
I = A*sin(2*pi*f*t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='euler')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
So far, so good and the sort of thing we saw in the first tutorial. Now, what if that input current were something we had recorded and saved in a file? In that case, we can use `TimedArray`. Let's start by reproducing the picture above but using `TimedArray`.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
# Create a TimedArray and set the equations to use it
t_recorded = arange(int(200*ms/defaultclock.dt))*defaultclock.dt
I_recorded = TimedArray(A*sin(2*pi*f*t_recorded), dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
Note that for the example where we put the ``sin`` function directly in the equations, we had to use the ``method='euler'`` argument because the exact integrator wouldn't work here (try it!). However, ``TimedArray`` is considered to be constant over its time step and so the linear integrator can be used. This means you won't get the same behaviour from these two methods for two reasons. Firstly, the numerical integration methods ``exact`` and ``euler`` give slightly different results. Secondly, ``sin`` is not constant over a timestep whereas ``TimedArray`` is.
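To see why a ``TimedArray`` is constant over a time step, here is a plain-NumPy sketch of the lookup (not Brian code; the step size and query times are illustrative):

```python
import numpy as np

dt = 0.1                                  # time step in ms (Brian's default is 0.1*ms)
t = np.arange(0, 20, dt)                  # 20 ms of simulated time
A, f = 2.5, 10.0                          # amplitude and frequency (Hz), as above
samples = A * np.sin(2 * np.pi * f * t / 1000.0)   # the "recorded" sinusoid

def timed_array_lookup(time_ms):
    """Piecewise-constant lookup: return the sample for the step containing time_ms."""
    return samples[int(time_ms // dt)]

# Within a single 0.1 ms step the lookup never changes, unlike sin(t) itself:
assert timed_array_lookup(0.52) == timed_array_lookup(0.58)
```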
Now just to show that ``TimedArray`` works for arbitrary currents, let's make a weird "recorded" current and run it on that.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
# Let's create an array that couldn't be
# reproduced with a formula
num_samples = int(200*ms/defaultclock.dt)
I_arr = zeros(num_samples)
for _ in range(100):
a = randint(num_samples)
I_arr[a:a+100] = rand()
I_recorded = TimedArray(A*I_arr, dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
Finally, let's finish on an example that actually reads in some data from a file. See if you can work out how this example works.
```
start_scope()
from matplotlib.image import imread
img = (1-imread('brian.png'))[::-1, :, 0].T
num_samples, N = img.shape
ta = TimedArray(img, dt=1*ms)
A = 1.5
tau = 2*ms
eqs = '''
dv/dt = (A*ta(t, i)-v)/tau+0.8*xi*tau**-0.5 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='euler')
M = SpikeMonitor(G)
run(num_samples*ms)
plot(M.t/ms, M.i, '.k', ms=3)
xlim(0, num_samples)
ylim(0, N)
xlabel('Time (ms)')
ylabel('Neuron index');
```
# Quantum states with high dimensional entanglement
This notebook visualizes the 20 circuits of the second pilot study, with their depth and gate distribution.
At the end, a toy protocol of ballot transmission is presented with experimental verification.
```
import numpy as np
import copy
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer, execute, transpile, assemble
from qiskit.tools.visualization import *
from qiskit.ignis.mitigation.measurement import (complete_meas_cal, tensored_meas_cal,
CompleteMeasFitter, TensoredMeasFitter)
import json
import time
from qiskit.tools.monitor import job_monitor
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
from c_utils import new_cut # circuit building utilities
from o_utils import ora
data_directory = "data2_files/" # this directory for 2d pilot project data
def json_dic_loader(dic_name):
with open(data_directory+dic_name+'.json') as f:
return json.load(f)
def json_dic_dumper(dic, dic_name):
with open(data_directory+dic_name+'.json', 'w') as f:
json.dump(dic,f)
```
## Set up the simulator and layout for 5 qubits
```
simulator = Aer.get_backend('qasm_simulator')
#specify the layout of the devices
used_qubits = 5
qubit_list = [0,1,2,3,4]
program_name="AL2" # This for a mix of W/Psi+ and W_bar/Phi+ separable states (2d pilot project)
Flag_char = "DS" # use the joint set
if len(Flag_char) >= 2:
unique_char = "M" # for "mixed"
else:
unique_char = Flag_char
```
```
# These dictionaries for the devices used in the study
if program_name == "QAD":
fidelity_dic = {'ibmq_athens': 0.925110, 'ibmq_valencia': 0.809101, 'ibmq_ourense': 0.802380,"ibmqx2": 0.627392,
'ibmq_santiago': 0.919399, 'ibmq_vigo': 0.908840, 'ibmq_lima':0.771835, 'ideal_device': 1.0}
data_directory = "data_files/"
elif program_name == "AL2":
fidelity_dic = {'ibmq_athens': 0.910145, 'ibmq_valencia': 0.794262, 'ibmq_ourense': 0.818974, "ibmqx2": 0.359528,
'ibmq_santiago': 0.900024, 'ibmq_vigo': 0.841831, 'ibmq_quito': 0.840260, 'ibmq_lima':0.771835,
'ibmq_belem':0.842281,'ideal_device': 1.0}
data_directory = "data2_files/"
QV_dic = {'ibmq_athens': 32.0, 'ibmq_valencia': 16.0, 'ibmq_ourense': 8.0,"ibmqx2": 8.0, 'ibmq_santiago': 32.0,
'ibmq_vigo': 16.0, 'ideal_device': np.inf, 'ibmq_quito': 16.0, 'ibmq_lima': 8.0,'ibmq_belem':16.0}
dev_dic = {'ibmq_santiago': "San",'ibmq_athens': "Ath", 'ibmq_valencia': "Val", 'ibmq_vigo': 'Vig','ibmq_ourense': "Our",
"ibmqx2": 'Yor', 'ibmq_quito': "Qui", 'ibmq_lima': "Lim", 'ibmq_belem': "Bel",'ideal_device': "Ide" }
# specify the device: here first the ideal noise-free device
project_device = 'ideal_device'
device_name = dev_dic[project_device]
# specify the nb of id gates between state creation and measurements
# zero for the ideal device of course
id_gates = 0
str_nb_id = str(id_gates)
zfilled = str_nb_id.zfill(4-len(str_nb_id))
# tail of the file names for RAM storage
mitig_name = program_name + "_" + device_name
project_name = mitig_name + "_" + unique_char + zfilled
print(mitig_name)
print(project_name)
# establish the result label list
# meas_calibs will be used for mitigation in the real device section
qr = QuantumRegister(used_qubits) #
meas_calibs, label_list = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')
nb_labels=len(label_list)
print(nb_labels,label_list)
len(meas_calibs)
# permutation list
# here it is simple to write down the list,
# but a version using itertools will be welcome for >5 qubit projects
q_perm = [[0, 1, 2, 3, 4], [0, 1, 3, 2, 4], [0, 1, 4, 2, 3], [0, 2, 3, 1, 4], [0, 2, 4, 1, 3],
[0, 3, 4, 1, 2], [1, 2, 3, 0, 4], [1, 2, 4, 0, 3], [1, 3, 4, 0, 2], [2, 3, 4, 0, 1]]
```
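As the comment in the cell above suggests, the hand-written list can be generated with `itertools`; the sketch below (the function name is ours) reproduces `q_perm` for 5 qubits and generalizes to larger projects:

```python
from itertools import combinations

def build_permutations(n_qubits=5, first_group=3):
    """Each choice of `first_group` qubits for the first state,
    followed by the remaining qubits for the second state."""
    perms = []
    for combo in combinations(range(n_qubits), first_group):
        rest = [q for q in range(n_qubits) if q not in combo]
        perms.append(list(combo) + rest)
    return perms

q_perm_generated = build_permutations()
```

Since `combinations` emits tuples in lexicographic order, the generated list matches the hand-written `q_perm` element by element.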
## Create the quantum states
```
# define the two subsets of 10 separable states
if program_name == "QAD":
state_1a = ["W","Phi+"]
state_1b = ["GHZ","Psi+"]
elif program_name in ("ALT", "AL2"):
state_1a = ["W","Psi+"]
state_1b = ["Wbar","Phi+"]
l_states = state_1a+state_1b
l_states
# version 20 circuits for demonstration
# (in the version run on real devices: two batches of 10 circuits)
# these circuits limited to state creation are ready to be saved
# for ultimately building circuits adapted to noisy simulator and real devices
# as option, these circuits will include a row of id gates between creation and measurements
circ_ori = []
for i_s in range(0,len(l_states),2):
for perm in q_perm:
mycircuit = QuantumCircuit(used_qubits, used_qubits)
mycircuit = new_cut.circuit_builder(mycircuit, perm, l_states[i_s],l_states[i_s+1])
circ_ori.append(mycircuit)
# add measurement section to the circuit set newly created:
nb_states = len(circ_ori)
circ_ideal = copy.deepcopy(circ_ori)
for i_state in range(nb_states):
new_cut.add_barrier_and_measure(circ_ideal[i_state],qubit_list)
```
## Obtain result distributions on noise free simulator
```
# execute on noise free simulator
s_sim = 12000
job_simul = execute(circ_ideal, backend=simulator, shots=s_sim)
tot_results_simul = job_simul.result()
# establish a dictionary of count results on noise free simulator:
# (this step is only useful if ram storage is performed)
void_counts = dict(zip(label_list, np.full(2**used_qubits, 0.0)))
tot_results_sim_dic = {}
ideal_dic = {}
for i_state in range(nb_states):
counts_simul = copy.deepcopy(void_counts)
counts_simul.update(tot_results_simul.get_counts(i_state))
ideal_dic[str(i_state)]=counts_simul
```
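The `void_counts` pattern above is needed because count dictionaries omit outcomes that never occurred; a toy 2-qubit illustration with hypothetical counts:

```python
# toy 2-qubit labels, standing in for the notebook's 5-qubit label_list
label_list_toy = ['00', '01', '10', '11']
void_counts_toy = dict(zip(label_list_toy, [0.0] * len(label_list_toy)))

measured = {'00': 7, '11': 5}   # backends omit outcomes that never occurred
padded = dict(void_counts_toy)
padded.update(measured)         # same copy-then-update pattern as above

assert list(padded.values()) == [7, 0.0, 0.0, 5]
```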
Example of circuit for separable state of the first type : $|W\rangle\otimes|\Psi^+\rangle$
```
i_state_test = 1
print(device_name, "circuit #",i_state_test)
circ_ideal[i_state_test].draw(output='mpl')
print(device_name, "circuit #",i_state_test)
plot_histogram(ideal_dic[str(i_state_test)],
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
Example of circuit for separable state of the second type : $|W\rangle^{\otimes X}\otimes|\Phi^+\rangle$
```
i_state_test = 11
print(device_name, "circuit #",i_state_test)
circ_ideal[i_state_test].draw(output='mpl')
print(device_name, "circuit #",i_state_test)
plot_histogram(ideal_dic[str(i_state_test)],
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
### Obtain the matrix of probability distribution of shape(nb_state,nb_labels) used by the classifier
```
def print_first_and_last_row(PDM):
print("first and last rows of the probability distribution matrix of dimension "+str(nb_states)+"x"+str(nb_labels))
print(np.round(PDM[0:1,:],4))
print(" ...")
print(np.round(PDM[-1:,:],4))
PD_ideal = np.ndarray((nb_states,nb_labels))
for i_state in range(nb_states):
PD_ideal[i_state, :] = list(ideal_dic[str(i_state)].values())
# now a little trick to get the ideal values from the simulator approximated values
with np.errstate(divide='ignore'): # ignore the divide by zero warning
PD_ideal = 1/np.round(s_sim/(PD_ideal))
# have a look at the matrix head and tail:
print_first_and_last_row(PD_ideal)
```
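The rounding trick works because the ideal probabilities are simple fractions 1/k; a plain-Python illustration with a hypothetical count of 1501 out of 12000 shots:

```python
s_sim = 12000                           # shots used on the simulator
observed = 1501                         # hypothetical count for one outcome (~1/8 of shots)
p_empirical = observed / s_sim          # approximate probability, about 0.1251
p_ideal = 1 / round(s_sim / observed)   # snaps to the nearest 1/k fraction
assert p_ideal == 0.125
```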
# Real device section
```
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
print(provider.backends())
project_device = 'ibmq_quito'  # you may choose a different backend here
device_name = dev_dic[project_device]
mitig_name = program_name + "_" + device_name
print(mitig_name)
# determine here the backend
device = provider.get_backend(project_device) # the backend names are listed here above
properties = device.properties()
coupling_map = device.configuration().coupling_map
```
### Load circuits run on real device
```
id_gates = 0 # choice of 0 or 256 at this time
str_nb_id = str(id_gates)
zfilled = str_nb_id.zfill(4-len(str_nb_id))
project_name = mitig_name + "_" + unique_char + zfilled
print(project_name)
circuit_dic = json_dic_loader("circuit_"+ project_name)
real_circs = []
for i_state in list(range(nb_states)):
real_circs.append(QuantumCircuit().from_qasm_str(circuit_dic[str(i_state)]))
for i_state in range(20):
print(i_state,
"depth",real_circs[i_state].depth(),
"size", real_circs[i_state].size(),
"cx",real_circs[i_state].num_nonlocal_gates(),
json.dumps(real_circs[i_state].count_ops()))
i_state_test = 11 # choose a particular state to study here
print(project_device, "circuit #",i_state_test,
"circuit depth:",real_circs[i_state_test].depth())
print('gates = ',real_circs[i_state_test].count_ops())
real_circs[i_state_test].draw(output='mpl')
```
## Histogram on simulator
```
job_simul = execute(real_circs[i_state_test], backend=simulator, shots=s_sim)
print(project_device, "circuit #",i_state_test, "on noise free simulator")
simul_results = job_simul.result().get_counts()
plot_histogram(simul_results,
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
# Results on real device
### Obtain mitigation filter
```
# retrieve the corresponding measurement mitigation filter obtained at experimental time
# use a fake job because the calibration results were stored as dictionary
simulator = Aer.get_backend('qasm_simulator')
fake_job_cal = execute(meas_calibs, backend=simulator, shots=1)
fake_cal_results = fake_job_cal.result()
cal_results_dic = json_dic_loader("cal_results_dic_"+mitig_name)
if 'date' in cal_results_dic.keys():
str(cal_results_dic['date'])
cal_results = fake_cal_results.from_dict(cal_results_dic)
meas_fitter = CompleteMeasFitter(cal_results, label_list, qubit_list=qubit_list, circlabel='mcal')
meas_filter = meas_fitter.filter
# have a look at the average measurement fidelity of this device:
print("Average Measurement Fidelity was: %f" % meas_fitter.readout_fidelity(), "for",project_device)
```
### Obtain the matrix of probability distribution of shape(nb_state,nb_labels) used by the classifier
```
empirical_dic = json_dic_loader('experimental_'+project_name)
test_dic = json_dic_loader('test_'+project_name)
def rectify_counts(tot_res, test_cqi,mitigation,m_filter) :
void_counts = dict(zip(label_list, np.zeros(2**used_qubits)))
try:
counts_results_real_test = tot_res[test_cqi]
except KeyError as error:
counts_results_real_test = tot_res[str(test_cqi)]
raw_counts_test = copy.deepcopy(void_counts)
raw_counts_test.update(counts_results_real_test)
if mitigation:
mitigated_results_test = meas_filter.apply(raw_counts_test, method = 'least_squares')
returned_counts = copy.deepcopy(void_counts)
returned_counts.update(mitigated_results_test)
else:
returned_counts = copy.deepcopy(raw_counts_test)
return returned_counts
def get_clean_matrix(dic, mitigation,m_filter):
clean_matrix = np.ndarray((nb_states,nb_labels))
for i_state in range(nb_states):
rectified_counts = rectify_counts(dic,i_state, mitigation,m_filter) # get a rectified counts dictionary
clean_matrix[i_state, :] = list(rectified_counts.values())
clean_matrix = clean_matrix/clean_matrix.sum(axis=1, keepdims=True)
return clean_matrix
def obtain_pooled_PDM(mitigation):
PD_exper = get_clean_matrix(empirical_dic, mitigation=mitigation,
m_filter=meas_filter)
PD_test = get_clean_matrix(test_dic, mitigation=mitigation,
m_filter=meas_filter)
return PD_exper + PD_test
PD_tot = obtain_pooled_PDM(mitigation=False)/2
PD_totm = obtain_pooled_PDM(mitigation=True)/2
print(project_device, "circuit #",i_state_test,
"circuit depth:",real_circs[i_state_test].depth())
print('gates = ',real_circs[i_state_test].count_ops())
ideal_results = dict(zip(label_list,PD_ideal[i_state_test]))
real_results = dict(zip(label_list,PD_tot[i_state_test]))
mit_results = dict(zip(label_list,PD_totm[i_state_test]))
plot_histogram([ideal_results, real_results, mit_results],
legend=['ideal device','real results on\n '+ project_device, 'after measurement\n error mitigation'],
color =["b","r","g"],
bar_labels=False,
figsize=(10.,5.))
# Matrix of distances between distributions
# Numbers in squares are "distances expressed per thousand", thus from 0 to 1000
Y_dist_tot = cdist(PD_tot,PD_ideal, metric='sqeuclidean')
# !cdist(1st matrix -> Y rows, 2d matrix -> Y columns)
# adapted from https://stackoverflow.com/questions/40887753/display-matrix-values-and-colormap:
fig, ax = plt.subplots(figsize=(10.,10.))
min_val, max_val = np.min(Y_dist_tot), np.max(Y_dist_tot)
ax.matshow(Y_dist_tot, cmap=plt.cm.Reds)
for i in range(20):
for j in range(20):
c = round(1000*Y_dist_tot[j,i])
ax.text(i, j, str(c), va='center', ha='center')
import qiskit.tools.jupyter
%qiskit_version_table
```
```
# default_exp layers
```
# Useful Layers
> Some Pytorch layers needed for MetNet
```
#export
from fastai.vision.all import *
from fastai.text.all import WeightDropout, RNNDropout
```
## ConvLSTM / ConvGRU layers
### CGRU
https://github.com/jhhuang96/ConvLSTM-PyTorch/blob/master/ConvRNN.py
In a GRU cell the output and the hidden state are identical, so the last output must equal the last hidden state.
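For reference, here is a minimal NumPy sketch of the update the cell implements, with plain matrix products standing in for the convolutions (names and shapes are illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, Wz, Wr, W1, W2):
    """One GRU update; matrix products stand in for the cell's convolutions."""
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                      # update gate (conv_zr, first half)
    r = sigmoid(Wr @ xh)                      # reset gate  (conv_zr, second half)
    h_cand = np.tanh(W1 @ x + r * (W2 @ h))   # candidate state (conv_h1, conv_h2)
    return (1 - z) * h_cand + z * h           # new hidden state == new output

rng = np.random.default_rng(0)
x, h = rng.normal(size=4), np.zeros(8)
Wz, Wr = rng.normal(size=(8, 12)), rng.normal(size=(8, 12))
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))
h_new = gru_step(x, h, Wz, Wr, W1, W2)
assert h_new.shape == (8,)
```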
```
#export
class ConvGRUCell(Module):
def __init__(self, input_dim, hidden_dim, kernel_size=(3,3), bias=True, activation=F.tanh, batchnorm=False):
"""
Initialize ConvGRU cell.
Parameters
----------
input_dim: int
Number of channels of input tensor.
hidden_dim: int
Number of channels of hidden state.
kernel_size: (int, int)
Size of the convolutional kernel.
bias: bool
Whether or not to add the bias.
"""
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.kernel_size = kernel_size if isinstance(kernel_size, (tuple, list)) else [kernel_size]*2
self.padding = self.kernel_size[0] // 2, self.kernel_size[1] // 2
self.bias = bias
self.activation = activation
self.batchnorm = batchnorm
self.conv_zr = nn.Conv2d(in_channels=self.input_dim + self.hidden_dim,
out_channels=2 * self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.conv_h1 = nn.Conv2d(in_channels=self.input_dim,
out_channels=self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.conv_h2 = nn.Conv2d(in_channels=self.hidden_dim,
out_channels=self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.reset_parameters()
def forward(self, input, h_prev=None):
#init hidden on forward
if h_prev is None:
h_prev = self.init_hidden(input)
combined = torch.cat((input, h_prev), dim=1) # concatenate along channel axis
combined_conv = F.sigmoid(self.conv_zr(combined))
z, r = torch.split(combined_conv, self.hidden_dim, dim=1)
h_ = self.activation(self.conv_h1(input) + r * self.conv_h2(h_prev))
h_cur = (1 - z) * h_ + z * h_prev
return h_cur
def init_hidden(self, input):
bs, ch, h, w = input.shape
return one_param(self).new_zeros(bs, self.hidden_dim, h, w)
def reset_parameters(self):
#self.conv.reset_parameters()
nn.init.xavier_uniform_(self.conv_zr.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_zr.bias.data.zero_()
nn.init.xavier_uniform_(self.conv_h1.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_h1.bias.data.zero_()
nn.init.xavier_uniform_(self.conv_h2.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_h2.bias.data.zero_()
if self.batchnorm:
self.bn1.reset_parameters()
self.bn2.reset_parameters()
cgru_cell = ConvGRUCell(16, 32, 3)
cgru_cell(torch.rand(1, 16, 16, 16)).shape
```
Let's check:
```
#export
class ConvGRU(nn.Module):
def __init__(self, input_dim, hidden_dim, kernel_size, n_layers, batch_first=True,
bias=True, activation=F.tanh, input_p=0.2, hidden_p=0.1, batchnorm=False):
super(ConvGRU, self).__init__()
self._check_kernel_size_consistency(kernel_size)
# Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers
kernel_size = self._extend_for_multilayer(kernel_size, n_layers)
hidden_dim = self._extend_for_multilayer(hidden_dim, n_layers)
activation = self._extend_for_multilayer(activation, n_layers)
if not len(kernel_size) == len(hidden_dim) == len(activation) == n_layers:
raise ValueError('Inconsistent list length.')
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.kernel_size = kernel_size
self.n_layers = n_layers
self.batch_first = batch_first
self.bias = bias
self.input_p = input_p
self.hidden_p = hidden_p
cell_list = []
for i in range(self.n_layers):
cur_input_dim = self.input_dim if i == 0 else self.hidden_dim[i-1]
cell_list.append(ConvGRUCell(input_dim=cur_input_dim,
hidden_dim=self.hidden_dim[i],
kernel_size=self.kernel_size[i],
bias=self.bias,
activation=activation[i],
batchnorm=batchnorm))
self.cell_list = nn.ModuleList(cell_list)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([nn.Dropout(hidden_p) for l in range(n_layers)])
self.reset_parameters()
def __repr__(self):
s = f'ConvGru(in={self.input_dim}, out={self.hidden_dim[0]}, ks={self.kernel_size[0]}, '
s += f'n_layers={self.n_layers}, input_p={self.input_p}, hidden_p={self.hidden_p})'
return s
def forward(self, input, hidden_state=None):
"""
Parameters
----------
input_tensor:
5-D Tensor either of shape (t, b, c, h, w) or (b, t, c, h, w)
hidden_state:
Returns
-------
last_state_list, layer_output
"""
input = self.input_dp(input)
cur_layer_input = torch.unbind(input, dim=int(self.batch_first))
if hidden_state is None:
hidden_state = self.get_init_states(cur_layer_input[0])
seq_len = len(cur_layer_input)
layer_output_list = []
last_state_list = []
for l, (gru_cell, hid_dp) in enumerate(zip(self.cell_list, self.hidden_dps)):
h = hidden_state[l]
output_inner = []
for t in range(seq_len):
h = gru_cell(input=cur_layer_input[t], h_prev=h)
output_inner.append(h)
cur_layer_input = torch.stack(output_inner) #list to array
if l != self.n_layers - 1: cur_layer_input = hid_dp(cur_layer_input)
last_state_list.append(h)
layer_output = torch.stack(output_inner, dim=int(self.batch_first))
last_state_list = torch.stack(last_state_list, dim=0)
return layer_output, last_state_list
def reset_parameters(self):
for c in self.cell_list:
c.reset_parameters()
def get_init_states(self, input):
init_states = []
for gru_cell in self.cell_list:
init_states.append(gru_cell.init_hidden(input))
return init_states
@staticmethod
def _check_kernel_size_consistency(kernel_size):
if not (isinstance(kernel_size, tuple) or (isinstance(kernel_size, list)
and all([isinstance(elem, tuple) for elem in kernel_size]))):
raise ValueError('`kernel_size` must be tuple or list of tuples')
@staticmethod
def _extend_for_multilayer(param, num_layers):
if not isinstance(param, list):
param = [param] * num_layers
return param
cgru = ConvGRU(16, 32, (3, 3), 2)
cgru
layer_output, last_state_list = cgru(torch.rand(1,10,16,6,6))
layer_output.shape
last_state_list.shape
layer_output, last_state_list = cgru(torch.rand(1,10,16,6,6), last_state_list)
```
# Export -
```
# hide
from nbdev.export import *
notebook2script()
```
# Transfer Learning
In most cases, training an entire network from scratch is a waste of time, resources, and effort.
For example, training a state-of-the-art ConvNet on a large dataset such as ImageNet takes weeks on multiple GPUs.
Instead, most people use a pretrained network, either as a feature extractor or as the starting point for fine-tuning.
In this post, we'll build a classifier for flowers using a pretrained VGGNet.
[VGGNet](https://arxiv.org/pdf/1409.1556.pdf) is a network trained on [ImageNet data](http://www.image-net.org/).
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is simple and performs well, which earned it second place in the ImageNet competition.
The idea here is that we keep all the convolutional layers, but replace the final fully-connected layers with our own classifier.
This way we can use VGGNet as a feature extractor for images and easily train a simple classifier on top of it.
Specifically, we'll take the first fully-connected layer, with its 4096 nodes and ReLU thresholding.
We can use these values as a code for each image and build a classifier on top of those codes.
If you want to learn more about transfer learning, read [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf).
## Pretrained VGGNet
We'll use a VGG network that has already been trained.
A TensorFlow port of the VGG network is available [here](https://github.com/machrisaa/tensorflow-vgg).
After cloning it next to this notebook, let's build our classifier.
```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
```
## Flower power
여기서는 VGGNet을 사용하여 꽃의 이미지를 분류할 것입니다.
꽃 데이터 세트를 얻으려면 아래 코드를 이용합니다.
이 데이터셋은 [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining)에서 제공됩니다.
```
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
```
## ConvNet Codes
Below, we run every image in the dataset through the network and get the code for each image.
That is, we run the images through the VGGNet convolutional layers and record the values of the first fully-connected layer.
We can then write these codes to a file to use later when building the classifier.
Here we're using the vgg16 module from tensorflow_vgg.
The network takes images of size 224×224×3 as input, and it has five sets of convolutional layers.
The network implemented here has the following structure ([the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py)):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want is the first fully-connected layer after the ReLU (`self.relu6`).
To build the network, we use the code below.
```
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
```
This creates the `vgg` object, then builds the graph with `vgg.build(input_)`.
Then we take the values from the layer:
```
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
```
```
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
```
Now let's pass the images through the VGG network to extract and save the features.
We'll call these extracted features `codes`.
```
# Set the batch size; increase it if you have enough GPU memory.
batch_size = 128
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None,224,224,3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# ii=1, file= file_name01
# ii=2, file= file_name02 ...
#
# Add the image to the current batch.
# utils.load_image crops the input image around its center.
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Run the batch through the network.
if ii % batch_size == 0 or ii == len(files):
# Pass the batch through the VGG network.
images = np.concatenate(batch)
# TODO: get the values from the relu6 layer.
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch.
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
```
## Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them.
The classifier is just like a simple MLP.
Below, we'll do most of the work.
```
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
```
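`tofile`/`fromfile` store raw bytes only, which is why the shape must be recovered from `len(labels)` above; a self-contained round-trip sketch (demo file name, binary mode used for portability):

```python
import numpy as np
import os, tempfile

codes_demo = np.arange(12, dtype=np.float32).reshape(3, 4)   # 3 "images", 4-D codes
path = os.path.join(tempfile.mkdtemp(), 'codes_demo')
with open(path, 'wb') as f:       # binary mode is the safe choice for tofile
    codes_demo.tofile(f)          # raw bytes only: shape information is lost
loaded = np.fromfile(path, dtype=np.float32).reshape((3, -1))
assert (loaded == codes_demo).all()
```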
### Data prep
As usual, we now need to one-hot encode the labels and create validation/test sets.
First, let's create the labels.
We use scikit-learn's [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors.
```
from sklearn import preprocessing
# Your one-hot encoded labels array here
lb = preprocessing.LabelBinarizer()
labels_vecs = lb.fit_transform(labels)
print(labels_vecs)
```
Now we want to create the training, validation, and test sets.
An important point to note here is that our labels and data are not yet randomized.
We need to shuffle the data so that the validation and test sets contain data from all classes.
Otherwise we could end up with a test set that is entirely one class. In general, we should also make sure each subset has the same class distribution as the full dataset.
The easiest way to do this is with scikit-learn's [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html).
With StratifiedShuffleSplit, we can create a splitter like this:
```
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
```
And then split the data like this:
```
splitter = ss.split(x, y)
```
`ss.split` returns a generator of indices. We can get the split sets by passing these indices into our arrays. Since it is a generator, we need to iterate over it or call `next(splitter)` to get the indices. Read the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split) and the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html).
```
from sklearn.model_selection import StratifiedShuffleSplit
# create the splitter object: StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_index, val_index = next(ss.split(codes, labels_vecs))
val_index, test_index = val_index[:int(len(val_index)/2)], val_index[int(len(val_index)/2):]
train_x, train_y = codes[train_index], labels_vecs[train_index]
val_x, val_y =codes[val_index], labels_vecs[val_index]
test_x, test_y = codes[test_index], labels_vecs[test_index]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
```
The data is split nicely into train, validation, and test sets.
### Classifier layers
OK, let's recap.
First, we passed the images through VGGNet and extracted features as 4096-D vectors, which we called codes.
Passing these features through a fully-connected layer finally gives us the output for each class.
Before training the fully-connected layers into a flower classifier, we first ran all the images through VGGNet and saved the extracted features.
Now that we have these convolutional codes stored, we just need to build the classifier out of fully-connected layers.
We use the codes as the input and the image labels as the target.
So this will be a typical multi-layer perceptron network.
```
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256)
# output layer logits
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
# cross entropy loss
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
# training optimizer
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```
### Batches!
Here is a simple way I wrote to do batching. This code is written so that all of the data is included.
Otherwise, a few examples at the end might get thrown away just to make the batches come out even.
Instead, we'll extend the last batch to include the remaining data.
```
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
```
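A quick check, on a hypothetical toy array, that the last batch really does pick up the remainder (the function is repeated here so the sketch is self-contained):

```python
import numpy as np

def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X, Y = x[ii:], y[ii:]  # last batch absorbs the leftover samples
        yield X, Y

x = np.arange(23)
batches = [bx for bx, _ in get_batches(x, x, n_batches=5)]
print([len(b) for b in batches])  # [4, 4, 4, 4, 7] -- nothing is thrown away
```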
### Training
Now, let's start training.
```
saver = tf.train.Saver()
epochs = 10
iteration = 0
with tf.Session() as sess:
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x,y in get_batches(train_x, train_y):
feed = {inputs_:x, labels_:y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed )
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_ : val_x, labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
```
### Testing
Let's test how well the model was trained.
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
```
Now let's classify an actual image.
```
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
```
The model says the flower is either a tulip or a rose. Nicely classified!
# 03. Deploying Jupyter
## Overview
In this notebook, you will learn how to:
- Configure remote Jupyter deployment.
- Deploy Jupyter on a compute node.
- Access deployed Jupyter Notebook.
## Import idact
It's recommended that *idact* is installed with *pip*. Alternatively, make sure the dependencies are installed: `pip install -r requirements.txt`, and add *idact* to path, for example:
```
import sys
sys.path.append('../')
```
We will use a wildcard import for convenience:
```
from idact import *
import bitmath
```
## Load the cluster
Let's load the environment and the cluster. Make sure to use your cluster name.
```
load_environment()
cluster = show_cluster("hpc")
cluster
access_node = cluster.get_access_node()
access_node.connect()
```
## Configure remote Jupyter deployment
### Install Jupyter on the cluster
Make sure Jupyter is installed with the Python 3.5+ distribution you intend to use on the cluster. The recommended version is JupyterLab.
See [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html), [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html).
If you encounter any problems with deployment, this may be due to some library versions being incompatible. You can try installing frozen versions included with the *idact* repo in `envs/dask_jupyter_tornado.txt`:
```
pip install -r dask_jupyter_tornado.txt
```
You may need to add `--user`, if you are using a system Python distribution.
### Specify setup actions
It's rare that the default Python distribution is the one you want to use for computation.
Depending on your setup, you probably need to modify the `PATH` and `PYTHONPATH` environment variables, `source activate` a Conda environment, or perform other specific steps.
In order for *idact* to find and execute the proper binaries, you'll need to specify these steps as a list of Bash script lines. Make sure to modify the list below to fit your needs.
```
cluster.config.setup_actions.jupyter = ['module load plgrid/tools/python-intel/3.6.2']
save_environment()
```
### Choose JupyterLab or Jupyter Notebook
By default, JupyterLab is used. If you want to use regular Jupyter Notebook, set the config entry below to False.
```
cluster.config.use_jupyter_lab = True
save_environment()
```
## Allocate node for Jupyter
We will deploy Jupyter on a single node. Make sure to adjust the `--account` parameter, same as in the previous notebook.
```
nodes = cluster.allocate_nodes(nodes=1,
cores=2,
memory_per_node=bitmath.GiB(10),
walltime=Walltime(minutes=10),
native_args={
'--account': 'intdata'
})
nodes
nodes.wait()
nodes
```
Let's test the connection, just in case:
```
nodes[0].run('hostname')
```
## Deploy Jupyter
After the initial setup, Jupyter can be deployed with a single command:
```
nb = nodes[0].deploy_notebook()
nb
```
If the deployment succeeded, you can open the deployed notebook in the browser:
```
nb.open_in_browser()
```
Confirm that there are no issues with the deployed Jupyter Notebook instance. Try to start a kernel and see if it looks stable. Make sure the version of Python you expected is used.
If the Jupyter deployment failed for some reason, you will find the `jupyter` command log in the debug log file: `idact.log`.
If your last failure is a timeout, e.g. `2018-11-12 22:14:00 INFO: Retried and failed: config.retries(...)`, check out the tutorial `07. Adjusting timeouts` if you believe the timeout might be too restrictive for your cluster.
After you're done, you can cancel the deployment by calling `cancel`, though it will be killed anyway when the node allocation ends.
```
nb.cancel()
```
Alternatively, the following will just close the tunnel, without attempting to kill Jupyter:
```
nb.cancel_local()
```
## Cancel the allocation
It's important to cancel an allocation if you're done with it early, in order to minimize the CPU time you are charged for.
```
nodes.running()
nodes.cancel()
nodes.running()
```
## Next notebook
In the next notebook, we will deploy a Dask.distributed scheduler and workers on several compute nodes, and browse their dashboards.
## Custom camera projection
User defined ray distribution: ray origins and directions in camera textures.
```
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from plotoptix import NpOptiX
from plotoptix.materials import m_flat
from plotoptix.geometry import PinnedBuffer
```
Create the raytracer:
```
width = 1000
height = 1000
def update_image(rt: NpOptiX) -> None:
img.set_data(rt._img_rgba)
plt.draw()
rt = NpOptiX(on_launch_finished=update_image, width=width, height=height, start_now=False)
```
Set up the usual image-making components: camera, lights and ambient color, and postprocessing. This setup is only needed to show what the scene looks like; it is not required for computing and displaying the hit and distance information.
```
rt.set_param(min_accumulation_step=4, max_accumulation_frames=100)
rt.set_float("tonemap_exposure", 1.0)
rt.set_float("tonemap_gamma", 2.2)
rt.add_postproc("Gamma")
rt.setup_light("light1", pos=[-15, 10, 30], color=2, radius=8)
rt.setup_light("light2", pos=[-15, -10, 30], color=2, radius=8)
rt.set_ambient([0.03, 0.04, 0.05])
rt.set_background(0)
rt.setup_camera("cam1", eye=[20, 25, 60], fov=25)
```
Create and upload two surfaces. Use the default, *diffuse* material for the moment.
```
# a smooth surface that will be used as a source of rays later:
rxz = (-10, 10)
n = 500
xz = np.linspace(rxz[0], rxz[1], n)
X, Z = np.meshgrid(xz, xz)
Y1 = np.sin(np.sqrt(X**2 + Z**2)) - 1
rt.set_data_2d("surface1", Y1, c=[0.9, 0.8, 0.7],
range_x=rxz, range_z=rxz,
make_normals=True)
# second surface, placed above the first one (a bit coarser, for fun)
rxz2 = (-15, 15)
n2 = 20
xz2 = np.linspace(rxz2[0], rxz2[1], n2)
X2, Z2 = np.meshgrid(xz2, xz2)
Y2 = np.sin(np.sqrt((1.5*X2+0.3)**2 + (Z2+0.3)**2)) + 2
rt.set_data_2d("surface2", Y2, c=[0.7, 0.8, 0.9],
range_x=rxz2, range_z=rxz2,
make_normals=False)
```
Show the output image here:
```
plt.figure(1)
img = plt.imshow(np.zeros((height,width)), cmap=plt.get_cmap("plasma"))
plt.tight_layout()
```
Start the ray tracing:
```
rt.start()
```
Have a look from another angle to see the other surface better:
```
rt.update_camera("cam1", eye=[20, -15, 60], fov=25)
```
Now, change the configuration to the **custom projection** camera.
- prepare textures with ray origins and directions
- use flat shading material for performance
- display hit info instead of RGB data
```
# use mesh data of the `surface1` geometry to create textures
with PinnedBuffer(rt.geometry_data["surface1"], "Positions") as P:
eye = np.zeros((n,n,4), dtype=np.float32)
eye[:,:,:3] = P.reshape(n,n,3)
rt.set_texture_2d("eye", eye)
with PinnedBuffer(rt.geometry_data["surface1"], "Vectors") as N:
rdir = np.zeros((n,n,4), dtype=np.float32)
rdir[:,:,:3] = N.reshape(n,n,3)
rt.set_texture_2d("dir", rdir)
rt.setup_camera("cam2", cam_type="CustomProjXYZtoDir", textures=["eye", "dir"])
```
Display distance from the ray origin to the first hit:
```
# NOTE: no need for multiple passes if only the distance is calculated, so set up just 1 pass:
rt.set_param(min_accumulation_step=1, max_accumulation_frames=1)
# flat shading material - no secondary rays are traced
rt.setup_material("flat", m_flat)
rt.update_data_2d("surface1", mat="flat")
rt.update_data_2d("surface2", mat="flat")
# and the new callback function
def update_image(rt: NpOptiX) -> None:
dist = rt._hit_pos[:,:,3].reshape(rt._height, rt._width) # hit distance from the ray origin
fid = rt._geo_id[:,:,1].reshape(rt._height, rt._width) # face id data, or empty region signature
dmax = np.amax(dist[fid < 0xFFFFFFFF])
dmin = np.amin(dist[fid < 0xFFFFFFFF])
img.set_data(dist) # update figure using distance data
img.set_clim(vmin=dmin, vmax=dmax)
plt.draw()
rt.set_launch_finished_cb(update_image)
```
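The callback masks pixels that hit no geometry by testing the face-id buffer against the `0xFFFFFFFF` sentinel before taking the min/max distance. The same masking in isolation, on small hypothetical arrays:

```python
import numpy as np

# fake a 2x2 hit-distance image where one pixel missed all geometry
dist = np.array([[10.0, 12.0],
                 [ 0.0, 11.0]], dtype=np.float32)
fid = np.array([[3, 7],
                [0xFFFFFFFF, 2]], dtype=np.uint32)  # sentinel marks the empty pixel

valid = fid < 0xFFFFFFFF
dmin, dmax = dist[valid].min(), dist[valid].max()
print(dmin, dmax)  # 10.0 12.0 -- the empty pixel's 0.0 is ignored
```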
Close the ray-tracer.
```
rt.close()
```
# PeriodicityDetector QuickStart
-----------------------------------
##### In this notebook we will demonstrate initializing an Observations class - a time-resolved observation series - and the PeriodicityDetector class to detect periodicity in the series.
### 1. Using the PeriodicityDetector class to run PDC on a simulated velocity time series
The `Observations` class enables one to load observation data from a given folder
and place it into a TimeSeries object, or to load an existing time series.
In this case we will choose the latter.
```
import sys
from sparta import Observations
from sparta.UNICOR.Spectrum import Spectrum
from sparta.UNICOR.Template import Template
```
In the example below we simulate a sinusoidal wave velocity time series and store it in a time series object.
```
import random
import numpy as np
from sparta.Auxil.TimeSeries import TimeSeries
period = 5
size = 15
times = [(random.random() * 100) for _ in range(size)]
vals = [np.sin(t * 2 * np.pi / period) for t in times]
time_series = TimeSeries(size=size, times=times, vals=vals, period=period)
time_series.plot_velocities()
```
<br />
At this point we can use the Observations class to attach a PeriodicityDetector to the time series:
<br />
```
obs = Observations(time_series=time_series)
obs.initialize_periodicity_detector(periodogram_grid_resolution=100, freq_range=(0.01, 1))
obs.periodicity_detector.period = [5]
```
<br />
Now it is possible to use the PeriodicityDetector to detect periodicity in the data with the GLS and PDC methods:
<br />
```
obs.periodicity_detector.run_PDC_process(calc_biased_flag=False, calc_unbiased_flag=True)
obs.periodicity_detector.run_GLS_process()
obs.periodicity_detector.periodogram_plots()
```
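As an independent cross-check (not part of sparta), the same 5-day period can be recovered from a simulated series of this kind with SciPy's classic Lomb-Scargle periodogram; note that `scipy.signal.lombscargle` expects angular frequencies:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
period = 5.0
times = np.sort(rng.random(50) * 100)      # 50 random epochs over 100 days
vals = np.sin(times * 2 * np.pi / period)  # noiseless sinusoidal velocities

freqs = np.linspace(0.01, 1.0, 2000)                 # trial frequencies in cycles/day
pgram = lombscargle(times, vals, 2 * np.pi * freqs)  # convert to angular frequency
best_period = 1.0 / freqs[np.argmax(pgram)]
print(round(best_period, 2))  # close to 5.0
```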
### 2. Using the PeriodicityDetector class to run USuRPer on simulated spectra
In the following code segment we simulate a single-lined spectroscopic binary system (SB1) and use sparta classes to detect
periodicity in the data with USuRPer.
First we initiate a Template object containing Phoenix library synthetic spectra.
```
# Assigning sun-like stellar parameters to the simulated spectra
temp = 4900
log_g = 4.5
metal = 0
alpha = 0
# Choosing wavelength range (Angstrom units)
min_val = 4900
max_val = 5100
# Loading a Phoenix synthetic spectrum
template = Template(temp=temp, log_g=log_g, metal=metal, alpha=alpha, min_val=min_val, max_val=max_val)
```
At this point, we use the template to simulate a single-lined spectroscopic binary system (SB1) by generating 50 visits over 100 days
and shifting them according to a sinusoidal velocity curve.
We represent the series as a TimeSeries object and then assign it to an Observations object.
```
p = 5 # Assigning a five-day long period to the system.
half_amp = 100 # Assigning a 100 km/s magnitude to the velocity half amplitude change
N = 50 # Setting the number of observations to be 50
times = [(random.random() * 100) for _ in range(N)]
vals = [half_amp * np.sin(2 * t * np.pi / p) for t in times]
visit_spec_list = []
for i, v in enumerate(vals):
new_wl = template.doppler(v)
new_temp = Spectrum(wv=new_wl, sp=template.add_noise(-1)).SpecPreProccess()
visit_spec_list.append(new_temp)
ts = TimeSeries(size=N, times=times, vals=visit_spec_list,
calculated_vrad_list=[])
obs = Observations(time_series=ts)
print("Simulated SB1 TimeSeries is ready and assigned to an Observations object")
```
Now we can run USURPER to detect periodicity in the observations, and print out the resulting periodogram.
```
obs.initialize_periodicity_detector(freq_range=(1/1000, 1), periodogram_grid_resolution=1000)
obs.periodicity_detector.period = [5]
print("Starting USuRPER... ")
obs.periodicity_detector.run_USURPER_process(calc_biased_flag=False, calc_unbiased_flag=True)
obs.periodicity_detector.periodogram_plots()
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import sys
# sample source
m = 2000
dim = 200
n = m
seed = 11
#ss = data.SSGaussMeanDiff(dim, my=1.0)
ss = data.SSGaussVarDiff(dim)
#ss = data.SSBlobs()
dim = ss.dim()
tst_data = ss.sample(m, seed=seed+1)
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=100)
#te = tst_data
```
## smooth CF test
```
J = 7
alpha = 0.01
smooth_cf = tst.SmoothCFTest.create_randn(te, J, alpha=alpha, seed=seed)
smooth_cf.perform_test(te)
```
## grid search to choose the best Gaussian width
```
def randn(J, d, seed):
rand_state = np.random.get_state()
np.random.seed(seed)
M = np.random.randn(J, d)
np.random.set_state(rand_state)
return M
T_randn = randn(J, dim, seed)
mean_sd = tr.mean_std()
scales = 2.0**np.linspace(-4, 4, 30)
#list_gwidth = mean_sd*scales*(dim**0.5)
list_gwidth = np.hstack( (mean_sd*scales*(dim**0.5), 2**np.linspace(-8, 8, 20) ))
list_gwidth.sort()
besti, powers = tst.SmoothCFTest.grid_search_gwidth(tr, T_randn, list_gwidth, alpha)
# plot
plt.plot(list_gwidth, powers, 'o-')
plt.xscale('log', basex=2)
plt.xlabel('Gaussian width')
plt.ylabel('Test power')
plt.title('Mean std: %.3g. Best chosen: %.2g'%(mean_sd, list_gwidth[besti]) )
med = util.meddistance(tr.stack_xy())
print('med distance xy: %.3g'%med)
# actual test
best_width = list_gwidth[besti]
scf_grid = tst.SmoothCFTest(T_randn, best_width, alpha)
scf_grid.perform_test(te)
```
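The `util.meddistance` call above computes the median of pairwise Euclidean distances between the stacked samples, the usual median-heuristic seed for the Gaussian kernel width. A minimal NumPy version of that heuristic (a sketch; the library implementation may subsample for speed):

```python
import numpy as np

def meddistance_np(X):
    """Median of pairwise Euclidean distances between rows of X (median heuristic)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared distance matrix
    d2 = np.maximum(d2, 0)                         # guard against tiny negatives
    iu = np.triu_indices_from(d2, k=1)             # each pair once, skip the diagonal
    return np.median(np.sqrt(d2[iu]))

X = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
print(meddistance_np(X))  # pairwise distances are 5, 10, 5 -> median 5.0
```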
## optimize test frequencies
```
op = {'n_test_freqs': J, 'seed': seed, 'max_iter': 300,
'batch_proportion': 1.0, 'freqs_step_size': 0.1,
'gwidth_step_size': 0.01, 'tol_fun': 1e-4}
# optimize on the training set
test_freqs, gwidth, info = tst.SmoothCFTest.optimize_freqs_width(tr, alpha, **op)
scf_opt = tst.SmoothCFTest(test_freqs, gwidth, alpha=alpha)
scf_opt_test = scf_opt.perform_test(te)
scf_opt_test
# plot optimization results
# trajectories of the Gaussian width
gwidths = info['gwidths']
fig, axs = plt.subplots(2, 2, figsize=(10, 9))
axs[0, 0].plot(gwidths)
axs[0, 0].set_xlabel('iteration')
axs[0, 0].set_ylabel('Gaussian width')
axs[0, 0].set_title('Gaussian width evolution')
# evolution of objective values
objs = info['obj_values']
axs[0, 1].plot(objs)
axs[0, 1].set_title('Objective $\lambda(T)$')
# trajectories of the test locations
# iters x J. X Coordinates of all test locations
locs = info['test_freqs']
for coord in [0, 1]:
locs_d0 = locs[:, :, coord]
J = locs_d0.shape[1]
axs[1, coord].plot(locs_d0)
axs[1, coord].set_xlabel('iteration')
axs[1, coord].set_ylabel('index %d of test_locs'%(coord))
axs[1, coord].set_title('evolution of %d test locations'%J)
print('optimized width: %.3f'%gwidth)
```
## SCF: optimize just the Gaussian width
```
op_gwidth = {'max_iter': 300,'gwidth_step_size': 0.1,
'batch_proportion': 1.0, 'tol_fun': 1e-4}
# optimize on the training set
rand_state = np.random.get_state()
np.random.seed(seed=seed)
T0_randn = np.random.randn(J, dim)
np.random.set_state(rand_state)
med = util.meddistance(tr.stack_xy())
gwidth, info = tst.SmoothCFTest.optimize_gwidth(tr, T0_randn, med**2, **op_gwidth)
# trajectories of the Gaussian width
gwidths = info['gwidths']
fig, axs = plt.subplots(1, 2, figsize=(10, 4))
axs[0].plot(gwidths)
axs[0].set_xlabel('iteration')
axs[0].set_ylabel('Gaussian width')
axs[0].set_title('Gaussian width evolution')
# evolution of objective values
objs = info['obj_values']
axs[1].plot(objs)
axs[1].set_title('Objective $\lambda(T)$')
```
<a href="https://colab.research.google.com/github/moustafa-7/ChatBot-Project/blob/master/Code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import gc
gc.collect()
!pip install argparse
import os
import requests
import time
import argparse
import os
import json
from requests.compat import urljoin
import gensim
from chatterbot import ChatBot
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import re
import nltk
import sklearn
from sklearn.model_selection import train_test_split
nltk.download("stopwords")
from nltk.corpus import stopwords
from sklearn.multiclass import OneVsRestClassifier
!pip install chatterbot
!pip install chatterbot-corpus
!pip install requests
!pip install chatterbot
!gunzip GoogleNews-vectors-negative300.bin.gz
import gensim
import pickle
import re
import nltk
from nltk.corpus import stopwords
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances_argmin
# We will need this function to prepare text at prediction time
def text_prepare(text):
"""Performs tokenization and simple preprocessing."""
replace_by_space_re = re.compile('[/(){}\[\]\|@,;]')
bad_symbols_re = re.compile('[^0-9a-z #+_]')
stopwords_set = set(stopwords.words('english'))
text = text.lower()
text = replace_by_space_re.sub(' ', text)
text = bad_symbols_re.sub('', text)
text = ' '.join([x for x in text.split() if x and x not in stopwords_set])
return text.strip()
# need this to convert questions asked by user to vectors
def question_to_vec(question, embeddings, dim=300):
"""
question: a string
embeddings: dict where the key is a word and a value is its' embedding
dim: size of the representation
result: vector representation for the question
"""
word_tokens = question.split(" ")
question_len = len(word_tokens)
question_mat = np.zeros((question_len,dim), dtype = np.float32)
for idx, word in enumerate(word_tokens):
if word in embeddings:
question_mat[idx,:] = embeddings[word]
# remove zero-rows which stand for OOV words
question_mat = question_mat[~np.all(question_mat == 0, axis = 1)]
# Compute the mean of each word along the sentence
if question_mat.shape[0] > 0:
vec = np.array(np.mean(question_mat, axis = 0), dtype = np.float32).reshape((1,dim))
else:
vec = np.zeros((1,dim), dtype = np.float32)
return vec
class SimpleDialogueManager_2(object):
"""
This is a simple dialogue manager to test the telegram bot.
The main part of our bot will be written here.
"""
def __init__(self):
# Instantiate all the models and TFIDF Objects.
print("Loading resources...")
# Instantiate a Chatterbot for Chitchat type questions
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
chatbot = ChatBot('MLWhizChatterbot')
trainer = ChatterBotCorpusTrainer(chatbot)
trainer.train('chatterbot.corpus.english')
self.chitchat_bot = chatbot
print("Loading Word2vec model...")
# Instantiate the Google's pre-trained Word2Vec model.
self.model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print("Loading Classifier objects...")
# Load the intent classifier and tag classifier
self.intent_recognizer = pickle.load(open('resources/intent_clf.pkl', 'rb'))
self.tag_classifier = pickle.load(open('resources/tag_clf.pkl', 'rb'))
# Load the TFIDF vectorizer object
self.tfidf_vectorizer = pickle.load(open('resources/tfidf.pkl', 'rb'))
print("Finished Loading Resources")
# We created this function just above. We just need to have a function to get most similar question's *post id* in the dataset given we know the programming Language of the question. Here it is:
def get_similar_question(self,question,tag):
# get the path where all question embeddings are kept and load the post_ids and post_embeddings
embeddings_path = 'resources/embeddings_folder/' + tag + ".pkl"
post_ids, post_embeddings = pickle.load(open(embeddings_path, 'rb'))
# Get the embeddings for the question
question_vec = question_to_vec(question, self.model, 300)
# find index of most similar post
best_post_index = pairwise_distances_argmin(question_vec,
post_embeddings)
# return best post id
return post_ids[best_post_index]
def generate_answer(self, question):
prepared_question = text_prepare(question)
features = self.tfidf_vectorizer.transform([prepared_question])
# find intent
intent = self.intent_recognizer.predict(features)[0]
# Chit-chat part:
if intent == 'dialogue':
response = self.chitchat_bot.get_response(question)
# Stack Overflow Question
else:
# find programming language
tag = self.tag_classifier.predict(features)[0]
# find most similar question post id
post_id = self.get_similar_question(question,tag)[0]
# respond with
response = 'I think its about %s\nThis thread might help you: https://stackoverflow.com/questions/%s' % (tag, post_id)
return response
class BotHandler(object):
"""
BotHandler is a class which implements all back-end of the bot.
It has three main functions:
'get_updates' — checks for new messages
'send_message' – posts new message to user
'get_answer' — computes the most relevant on a user's question
"""
def __init__(self, token, dialogue_manager):
self.token = token
self.api_url = "https://api.telegram.org/bot{}/".format(token)
self.dialogue_manager = dialogue_manager
def get_updates(self, offset=None, timeout=30):
params = {"timeout": timeout, "offset": offset}
raw_resp = requests.get(urljoin(self.api_url, "getUpdates"), params)
try:
resp = raw_resp.json()
except json.decoder.JSONDecodeError as e:
print("Failed to parse response {}: {}.".format(raw_resp.content, e))
return []
if "result" not in resp:
return []
return resp["result"]
def send_message(self, chat_id, text):
params = {"chat_id": chat_id, "text": text}
return requests.post(urljoin(self.api_url, "sendMessage"), params)
def get_answer(self, question):
if question == '/start':
return "Hi, I am your project bot. How can I help you today?"
return self.dialogue_manager.generate_answer(question)
def is_unicode(text):
return len(text) == len(text.encode())
# class SimpleDialogueManager(object):
# """
# This is a simple dialogue manager to test the telegram bot.
# The main part of our bot will be written here.
# """
# def generate_answer(self, question):
# if "Hi" in question:
# return "Hello, You"
# else:
# return "Don't be rude. Say Hi first."
def main():
# Put your own Telegram Access token here...
token = '828781554:AAEE4sdEf04fZwldjg_mW_8jB7hM8__nuXc'
simple_manager = SimpleDialogueManager_2()
bot = BotHandler(token, simple_manager)
###############################################################
print("Ready to talk!")
offset = 0
while True:
updates = bot.get_updates(offset=offset)
for update in updates:
print("An update received.")
if "message" in update:
chat_id = update["message"]["chat"]["id"]
if "text" in update["message"]:
text = update["message"]["text"]
if is_unicode(text):
print("Update content: {}".format(update))
bot.send_message(chat_id, bot.get_answer(update["message"]["text"]))
else:
bot.send_message(chat_id, "Hmm, you are sending some weird characters to me...")
offset = max(offset, update['update_id'] + 1)
time.sleep(1)
if __name__ == "__main__":
main()
class SimpleDialogueManager(object):
"""
This is a simple dialogue manager to test the telegram bot.
The main part of our bot will be written here.
"""
def __init__(self):
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
chatbot = ChatBot('MLWhizChatterbot')
trainer = ChatterBotCorpusTrainer(chatbot)
trainer.train('chatterbot.corpus.english')
self.chitchat_bot = chatbot
def generate_answer(self, question):
response = self.chitchat_bot.get_response(question)
return response
#!wget https://github.com/MLWhiz/chatbot/raw/master/data.zip
# !unzip data.zip -d data
# os.rmdir('data')
from google.colab import drive
drive.mount('/content/drive')
dialogues = pd.read_csv("data/dialogues.tsv",sep="\t")
posts = pd.read_csv("data/tagged_posts.tsv",sep="\t")
dialogues.head()
texts = list(dialogues[:200000].text.values) + list(posts[:200000].title.values)
labels = ['dialogue']*200000 + ['stackoverflow']*200000
data = pd.DataFrame({'text':texts,'target':labels})
def text_prepare(text):
"""Performs tokenization and simple preprocessing."""
replace_by_space_re = re.compile('[/(){}\[\]\|@,;]')
bad_symbols_re = re.compile('[^0-9a-z #+_]')
stopwords_set = set(stopwords.words('english'))
text = text.lower()
text = replace_by_space_re.sub(' ', text)
text = bad_symbols_re.sub('', text)
text = ' '.join([x for x in text.split() if x and x not in stopwords_set])
return text.strip()
# Doing some data cleaning
data['text'] = data['text'].apply(lambda x : text_prepare(x))
X_train, X_test, y_train, y_test = train_test_split(data['text'],data['target'],test_size = .1 , random_state=0)
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
# We will keep our models and vectorizers in this folder
def tfidf_features(X_train, X_test, vectorizer_path):
"""Performs TF-IDF transformation and dumps the model."""
tfv = TfidfVectorizer(dtype=np.float32, min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 3), use_idf=1,smooth_idf=1,sublinear_tf=1,
stop_words = 'english')
X_train = tfv.fit_transform(X_train)
X_test = tfv.transform(X_test)
pickle.dump(tfv,vectorizer_path)
return X_train, X_test
X_train_tfidf, X_test_tfidf = tfidf_features(X_train, X_test, open("resources/tfidf.pkl",'wb'))
intent_recognizer = LogisticRegression(C=10,random_state=0)
intent_recognizer.fit(X_train_tfidf,y_train)
pickle.dump(intent_recognizer, open("resources/intent_clf.pkl" , 'wb'))
# Check test accuracy.
y_test_pred = intent_recognizer.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
# creating the data for Programming Language classifier
X = posts['title'].values
y = posts['tag'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
vectorizer = pickle.load(open("resources/tfidf.pkl", 'rb'))
X_train_tfidf, X_test_tfidf = vectorizer.transform(X_train), vectorizer.transform(X_test)
tag_classifier = OneVsRestClassifier(LogisticRegression(C=5,random_state=0))
tag_classifier.fit(X_train_tfidf,y_train)
pickle.dump(tag_classifier, open("resources/tag_clf.pkl", 'wb'))
# Check test accuracy.
y_test_pred = tag_classifier.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
# Load Google's pre-trained Word2Vec model.
#model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
#!wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
pretrained_embeddings_path = "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
model = gensim.models.KeyedVectors.load_word2vec_format(pretrained_embeddings_path,binary=True)
# from gensim import models
# w = models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
def question_to_vec(question, embeddings, dim=300):
"""
question: a string
embeddings: dict where the key is a word and a value is its' embedding
dim: size of the representation
result: vector representation for the question
"""
word_tokens = question.split(" ")
question_len = len(word_tokens)
question_mat = np.zeros((question_len,dim), dtype = np.float32)
for idx, word in enumerate(word_tokens):
if word in embeddings:
question_mat[idx,:] = embeddings[word]
# remove zero-rows which stand for OOV words
question_mat = question_mat[~np.all(question_mat == 0, axis = 1)]
# Compute the mean of each word along the sentence
if question_mat.shape[0] > 0:
vec = np.array(np.mean(question_mat, axis = 0), dtype = np.float32).reshape((1,dim))
else:
vec = np.zeros((1,dim), dtype = np.float32)
return vec
counts_by_tag = posts.groupby(by=['tag'])["tag"].count().reset_index(name = 'count').sort_values(['count'], ascending = False)
counts_by_tag = list(zip(counts_by_tag['tag'],counts_by_tag['count']))
print(counts_by_tag)
# import os
# os.mkdir('resources/embeddings_folder')
for tag, count in counts_by_tag:
tag_posts = posts[posts['tag'] == tag]
tag_post_ids = tag_posts['post_id'].values
tag_vectors = np.zeros((count, 300), dtype=np.float32)
for i, title in enumerate(tag_posts['title']):
tag_vectors[i, :] = question_to_vec(title, model, 300)
# Dump post ids and vectors to a file.
filename = 'resources/embeddings_folder/'+ tag + '.pkl'
pickle.dump((tag_post_ids, tag_vectors), open(filename, 'wb'))
from sklearn.metrics.pairwise import pairwise_distances_argmin
def get_similar_question(question,tag):
# get the path where all question embeddings are kept and load the post_ids and post_embeddings
embeddings_path = 'resources/embeddings_folder/' + tag + ".pkl"
post_ids, post_embeddings = pickle.load(open(embeddings_path, 'rb'))
# Get the embeddings for the question
question_vec = question_to_vec(question, model, 300)
# find index of most similar post
best_post_index = pairwise_distances_argmin(question_vec,
post_embeddings)
# return best post id
return post_ids[best_post_index]
get_similar_question("how to use list comprehension in python?",'python')
```
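A quick sanity check of the averaging logic in `question_to_vec`, using a toy embedding dict (hypothetical vectors with dim=3 instead of 300): known words are averaged and out-of-vocabulary words are dropped.

```python
import numpy as np

def question_to_vec(question, embeddings, dim=3):
    """Mean of the embeddings of in-vocabulary words; zeros if none are known."""
    word_tokens = question.split(" ")
    question_mat = np.zeros((len(word_tokens), dim), dtype=np.float32)
    for idx, word in enumerate(word_tokens):
        if word in embeddings:
            question_mat[idx, :] = embeddings[word]
    question_mat = question_mat[~np.all(question_mat == 0, axis=1)]  # drop OOV rows
    if question_mat.shape[0] > 0:
        return np.mean(question_mat, axis=0, dtype=np.float32).reshape((1, dim))
    return np.zeros((1, dim), dtype=np.float32)

emb = {"python": np.array([1.0, 0.0, 0.0]),
       "list": np.array([0.0, 1.0, 0.0])}
vec = question_to_vec("python list zzz", emb, dim=3)
print(vec)  # [[0.5 0.5 0. ]] -- mean of the two known words; OOV 'zzz' is ignored
```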
# Equivalent layer technique for estimating total magnetization direction
#### Importing libraries
```
% matplotlib inline
import sys
import numpy as np
import matplotlib.pyplot as plt
import cPickle as pickle
import datetime
import timeit
import string as st
from scipy.optimize import nnls
from fatiando.gridder import regular
from fatiando.utils import ang2vec, vec2ang
from fatiando.mesher import Sphere, PointGrid,Prism
from fatiando.gravmag import sphere,prism
from fatiando.constants import CM, T2NT, G, SI2MGAL
notebook_name = 'airborne_EQL_magdirection_RM_calculation.ipynb'
```
#### Importing auxiliary functions
```
dir_modules = '../../../mypackage'
sys.path.append(dir_modules)
import auxiliary_functions as fc
```
#### Loading properties of the model
```
with open('data/model_multi.pickle') as f:
model_multi = pickle.load(f)
```
#### Loading properties grid
```
with open('data/airborne_survey.pickle') as f:
airborne = pickle.load(f)
```
#### Loading data
```
with open('data/data_set.pickle') as f:
data = pickle.load(f)
```
#### Open a dictionary
```
result_RM_airb = dict()
```
### Saving files
```
saved_files = []
```
## Observation area
```
print 'Area limits: \n x_max = %.1f m \n x_min = %.1f m \n y_max = %.1f m \n y_min = %.1f m' % (airborne['area'][1],
airborne['area'][0],
airborne['area'][3],
airborne['area'][2])
```
### airborne survey information
```
print 'Shape : (%.0f,%.0f)'% airborne['shape']
print 'Number of data: %.1f' % airborne['N']
print 'dx: %.1f m' % airborne['dx']
print 'dy: %.1f m ' % airborne['dy']
```
## Properties of the model
### Main field
```
inc_gf,dec_gf = model_multi['main_field']
print 'Main field inclination: %.1f degree' % inc_gf
print 'Main field declination: %.1f degree' % dec_gf
```
### Magnetization direction
```
print 'Intensity: %.1f A/m' % model_multi['m_R']
print 'Inclination: %.1f degree' % model_multi['inc_R']
print 'Declination: %.1f degree' % model_multi['dec_R']
inc_R,dec_R = model_multi['inc_R'],model_multi['dec_R']
```
## Generating the layer with my function
```
h = 1250.
```
#### Generating a layer
```
shape_layer = (airborne['shape'][0],airborne['shape'][1])
xs,ys,zs = regular(airborne['area'],shape_layer,h)
```
### Levenberg-Marquardt with NNLS for positive magnetic moments
```
i_pos = 1500
it_max = 30
it_marq = 15
lamb = 10.
dlamb = 100.
eps_e = 1e-4
eps_i = 1e-4
mu_list = [1e4,1e5,4*1e5,5*1e5,1e6,1e7]
mu_norm = []
norm_r = []
norm_m = []
m_est = []
incl_est = []
decl_est = []
phi_list = []
for i in mu_list:
m_LM,inc_est,dec_est,phi,imax,pest,incs,decs = fc.levenberg_marquardt_NNLS(
data['tfa_obs_RM_airb'],airborne['x'],airborne['y'],
airborne['z'],xs,ys,zs,inc_gf,dec_gf,-10.,-10.,lamb,dlamb,i_pos,it_max,
it_marq,eps_e,eps_i,i)
G = fc.sensitivity_mag(airborne['x'],airborne['y'],airborne['z'],
xs,ys,zs,inc_gf,dec_gf,inc_est,dec_est)
tfpred = np.dot(G,m_LM)
r = data['tfa_obs_RM_airb'] - tfpred
norm_r.append(np.sqrt(np.sum(r*r)))
norm_m.append(np.sqrt(np.sum(m_LM*m_LM)))
m_est.append(m_LM)
incl_est.append(inc_est)
decl_est.append(dec_est)
phi_list.append(phi)
```
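The `fc.levenberg_marquardt_NNLS` routine lives in the private `mypackage` module, so its internals are not shown here. As a hedged sketch of the positivity constraint it enforces (written in Python 3 for brevity, and assuming the usual formulation): each linearized subproblem min ||G m - d|| is solved with non-negative least squares, which the notebook already imports as `scipy.optimize.nnls`, so every equivalent-layer magnetic moment stays non-negative.

```python
import numpy as np
from scipy.optimize import nnls

# Toy non-negative least-squares problem: G is a stand-in sensitivity
# matrix and m_true a vector of non-negative "magnetic moments".
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))
m_true = np.array([0.5, 0.0, 1.2, 0.0, 2.0])
d = G @ m_true                      # noise-free synthetic data

# nnls solves  min ||G m - d||  subject to m >= 0, so the estimated
# moments can never go negative.
m_est, res = nnls(G, d)
print(m_est.min() >= 0.0)
```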
## L-curve visualization
```
title_font = 20
bottom_font = 18
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(10, 10), tight_layout=True)
plt.loglog(norm_r,norm_m, 'b-')
plt.loglog(norm_r,norm_m, 'bo')
plt.title('L-curve', fontsize=title_font)
plt.xlabel('r_norm', fontsize = title_font)
plt.ylabel('m_norm', fontsize = title_font)
plt.tick_params(axis='both', which='major', labelsize=15)
file_name = 'figs/airborne/Lcurve_RM'
plt.savefig(file_name+'.png',dpi=300)
saved_files.append(file_name+'.png')
plt.savefig(file_name+'.eps',dpi=300)
saved_files.append(file_name+'.eps')
plt.show()
```
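The corner of the L-curve above is picked by eye. A hedged, optional automation (not part of the notebook) is the triangle heuristic: take the point of the (log ||r||, log ||m||) curve farthest from the straight line joining its endpoints.

```python
import numpy as np

def lcurve_corner(norm_r, norm_m):
    """Index of the L-curve point farthest from the endpoint chord."""
    pts = np.column_stack([np.log10(norm_r), np.log10(norm_m)])
    chord = pts[-1] - pts[0]
    chord = chord / np.linalg.norm(chord)
    offsets = pts - pts[0]
    # Perpendicular distance of each point from the chord (2-D cross product).
    dists = np.abs(offsets[:, 0] * chord[1] - offsets[:, 1] * chord[0])
    return int(np.argmax(dists))

# Toy norms with an obvious corner at index 2.
print(lcurve_corner([1.0, 3.0, 10.0, 1e3, 1e5],
                    [1e5, 1e3, 10.0, 3.0, 1.0]))
```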
### Results
```
result_RM_airb['magnetic_moment'] = m_est
result_RM_airb['inc_est'] = incl_est
result_RM_airb['dec_est'] = decl_est
result_RM_airb['layer_depth'] = h
result_RM_airb['reg_parameter'] = mu_list
result_RM_airb['phi'] = phi_list
```
### Generating .pickle file
```
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
result_RM_airb['metadata'] = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
file_name = 'data/result_RM_airb.pickle'
with open(file_name, 'w') as f:
pickle.dump(result_RM_airb, f)
saved_files.append(file_name)
```
### Saved files
```
with open('reports/report_%s.md' % notebook_name[:st.index(notebook_name, '.')], 'w') as q:
q.write('# Saved files \n')
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
header = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
q.write('\n\n'+header+'\n\n')
for i, sf in enumerate(saved_files):
print '%d %s' % (i+1,sf)
q.write('* `%s` \n' % (sf))
```
| github_jupyter |
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#all_slow
#default_exp vision.utils
#export
from fastai.torch_basics import *
from fastai.data.all import *
from fastai.vision.core import *
from pathlib import Path
#hide
from nbdev.showdoc import *
```
# Vision utils
> Some utility functions to quickly download a bunch of images, check them, and pre-resize them
```
#export
def _get_downloaded_image_filename(dest, name, suffix):
start_index = 1
candidate_name = name
while (dest/f"{candidate_name}{suffix}").is_file():
candidate_name = f"{candidate_name}{start_index}"
start_index += 1
return candidate_name
#export
def _download_image_inner(dest, inp, timeout=4, preserve_filename=False):
i,url = inp
url_path = Path(url)
suffix = url_path.suffix if url_path.suffix else '.jpg'
name = _get_downloaded_image_filename(dest, url_path.stem, suffix) if preserve_filename else f"{i:08d}"
try: download_url(url, dest/f"{name}{suffix}", overwrite=True, show_progress=False, timeout=timeout)
    except Exception as e: print(f"Couldn't download {url}.")
with tempfile.TemporaryDirectory() as d:
d = Path(d)
url = "https://www.fast.ai/images/jh-head.jpg"
_download_image_inner(d, (125,url))
assert (d/'00000125.jpg').is_file()
with tempfile.TemporaryDirectory() as d:
d = Path(d)
url = "https://www.fast.ai/images/jh-head.jpg"
_download_image_inner(d, (125,url), preserve_filename=True)
assert (d/'jh-head.jpg').is_file()
assert not (d/'jh-head.jpg1').exists()
_download_image_inner(d, (125,url), preserve_filename=True)
assert (d/'jh-head.jpg').is_file()
assert (d/'jh-head1.jpg').is_file()
#export
def download_images(dest, url_file=None, urls=None, max_pics=1000, n_workers=8, timeout=4, preserve_filename=False):
"Download images listed in text file `url_file` to path `dest`, at most `max_pics`"
if urls is None: urls = url_file.read_text().strip().split("\n")[:max_pics]
dest = Path(dest)
dest.mkdir(exist_ok=True)
parallel(partial(_download_image_inner, dest, timeout=timeout, preserve_filename=preserve_filename),
list(enumerate(urls)), n_workers=n_workers, threadpool=True)
with tempfile.TemporaryDirectory() as d:
d = Path(d)
url_file = d/'urls.txt'
url_file.write_text("\n".join([f"https://www.fast.ai/images/{n}" for n in "jh-head.jpg thomas.JPG sg-head.jpg".split()]))
download_images(d, url_file)
for i in [0,2]: assert (d/f'0000000{i}.jpg').is_file()
assert (d/f'00000001.JPG').is_file()
with tempfile.TemporaryDirectory() as d:
d = Path(d)
url_file = d/'urls.txt'
url_file.write_text("\n".join([f"https://www.fast.ai/images/{n}" for n in "jh-head.jpg thomas.JPG sg-head.jpg".split()]))
download_images(d, url_file, preserve_filename=True)
assert (d/'jh-head.jpg').is_file()
assert (d/'thomas.JPG').is_file()
assert (d/'sg-head.jpg').is_file()
assert not (d/'jh-head1.jpg').exists()
assert not (d/'thomas1.JPG').exists()
assert not (d/'sg-head1.jpg').exists()
download_images(d, url_file, preserve_filename=True)
assert (d/'jh-head.jpg').is_file()
assert (d/'thomas.JPG').is_file()
assert (d/'sg-head.jpg').is_file()
assert (d/'jh-head1.jpg').is_file()
assert (d/'thomas1.JPG').is_file()
assert (d/'sg-head1.jpg').is_file()
#export
def resize_to(img, targ_sz, use_min=False):
"Size to resize to, to hit `targ_sz` at same aspect ratio, in PIL coords (i.e w*h)"
w,h = img.size
min_sz = (min if use_min else max)(w,h)
ratio = targ_sz/min_sz
return int(w*ratio),int(h*ratio)
class _FakeImg():
def __init__(self, size): self.size=size
img = _FakeImg((200,500))
test_eq(resize_to(img, 400), [160,400])
test_eq(resize_to(img, 400, use_min=True), [400,1000])
#export
def verify_image(fn):
"Confirm that `fn` can be opened"
try:
im = Image.open(fn)
im.draft(im.mode, (32,32))
im.load()
return True
except: return False
#export
def verify_images(fns):
"Find images in `fns` that can't be opened"
return L(fns[i] for i,o in enumerate(parallel(verify_image, fns)) if not o)
#export
def resize_image(file, dest, max_size=None, n_channels=3, ext=None,
img_format=None, resample=Image.BILINEAR, resume=False, **kwargs ):
"Resize file to dest to max_size"
dest = Path(dest)
dest_fname = dest/file.name
if resume and dest_fname.exists(): return
if verify_image(file):
img = Image.open(file)
imgarr = np.array(img)
img_channels = 1 if len(imgarr.shape) == 2 else imgarr.shape[2]
if (max_size is not None and (img.height > max_size or img.width > max_size)) or img_channels != n_channels:
if ext is not None: dest_fname=dest_fname.with_suffix(ext)
if max_size is not None:
new_sz = resize_to(img, max_size)
img = img.resize(new_sz, resample=resample)
if n_channels == 3: img = img.convert("RGB")
img.save(dest_fname, img_format, **kwargs)
file = Path('images/puppy.jpg')
dest = Path('.')
resize_image(file, max_size=400, dest=dest)
im = Image.open(dest/file.name)
test_eq(im.shape[1],400)
(dest/file.name).unlink()
#export
def resize_images(path, max_workers=defaults.cpus, max_size=None, recurse=False,
dest=Path('.'), n_channels=3, ext=None, img_format=None, resample=Image.BILINEAR,
resume=None, **kwargs):
"Resize files on path recursively to dest to max_size"
path = Path(path)
if resume is None and dest != Path('.'): resume=False
os.makedirs(dest, exist_ok=True)
files = get_image_files(path, recurse=recurse)
parallel(resize_image, files, max_workers=max_workers, max_size=max_size, dest=dest, n_channels=n_channels, ext=ext,
img_format=img_format, resample=resample, resume=resume, **kwargs)
with tempfile.TemporaryDirectory() as d:
dest = Path(d)/'resized_images'
resize_images('images', max_size=100, dest=dest)
```
# Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
| github_jupyter |
# T81-558: Applications of Deep Neural Networks
**Module 12: Deep Learning and Security**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module Video Material
Main video lecture:
* [Part 12.1: Security and Information Assurance with Deep Learning](https://www.youtube.com/watch?v=UI8HX5GzpGQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=35)
* [Part 12.2: Programming KDD99 with Keras TensorFlow, Intrusion Detection System (IDS)](https://www.youtube.com/watch?v=2PAFVKA-OWY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=36)
* Part 12.3: Security Project (coming soon)
# Helpful Functions
You will see these at the top of every module. They are simply a set of reusable functions that we will make use of; each is explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions.
```
import base64
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
from sklearn import preprocessing
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = f"{name}-{x}"
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = f"{name}-{tv}"
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(
target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
# Regression
return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
# Regression chart.
def chart_regression(pred, y, sort=True):
t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
if sort:
t.sort_values(by=['y'], inplace=True)
plt.plot(t['y'].tolist(), label='expected')
plt.plot(t['pred'].tolist(), label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean())
>= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as much as you like; only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
#               The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
```
# The KDD-99 Dataset
The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning.
# Read in Raw KDD-99 Dataset
```
from keras.utils.data_utils import get_file
try:
path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz')
except:
print('Error downloading')
raise
print(path)
# This file is a CSV, just no CSV extension or headers
# Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
df = pd.read_csv(path, header=None)
print("Read {} rows.".format(len(df)))
# df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset
df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values)
# The CSV file has no column heads, so add them
df.columns = [
'duration',
'protocol_type',
'service',
'flag',
'src_bytes',
'dst_bytes',
'land',
'wrong_fragment',
'urgent',
'hot',
'num_failed_logins',
'logged_in',
'num_compromised',
'root_shell',
'su_attempted',
'num_root',
'num_file_creations',
'num_shells',
'num_access_files',
'num_outbound_cmds',
'is_host_login',
'is_guest_login',
'count',
'srv_count',
'serror_rate',
'srv_serror_rate',
'rerror_rate',
'srv_rerror_rate',
'same_srv_rate',
'diff_srv_rate',
'srv_diff_host_rate',
'dst_host_count',
'dst_host_srv_count',
'dst_host_same_srv_rate',
'dst_host_diff_srv_rate',
'dst_host_same_src_port_rate',
'dst_host_srv_diff_host_rate',
'dst_host_serror_rate',
'dst_host_srv_serror_rate',
'dst_host_rerror_rate',
'dst_host_srv_rerror_rate',
'outcome'
]
# display 5 rows
df[0:5]
```
# Analyzing a Dataset
The following script can be used to give a high-level overview of how a dataset appears.
```
ENCODING = 'utf-8'
def expand_categories(values):
result = []
s = values.value_counts()
t = float(len(values))
for v in s.index:
result.append("{}:{}%".format(v,round(100*(s[v]/t),2)))
return "[{}]".format(",".join(result))
def analyze(filename):
print()
print("Analyzing: {}".format(filename))
df = pd.read_csv(filename,encoding=ENCODING)
cols = df.columns.values
total = float(len(df))
print("{} rows".format(int(total)))
for col in cols:
uniques = df[col].unique()
unique_count = len(uniques)
if unique_count>100:
print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100)))
        else:
            print("** {}:{}".format(col,expand_categories(df[col])))
# Analyze KDD-99
import pandas as pd
import numpy as np
import os
from sklearn import metrics
from scipy.stats import zscore
```
# Encode the feature vector
Encode every row in the database. This is not instant!
```
# Now encode the feature vector
encode_numeric_zscore(df, 'duration')
encode_text_dummy(df, 'protocol_type')
encode_text_dummy(df, 'service')
encode_text_dummy(df, 'flag')
encode_numeric_zscore(df, 'src_bytes')
encode_numeric_zscore(df, 'dst_bytes')
encode_text_dummy(df, 'land')
encode_numeric_zscore(df, 'wrong_fragment')
encode_numeric_zscore(df, 'urgent')
encode_numeric_zscore(df, 'hot')
encode_numeric_zscore(df, 'num_failed_logins')
encode_text_dummy(df, 'logged_in')
encode_numeric_zscore(df, 'num_compromised')
encode_numeric_zscore(df, 'root_shell')
encode_numeric_zscore(df, 'su_attempted')
encode_numeric_zscore(df, 'num_root')
encode_numeric_zscore(df, 'num_file_creations')
encode_numeric_zscore(df, 'num_shells')
encode_numeric_zscore(df, 'num_access_files')
encode_numeric_zscore(df, 'num_outbound_cmds')
encode_text_dummy(df, 'is_host_login')
encode_text_dummy(df, 'is_guest_login')
encode_numeric_zscore(df, 'count')
encode_numeric_zscore(df, 'srv_count')
encode_numeric_zscore(df, 'serror_rate')
encode_numeric_zscore(df, 'srv_serror_rate')
encode_numeric_zscore(df, 'rerror_rate')
encode_numeric_zscore(df, 'srv_rerror_rate')
encode_numeric_zscore(df, 'same_srv_rate')
encode_numeric_zscore(df, 'diff_srv_rate')
encode_numeric_zscore(df, 'srv_diff_host_rate')
encode_numeric_zscore(df, 'dst_host_count')
encode_numeric_zscore(df, 'dst_host_srv_count')
encode_numeric_zscore(df, 'dst_host_same_srv_rate')
encode_numeric_zscore(df, 'dst_host_diff_srv_rate')
encode_numeric_zscore(df, 'dst_host_same_src_port_rate')
encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate')
encode_numeric_zscore(df, 'dst_host_serror_rate')
encode_numeric_zscore(df, 'dst_host_srv_serror_rate')
encode_numeric_zscore(df, 'dst_host_rerror_rate')
encode_numeric_zscore(df, 'dst_host_srv_rerror_rate')
outcomes = encode_text_index(df, 'outcome')
num_classes = len(outcomes)
# display 5 rows
df.dropna(inplace=True,axis=1)
df[0:5]
# This is the numeric feature vector, as it goes to the neural net
```
# Train the Neural Network
```
import pandas as pd
import io
import requests
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn import metrics
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.callbacks import EarlyStopping
# Break into X (predictors) & y (prediction)
x, y = to_xy(df,'outcome')
# Create a test/train split. 25% test
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create neural net
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(50, kernel_initializer='normal', activation='relu'))
model.add(Dense(10, kernel_initializer='normal', activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)
# Measure accuracy
pred = model.predict(x_test)
pred = np.argmax(pred,axis=1)
y_eval = np.argmax(y_test,axis=1)
score = metrics.accuracy_score(y_eval, pred)
print("Validation score: {}".format(score))
```
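Overall accuracy can be misleading on KDD-99 because the classes are heavily imbalanced (a few attack types dominate the 10% sample). A hedged follow-up, not in the original lecture code, is to print a confusion matrix over `y_eval` and `pred`; the toy version below uses plain NumPy.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true classes, columns = predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels standing in for y_eval / pred from the cell above.
y_true = np.array([0, 0, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2])
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)  # diagonal entries are the correctly classified counts
```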
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Image captioning with visual attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/image_captioning">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/image_captioning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Given an image like the example below, our goal is to generate a caption such as "a surfer riding on a wave".

*[Image Source](https://commons.wikimedia.org/wiki/Surfing#/media/File:Surfing_in_Hawaii.jpg); License: Public Domain*
To accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.

The model architecture is similar to [Show, Attend and Tell: Neural Image Caption Generation with Visual Attention](https://arxiv.org/abs/1502.03044).
This notebook is an end-to-end example. When you run the notebook, it downloads the [MS-COCO](http://cocodataset.org/#home) dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model.
In this example, you will train a model on a relatively small amount of data—the first 30,000 captions for about 20,000 images (because there are multiple captions per image in the dataset).
```
import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt
# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import collections
import random
import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
```
## Download and prepare the MS-COCO dataset
You will use the [MS-COCO dataset](http://cocodataset.org/#home) to train our model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.
**Caution: large download ahead**. You'll use the training set, which is a 13GB file.
```
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract = True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/zips/train2014.zip',
extract = True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
```
## Optional: limit the size of the training set
To speed up training for this tutorial, you'll use a subset of 30,000 captions and their corresponding images to train our model. Choosing to use more data would result in improved captioning quality.
```
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Group all captions together having the same image ID.
image_path_to_caption = collections.defaultdict(list)
for val in annotations['annotations']:
caption = f"<start> {val['caption']} <end>"
image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (val['image_id'])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
random.shuffle(image_paths)
# Select the first 6000 image_paths from the shuffled set.
# Approximately each image id has 5 captions associated with it, so that will
# lead to 30,000 examples.
train_image_paths = image_paths[:6000]
print(len(train_image_paths))
train_captions = []
img_name_vector = []
for image_path in train_image_paths:
caption_list = image_path_to_caption[image_path]
train_captions.extend(caption_list)
img_name_vector.extend([image_path] * len(caption_list))
print(train_captions[0])
Image.open(img_name_vector[0])
```
## Preprocess the images using InceptionV3
Next, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer.
First, you will convert the images into InceptionV3's expected format by:
* Resizing the image to 299px by 299px
* [Preprocess the images](https://cloud.google.com/tpu/docs/inception-v3-advanced#preprocessing_stage) using the [preprocess_input](https://www.tensorflow.org/api_docs/python/tf/keras/applications/inception_v3/preprocess_input) method to normalize the image so that it contains pixels in the range of -1 to 1, which matches the format of the images used to train InceptionV3.
```
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
```
## Initialize InceptionV3 and load the pretrained Imagenet weights
Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is ```8x8x2048```. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.
* You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).
* After all the images are passed through the network, you pickle the dictionary and save it to disk.
```
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
```
## Caching the features extracted from InceptionV3
You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 \* 8 \* 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).
Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code.
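The memory figure above is easy to verify: 8 × 8 × 2048 float32 values per image works out to half a MiB, so ~20,000 images need roughly 10 GiB.

```python
# Back-of-the-envelope check of the RAM estimate above.
floats_per_image = 8 * 8 * 2048         # feature map from InceptionV3
bytes_per_image = floats_per_image * 4  # float32 = 4 bytes
total_gib = 20_000 * bytes_per_image / 2**30
print(floats_per_image, bytes_per_image / 2**20, round(total_gib, 1))
# 131072 floats, 0.5 MiB per image, ~9.8 GiB for 20k images
```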
The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can:
1. install [tqdm](https://github.com/tqdm/tqdm):
`!pip install tqdm`
2. Import tqdm:
`from tqdm import tqdm`
3. Change the following line:
`for img, path in image_dataset:`
to:
`for img, path in tqdm(image_dataset):`
```
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
```
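The sharding idea mentioned earlier can be sketched as follows (a hedged illustration, not part of the tutorial): key each cached `.npy` file to one of N subdirectories by a stable hash of its image path, so no single directory accumulates all ~20,000 files and random-access disk I/O is spread out.

```python
import zlib

def shard_for(path, n_shards=64):
    """Deterministically map an image path to one of n_shards buckets."""
    return zlib.crc32(path.encode("utf-8")) % n_shards

# The cached feature for each image would then live under e.g.
# "features/<shard>/<basename>.npy" instead of one flat directory.
for i in range(3):
    p = f"train2014/COCO_train2014_{i:012d}.jpg"
    print(p, "-> shard", shard_for(p))
```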
## Preprocess and tokenize the captions
* First, you'll tokenize the captions (for example, by splitting on spaces). This gives us a vocabulary of all of the unique words in the data (for example, "surfing", "football", and so on).
* Next, you'll limit the vocabulary size to the top 5,000 words (to save memory). You'll replace all other words with the token "UNK" (unknown).
* You then create word-to-index and index-to-word mappings.
* Finally, you pad all sequences to be the same length as the longest one.
```
# Find the maximum length of any caption in our dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# Choose the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
tokenizer.word_index['<pad>'] = 0
tokenizer.index_word[0] = '<pad>'
# Create the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)
# Pad each vector to the max_length of the captions
# If you do not provide a max_length value, pad_sequences calculates it automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
# Calculates the max_length, which is used to store the attention weights
max_length = calc_max_length(train_seqs)
```
## Split the data into training and testing
```
img_to_cap_vector = collections.defaultdict(list)
for img, cap in zip(img_name_vector, cap_vector):
img_to_cap_vector[img].append(cap)
# Create training and validation sets using an 80-20 split randomly.
img_keys = list(img_to_cap_vector.keys())
random.shuffle(img_keys)
slice_index = int(len(img_keys)*0.8)
img_name_train_keys, img_name_val_keys = img_keys[:slice_index], img_keys[slice_index:]
img_name_train = []
cap_train = []
for imgt in img_name_train_keys:
capt_len = len(img_to_cap_vector[imgt])
img_name_train.extend([imgt] * capt_len)
cap_train.extend(img_to_cap_vector[imgt])
img_name_val = []
cap_val = []
for imgv in img_name_val_keys:
capv_len = len(img_to_cap_vector[imgv])
img_name_val.extend([imgv] * capv_len)
cap_val.extend(img_to_cap_vector[imgv])
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
```
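The key-based split above (grouping captions by image so no image ends up in both sets) can be sketched on toy data — the file names here are hypothetical placeholders:

```python
import random
import collections

# toy data: 10 images, 2 captions each (hypothetical file names)
pairs = [(f"img_{i}.jpg", f"caption {j} for image {i}")
         for i in range(10) for j in range(2)]

img_to_caps = collections.defaultdict(list)
for img, cap in pairs:
    img_to_caps[img].append(cap)

keys = list(img_to_caps)
random.Random(42).shuffle(keys)  # seeded so the split is reproducible
cut = int(len(keys) * 0.8)
train_keys, val_keys = keys[:cut], keys[cut:]

# splitting on image keys guarantees no image leaks across the split
print(len(train_keys), len(val_keys))  # 8 2
```

Shuffling keys rather than (image, caption) pairs is the important design choice: identical images carry near-identical captions, so a pair-level split would leak training data into validation.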
## Create a tf.data dataset for training
Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
```
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
# Load the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int32]),
num_parallel_calls=tf.data.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
```
## Model
Fun fact: the decoder below is identical to the one in the example for [Neural Machine Translation with Attention](../sequences/nmt_with_attention.ipynb).
The model architecture is inspired by the [Show, Attend and Tell](https://arxiv.org/pdf/1502.03044.pdf) paper.
* In this example, you extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
* You squash that to a shape of (64, 2048).
* This vector is then passed through the CNN Encoder (which consists of a single Fully connected layer).
* The RNN (here GRU) attends over the image to predict the next word.
```
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# attention_hidden_layer shape == (batch_size, 64, units)
attention_hidden_layer = (tf.nn.tanh(self.W1(features) +
self.W2(hidden_with_time_axis)))
# score shape == (batch_size, 64, 1)
# This gives you an unnormalized score for each image feature.
score = self.V(attention_hidden_layer)
# attention_weights shape == (batch_size, 64, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
    # Since you have already extracted the features and saved them as .npy files,
    # this encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
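The shape bookkeeping inside `BahdanauAttention` can be checked with a plain NumPy sketch — random placeholder matrices stand in for the trained `Dense` layers, so only the shapes and the softmax normalization are meaningful here:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, locations, dim, units = 2, 64, 256, 512

features = rng.normal(size=(batch, locations, dim))  # CNN_Encoder output
hidden = rng.normal(size=(batch, units))             # decoder GRU state
W1 = 0.01 * rng.normal(size=(dim, units))            # random placeholder weights
W2 = 0.01 * rng.normal(size=(units, units))
V = 0.01 * rng.normal(size=(units, 1))

# score every image location against the current hidden state
scores = np.tanh(features @ W1 + (hidden @ W2)[:, None, :]) @ V   # (batch, 64, 1)
# softmax over the 64 locations gives the attention weights
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
# weighted sum of the features is the context vector
context = (weights * features).sum(axis=1)                        # (batch, 256)
print(context.shape)
```

Broadcasting `(hidden @ W2)[:, None, :]` against the 64 locations is what `tf.expand_dims(hidden, 1)` does in the Keras version, and the softmax over `axis=1` guarantees the per-image weights sum to 1.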
## Checkpoint
```
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer = optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
```
## Training
* You extract the features stored in the respective `.npy` files and then pass those features through the encoder.
* The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder.
* The decoder returns the predictions and the decoder hidden state.
* The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
* Use teacher forcing to decide the next input to the decoder.
* Teacher forcing is the technique where the target word is passed as the next input to the decoder.
* The final step is to calculate the gradients, backpropagate, and apply them with the optimizer.
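Teacher forcing itself is easy to see in isolation. Here is a toy loop with a hypothetical `dummy_step` standing in for the real decoder — note that the input at step `i` is always the ground-truth token, never the model's own prediction:

```python
# Toy illustration of teacher forcing: the decoder's next input is the
# ground-truth token, not its own (possibly wrong) prediction.
def dummy_step(token):
    # hypothetical decoder step: always "predicts" token + 1
    return token + 1

target = [1, 2, 3, 4]     # ground-truth caption token ids
dec_input = target[0]     # start token
predictions = []
for i in range(1, len(target)):
    pred = dummy_step(dec_input)
    predictions.append(pred)
    dec_input = target[i]  # teacher forcing: feed the true token
print(predictions)         # [2, 3, 4]
```

This is exactly the `dec_input = tf.expand_dims(target[:, i], 1)` line in the training step below, and it is what the evaluate function later drops: at inference time the prediction is fed back instead.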
```
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 20
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(
epoch + 1, batch, batch_loss.numpy() / int(target.shape[1])))
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print ('Epoch {} Loss {:.6f}'.format(epoch + 1,
total_loss/num_steps))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
```
## Caption!
* The evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the end token.
* And store the attention weights for every time step.
```
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input, features, hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
result.append(tokenizer.index_word[predicted_id])
if tokenizer.index_word[predicted_id] == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for l in range(len_result):
temp_att = np.resize(attention_plot[l], (8, 8))
        # use a 2-column grid so short captions don't overflow the subplot grid
        ax = fig.add_subplot((len_result + 1) // 2, 2, l + 1)
ax.set_title(result[l])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tokenizer.index_word[i] for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print ('Real Caption:', real_caption)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
```
## Try it on your own images
For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!)
```
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension,
origin=image_url)
result, attention_plot = evaluate(image_path)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
```
# Next steps
Congrats! You've just trained an image captioning model with attention. Next, take a look at this example [Neural Machine Translation with Attention](../sequences/nmt_with_attention.ipynb). It uses a similar architecture to translate between Spanish and English sentences. You can also experiment with training the code in this notebook on a different dataset.
# Connecting to a PostgreSQL database
In these exercises, you will be working with real databases hosted on the cloud via Amazon Web Services (AWS)!
Let's begin by connecting to a PostgreSQL database. When connecting to a PostgreSQL database, many prefer to use the psycopg2 database driver, as it supports practically all of PostgreSQL's features efficiently and is the default driver for PostgreSQL in SQLAlchemy.
You might recall from Chapter 1 that we use the create_engine() function and a connection string to connect to a database. In general, connection strings have the form "dialect+driver://username:password@host:port/database"
There are three components to the connection string in this exercise: the dialect and driver ('postgresql+psycopg2://'), followed by the username and password ('student:datacamp'), followed by the host and port ('@postgresql.csrrinzqubik.us-east-1.rds.amazonaws.com:5432/'), and finally, the database name ('census'). You will have to pass this string as an argument to create_engine() in order to connect to the database.
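Those pieces can be assembled step by step — the values below are the exercise's own placeholders, not real credentials:

```python
# The four components of a SQLAlchemy connection string
dialect_driver = "postgresql+psycopg2"
user, password = "student", "datacamp"
host, port = "postgresql.csrrinzqubik.us-east-1.rds.amazonaws.com", 5432
database = "census"

conn_string = f"{dialect_driver}://{user}:{password}@{host}:{port}/{database}"
print(conn_string)
```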
```
# Import create_engine function
from sqlalchemy import create_engine
# Create an engine to the census database
engine = create_engine('postgresql+psycopg2://student:datacamp@postgresql.csrrinzqubik.us-east-1.rds.amazonaws.com:5432/census')
# Use the .table_names() method on the engine to print the table names
print(engine.table_names())
```
# Filter data selected from a Table - Simple
Having connected to the database, it's now time to practice filtering your queries!
As mentioned in the video, a where() clause is used to filter the data that a statement returns. For example, to select all the records from the census table where the sex is Female (or 'F') we would do the following:
select([census]).where(census.columns.sex == 'F')
In addition to ==, we can use essentially any Python comparison operator (such as <=, !=, etc.) in the where() clause.
```
# Import create_engine function
from sqlalchemy import create_engine, select, Table, MetaData
# Create an engine to the census database
engine = create_engine('postgresql+psycopg2://' + 'student:datacamp' + '@postgresql.csrrinzqubik.us-east-1.rds.amazonaws.com:5432/' + 'census')
# Create a connection on engine
connection = engine.connect()
metadata = MetaData()
census = Table('census', metadata, autoload=True, autoload_with=engine)
# Create a select query: stmt
stmt = select([census])
# Add a where clause to filter the results to only those for New York : stmt_filtered
stmt_filtered = stmt.where(
census.columns.state == 'New York'
)
# Execute the query to retrieve all the data returned: results
results = connection.execute(stmt_filtered).fetchall()
# Loop over the results and print the age, sex, and pop2000
for result in results:
print(result.age, result.sex, result.pop2000)
```
# Filter data selected from a Table - Expressions
In addition to standard Python comparators, we can also use methods such as in_() to create more powerful where() clauses. You can see a full list of expressions in the SQLAlchemy Documentation.
Method in_(), when used on a column, allows us to include records where the value of a column is among a list of possible values. For example, where(census.columns.age.in_([20, 30, 40])) will return only records for people who are exactly 20, 30, or 40 years old.
In this exercise, you will continue working with the census table, and select the records for people from the three most densely populated states. The list of those states has already been created for you.
```
# Define a list of states for which we want results
states = ['New York', 'California', 'Texas']
# Create a query for the census table: stmt
stmt = select([census])
# Append a where clause to match all the states in_ the list states
stmt = stmt.where(census.columns.state.in_(states))
# Loop over the ResultProxy and print the state and its population in 2000
for state in connection.execute(stmt).fetchall():
print(state.state, state.pop2000)
```
# Filter data selected from a Table - Advanced
You're really getting the hang of this! SQLAlchemy also allows users to use conjunctions such as and_(), or_(), and not_() to build more complex filtering. For example, we can get a set of records for people in New York who are 21 or 37 years old with the following code:
```
select([census]).where(
and_(census.columns.state == 'New York',
or_(census.columns.age == 21,
census.columns.age == 37
)
)
)
```
An equivalent SQL statement would be, for example:
```
SELECT * FROM census WHERE state = 'New York' AND (age = 21 OR age = 37)
```
```
# Import and_
from sqlalchemy import and_
# Build a query for the census table: stmt
stmt = select([census])
# Append a where clause to select only non-male records from California using and_
stmt = stmt.where(
# The state of California with a non-male sex
and_(census.columns.state == 'California',
census.columns.sex != 'M'
)
)
# Loop over the ResultProxy printing the age and sex
for result in connection.execute(stmt).fetchall():
print(result.age, result.sex)
```
# Ordering by a single column
To sort the result output by a field, we use the .order_by() method. By default, the .order_by() method sorts from lowest to highest on the supplied column. You just have to pass in the name of the column you want sorted to .order_by().
In the video, for example, Jason used stmt.order_by(census.columns.state) to sort the result output by the state column.
```
# Build a query to select the state column: stmt
stmt = select([census.columns.state])
# Order stmt by the state column
stmt = stmt.order_by(census.columns.state)
# Execute the query and store the results: results
results = connection.execute(stmt).fetchall()
# Print the first 10 results
print(results[:10])
```
# Ordering in descending order by a single column
You can also use .order_by() to sort from highest to lowest by wrapping a column in the desc() function. Although you haven't seen this function in action, it generalizes what you have already learned.
Pass desc() (for "descending") inside an .order_by() with the name of the column you want to sort by. For instance, stmt.order_by(desc(table.columns.column_name)) sorts column_name in descending order.
```
# Import desc
from sqlalchemy import desc
# Build a query to select the state column: stmt
stmt = select([census.columns.state])
# Order stmt by state in descending order: rev_stmt
rev_stmt = stmt.order_by(desc(census.columns.state))
# Execute the query and store the results: rev_results
rev_results = connection.execute(rev_stmt).fetchall()
# Print the first 10 rev_results
print(rev_results[:10])
# Build a query to select state and age: stmt
stmt = select([census.columns.state, census.columns.age])
# Append order by to ascend by state and descend by age
stmt = stmt.order_by(census.columns.state, desc(census.columns.age))
# Execute the statement and store all the records: results
results = connection.execute(stmt).fetchall()
# Print the first 20 results
print(results[:20])
```
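Outside of SQL, the same state-ascending, age-descending ordering can be mimicked with Python's `sorted` and a compound key — toy rows here, with the numeric key negated for the descending part:

```python
rows = [("Texas", 30), ("Alabama", 50), ("Texas", 50), ("Alabama", 30)]

# ORDER BY state, age DESC == sort ascending by state, descending by age;
# negating the numeric key flips its sort direction
ordered = sorted(rows, key=lambda r: (r[0], -r[1]))
print(ordered)
```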
# Counting distinct data
As mentioned in the video, SQLAlchemy's func module provides access to built-in SQL functions that can make operations like counting and summing faster and more efficient.
In the video, Jason used func.sum() to get a sum of the pop2008 column of census as shown below:
```
select([func.sum(census.columns.pop2008)])
```
If instead you want to count the number of values in pop2008, you could use func.count() like this:
```
select([func.count(census.columns.pop2008)])
```
Furthermore, if you only want to count the distinct values of pop2008, you can use the .distinct() method:
```
select([func.count(census.columns.pop2008.distinct())])
```
In this exercise, you will practice using func.count() and .distinct() to get a count of the distinct number of states in census.
So far, you've seen .fetchall(), .fetchmany(), and .first() used on a ResultProxy to get the results. The ResultProxy also has a method called .scalar() for getting just the value of a query that returns only one row and column.
This can be very useful when you are querying for just a count or sum.
```
from sqlalchemy import func
# Build a query to count the distinct states values: stmt
stmt = select([func.count(census.columns.state.distinct())])
# Execute the query and store the scalar result: distinct_state_count
distinct_state_count = connection.execute(stmt).scalar()
# Print the distinct_state_count
print(distinct_state_count)
```
# Count of records by state
Often, we want a count of records for each distinct value in a column. The .group_by() method helps answer this type of query. You can pass a column to the .group_by() method and use it with an aggregate function like sum() or count(). Much like the .order_by() method, .group_by() can take multiple columns as arguments.
```
# Import func
from sqlalchemy import func
# Build a query to select the state and count of ages by state: stmt
stmt = select([census.columns.state, func.count(census.columns.age)])
# Group stmt by state
stmt = stmt.group_by(census.columns.state)
# Execute the statement and store all the records: results
results = connection.execute(stmt).fetchall()
# Print results
print(results)
# Print the keys/column names of the results returned
print(results[0].keys())
```
# Determining the population sum by state
To avoid confusion with query result column names like count_1, we can use the .label() method to provide a name for the resulting column. This gets appended to the function method we are using, and its argument is the name we want to use.
We can pair func.sum() with .group_by() to get a sum of the population by State and use the label() method to name the output.
We can also create the func.sum() expression before using it in the select statement. We do it the same way we would inside the select statement and store it in a variable. Then we use that variable in the select statement where the func.sum() would normally be.
```
# Import func
from sqlalchemy import func
# Build an expression to calculate the sum of pop2008 labeled as population
pop2008_sum = func.sum(census.columns.pop2008).label('population')
# Build a query to select the state and sum of pop2008: stmt
stmt = select([census.columns.state, pop2008_sum])
# Group stmt by state
stmt = stmt.group_by(census.columns.state)
# Execute the statement and store all the records: results
results = connection.execute(stmt).fetchall()
# Print results
print(results)
# Print the keys/column names of the results returned
print(results[0].keys())
```
# ResultSets and pandas DataFrames
We can feed a ResultSet directly into a pandas DataFrame, which is the workhorse of many Data Scientists in PythonLand. Jason demonstrated this in the video. In this exercise, you'll follow exactly the same approach to convert a ResultSet into a DataFrame.
```
# import pandas
import pandas as pd
# Create a DataFrame from the results: df
df = pd.DataFrame(results)
# Set column names
df.columns = results[0].keys()
# Print the DataFrame
print(df)
```
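The same pattern works on any list of row tuples — the rows below are hypothetical stand-ins for what `.fetchall()` returns:

```python
import pandas as pd

# hypothetical rows as .fetchall() might return them
rows = [("Alabama", 4649367), ("Alaska", 664546)]
keys = ["state", "population"]

df = pd.DataFrame(rows, columns=keys)
print(df.shape)  # (2, 2)
```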
# From SQLAlchemy results to a plot
We can also take advantage of pandas and Matplotlib to build figures of our data. Remember that data visualization is essential for both exploratory data analysis and communication of your data!
```
# Import pyplot as plt from matplotlib
import matplotlib.pyplot as plt
# Create a DataFrame from the results: df
df = pd.DataFrame(results)
# Set Column names
df.columns = results[0].keys()
# Print the DataFrame
df.plot.bar()
# Plot the DataFrame
plt.show()
```
<img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*The Vaidman Detection Test: Interaction Free Measurement*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.
***
### Contributors
Alex Breitweiser
***
### Qiskit Package Versions
```
import qiskit
qiskit.__qiskit_version__
```
## Introduction
One surprising result of quantum mechanics is the ability to measure something without ever directly "observing" it. This interaction-free measurement cannot be reproduced in classical mechanics. The prototypical example is the [Elitzur–Vaidman Bomb Experiment](https://en.wikipedia.org/wiki/Elitzur%E2%80%93Vaidman_bomb_tester) - in which one wants to test whether bombs are active without detonating them. In this example we will test whether an unknown operation is null (the identity) or an X gate, corresponding to a dud or a live bomb.
### The Algorithm
The algorithm will use two qubits, $q_1$ and $q_2$, as well as a small parameter, $\epsilon = \frac{\pi}{n}$ for some integer $n$. Call the unknown gate, which is either the identity or an X gate, $G$, and assume we have it in a controlled form. The algorithm is then:
1. Start with both $q_1$ and $q_2$ in the $|0\rangle$ state
2. Rotate $q_1$ by $\epsilon$ about the Y axis
3. Apply a controlled $G$ on $q_2$, conditioned on $q_1$
4. Measure $q_2$
5. Repeat (2-4) $n$ times
6. Measure $q_1$

### Explanation and proof of correctness
There are two cases: Either the gate is the identity (a dud), or it is an X gate (a live bomb).
#### Case 1: Dud
After rotation, $q_1$ is now approximately
$$q_1 \approx |0\rangle + \frac{\epsilon}{2} |1\rangle$$
Since the unknown gate is the identity, the controlled gate leaves the two qubit state separable,
$$q_1 \times q_2 \approx (|0\rangle + \frac{\epsilon}{2} |1\rangle) \times |0\rangle$$
and measurement is trivial (we will always measure $|0\rangle$ for $q_2$).
Repetition will not change this result - we will always keep separability and $q_2$ will remain in $|0\rangle$.
After n steps, $q_1$ will flip by $\pi$ to $|1\rangle$, and so measuring it will certainly yield $1$. Therefore, the output register for a dud bomb will read:
$$000...01$$
#### Case 2: Live
Again, after rotation, $q_1$ is now approximately
$$q_1 \approx |0\rangle + \frac{\epsilon}{2} |1\rangle$$
But, since the unknown gate is now an X gate, the combined state after $G$ is now
$$q_1 \times q_2 \approx |00\rangle + \frac{\epsilon}{2} |11\rangle$$
Measuring $q_2$ now might yield $1$, in which case we have "measured" the live bomb (obtained a result which differs from that of a dud) and it explodes. However, this only happens with a probability proportional to $\epsilon^2$. In the vast majority of cases, we will measure $0$ and the entire system will collapse back to
$$q_1 \times q_2 = |00\rangle$$
After every step, the system will most likely return to the original state, and the final measurement of $q_1$ will yield $0$. Therefore, the most likely outcome of a live bomb is
$$000...00$$
which will identify a live bomb without ever "measuring" it. If we ever obtain a 1 in the bits preceding the final bit, we will have detonated the bomb, but this will only happen with probability of order
$$P \propto n \epsilon^2 \propto \epsilon$$
This probability may be made arbitrarily small at the cost of an arbitrarily long circuit.
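That scaling can be checked numerically with a purely classical sketch of the protocol: each step, measuring $q_2$ on a live bomb yields $1$ (detonation) with probability $\sin^2(\epsilon/2)$, so the bomb survives the test only if all $n$ measurements give $0$. (This is a probability calculation only, not a quantum simulation.)

```python
import numpy as np

def boom_probability(n):
    """Chance a live bomb detonates during an n-step Vaidman test."""
    eps = np.pi / n
    # each step detonates with probability sin^2(eps/2);
    # survival requires every one of the n measurements to give 0
    p_survive_step = np.cos(eps / 2) ** 2
    return 1 - p_survive_step ** n

for n in [10, 100, 1000]:
    print(n, boom_probability(n))  # shrinks roughly like pi^2 / (4n)
```

For large $n$ this tends to $\pi^2/(4n)$, confirming that the detonation risk vanishes as the circuit gets longer.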
## Generating Random Bombs
A test set must be generated to experiment on - this can be done by classical (pseudo)random number generation, but as long as we have access to a quantum computer we might as well take advantage of the ability to generate true randomness.
```
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from collections import Counter #Use this to convert results from list to dict for histogram
# importing QISKit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import execute, Aer, IBMQ
from qiskit.providers.ibmq import least_busy
from qiskit.tools.visualization import plot_histogram
# To use IBMQ Quantum Experience
IBMQ.load_accounts()
```
We will generate a test set of 50 "bombs", and each "bomb" will be run through a 20-step measurement circuit. We set up the program as explained in previous examples.
```
# Use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# Use the IBMQ Quantum Experience
# backend = least_busy(IBMQ.backends())
N = 50 # Number of bombs
steps = 20 # Number of steps for the algorithm, limited by maximum circuit depth
eps = np.pi / steps # Algorithm parameter, small
# Prototype circuit for bomb generation
q_gen = QuantumRegister(1, name='q_gen')
c_gen = ClassicalRegister(1, name='c_gen')
IFM_gen = QuantumCircuit(q_gen, c_gen, name='IFM_gen')
# Prototype circuit for bomb measurement
q = QuantumRegister(2, name='q')
c = ClassicalRegister(steps+1, name='c')
IFM_meas = QuantumCircuit(q, c, name='IFM_meas')
```
Generating a random bomb is achieved by simply applying a Hadamard gate to a single qubit, which starts in $|0\rangle$, and then measuring it. This randomly gives a $0$ or $1$, each with equal probability. We run one such circuit for each bomb, since circuits are currently limited to a single measurement.
```
# Quantum circuits to generate bombs
qc = []
circuits = ["IFM_gen"+str(i) for i in range(N)]
# NB: Can't have more than one measurement per circuit
for circuit in circuits:
IFM = QuantumCircuit(q_gen, c_gen, name=circuit)
IFM.h(q_gen[0]) #Turn the qubit into |0> + |1>
IFM.measure(q_gen[0], c_gen[0])
qc.append(IFM)
_ = [i.qasm() for i in qc] # Suppress the output
```
Note that, since we want to measure several discrete instances, we do *not* want to average over multiple shots. Averaging would yield partial bombs, but we assume bombs are discretely either live or dead.
```
result = execute(qc, backend=backend, shots=1).result() # Note that we only want one shot
bombs = []
for circuit in qc:
for key in result.get_counts(circuit): # Hack, there should only be one key, since there was only one shot
bombs.append(int(key))
#print(', '.join(('Live' if bomb else 'Dud' for bomb in bombs))) # Uncomment to print out "truth" of bombs
plot_histogram(Counter(('Live' if bomb else 'Dud' for bomb in bombs))) #Plotting bomb generation results
```
## Testing the Bombs
Here we implement the algorithm described above to measure the bombs. As with the generation of the bombs, it is currently impossible to take several measurements in a single circuit - therefore, it must be run on the simulator.
```
# Use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
qc = []
circuits = ["IFM_meas"+str(i) for i in range(N)]
#Creating one measurement circuit for each bomb
for i in range(N):
bomb = bombs[i]
IFM = QuantumCircuit(q, c, name=circuits[i])
for step in range(steps):
IFM.ry(eps, q[0]) #First we rotate the control qubit by epsilon
if bomb: #If the bomb is live, the gate is a controlled X gate
IFM.cx(q[0],q[1])
#If the bomb is a dud, the gate is a controlled identity gate, which does nothing
IFM.measure(q[1], c[step]) #Now we measure to collapse the combined state
IFM.measure(q[0], c[steps])
qc.append(IFM)
_ = [i.qasm() for i in qc] # Suppress the output
result = execute(qc, backend=backend, shots=1, max_credits=5).result()
def get_status(counts):
# Return whether a bomb was a dud, was live but detonated, or was live and undetonated
# Note that registers are returned in reversed order
for key in counts:
if '1' in key[1:]:
#If we ever measure a '1' from the measurement qubit (q1), the bomb was measured and will detonate
return '!!BOOM!!'
elif key[0] == '1':
#If the control qubit (q0) was rotated to '1', the state never entangled because the bomb was a dud
return 'Dud'
else:
#If we only measured '0' for both the control and measurement qubit, the bomb was live but never set off
return 'Live'
results = {'Live': 0, 'Dud': 0, "!!BOOM!!": 0}
for circuit in qc:
status = get_status(result.get_counts(circuit))
results[status] += 1
plot_histogram(results)
```
# Chapter 9 - Searching Data Structures and Finding Shortest Paths
This notebook contains code accompanying Chapter 9 Searching Data Structures and Finding Shortest Paths in *Practical Discrete Mathematics* by Ryan T. White and Archana Tikayat Ray
For most of the code in the chapter, we need to import the `NumPy` library.
```
import numpy
```
## A Python Implementation of DFS
```
# Depth First Search
#
# INPUTS
# A - an adjacency matrix. It should be square, symmetric, and binary
# source - the number of the source vertex
#
# OUTPUTS
# vertexList - an ordered list of vertices found in the search
def DFS(A, source):
# reduce the source by 1 to avoid off-by-1 errors
source -= 1
# find the number of vertices
n = A.shape[0]
# initialize the unvisited vertex set to be full
unvisited = [1] * n
    # initialize a stack with the source vertex
stack = [source]
# initialize the vertex list
vertexList = []
# while the stack is not empty
while stack:
        # pop the most recently pushed vertex from the stack and store it
v = stack.pop()
# if v is unvisited, add it to our list and mark it as visited
if unvisited[v]:
            # save the number of the newly visited vertex
vertexList.append(v)
# mark the vertex as visited
unvisited[v] = 0
# iterate through the vertices
            for u in range(n - 1, -1, -1):
# add each unvisited neighbor to the stack
if A[v,u] == 1 and unvisited[u] == 1:
stack.append(u)
# return the vertex list found by DFS
return vertexList
```
Let's save the adjacency matrix for the graph in Figure 9.1.
```
# Save the adjacency matrix for the graph in Figure 9.1
A = numpy.array([[0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```
Next, let's run DFS on the graph starting with source vertex 1.
```
# Run DFS on the graph with adjacency matrix A and source 1
vertexList = DFS(A,1)
# Add 1 to the vertex numbers
[x + 1 for x in vertexList]
```
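The matrix-based DFS above scans an entire row of `A` for every visited vertex. The same stack discipline works directly on an adjacency-list representation, which touches only real neighbors; a minimal self-contained sketch (the dictionary-of-lists input format and the 1-based vertex labels are assumptions, not the chapter's code):

```python
def dfs_list(adj, source):
    """Depth-first search over a dict mapping vertex -> list of neighbors."""
    visited, order, stack = set(), [], [source]
    while stack:
        v = stack.pop()
        if v not in visited:
            visited.add(v)
            order.append(v)
            # push larger-numbered neighbors first so the smallest
            # unvisited neighbor is popped (and visited) next
            stack.extend(sorted(adj[v], reverse=True))
    return order

# path graph 1 - 2 - 3 - 4
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(dfs_list(adj, 1))  # [1, 2, 3, 4]
```

The `while`/`pop`/`extend` loop is the same algorithm; only the neighbor lookup changes from a row scan to a list traversal.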
## Shortest Paths on Networks
The following function checks if a path exists between vertex i and vertex j.
```
# create a function that returns True if vertex i and vertex j are
# connected in the graph represented by the input adjacency matrix A
def isConnected(A, i, j):
# initialize the paths matrix to adjacency matrix A
paths = A
# find the number of vertices in the graph
numberOfVertices = A.shape[0]
# find the number of edges in the graph
numberOfEdges = numpy.sum(A)/2
# if vi and vj are adjacent, return True
if paths[i-1][j-1] > 0:
print('Vertex', i, 'and vertex', j, 'are adjacent')
return True
    else:
        # run the loop until we find a path
        for pathLength in range(2, numberOfVertices):
            # exponentiate the adjacency matrix
            paths = numpy.dot(paths, A)
            # if the element in row i, column j is more than 0, we
            # found a path
            if paths[i-1][j-1] > 0:
                print('There is a path with', pathLength,
                      'edges from vertex', i, 'to vertex', j)
                return True
        # no path of any length up to n - 1 exists, so the vertices
        # are not connected
        print('There are no paths from vertex', i, 'to vertex', j)
        return False
```
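This works because entry $(i, j)$ of the $k$-th power of an adjacency matrix counts the walks of length $k$ from vertex $i$ to vertex $j$, so a nonzero entry is exactly a path of that length. A quick self-contained check on a triangle graph:

```python
import numpy

# triangle graph: every pair of the three vertices is adjacent
A = numpy.array([[0, 1, 1],
                 [1, 0, 1],
                 [1, 1, 0]])
A2 = numpy.dot(A, A)
# diagonal entries: two closed walks of length 2 per vertex (out and
# back via either neighbor); off-diagonal: one walk of length 2 per pair
print(A2)
```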
Let's create adjacency matrices for the graphs in Figure 9.6 and test our function.
```
# create an adjacency matrix for the graph G1
A1 = numpy.array([[0, 1, 1, 0, 1, 0], [1, 0, 1, 1, 0, 1],
[1, 1, 0, 1, 1, 0], [0, 1, 1, 0, 1, 0],
[1, 0, 1, 1, 0, 0], [0, 1, 0, 0, 0, 0]])
# check if various vertices are connected
print(isConnected(A1, 1, 4))
print(isConnected(A1, 2, 3))
print(isConnected(A1, 5, 6))
# create an adjacency matrix for graph G2
A2 = numpy.array([[0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 1, 0], [0, 0, 1, 0, 1, 0],
[0, 0, 1, 1, 0, 0], [0, 1, 0, 0, 0, 0]])
print(isConnected(A2, 1, 6))
print(isConnected(A2, 2, 5))
print(isConnected(A2, 1, 4))
```
## Python Implementation of Dijkstra’s Algorithm
```
import numpy
# Dijkstra's algorithm for finding shortest paths from the source
# vertex to all other vertices in the graph
#
# INPUTS
# W - a weight matrix. It should be a square matrix
# i - the number of the source node
#
# OUTPUTS
# shortestDistances - the shortest distances from the source to each
# vertex
# previousVertices - the previous vertex to the destination in a
# shortest path from the source to a destination
def Dijkstra(W, i):
# find the number of vertices
n = W.shape[0]
# initialize the shortest distances to infinity
shortestDistances = numpy.array([numpy.inf] * n)
# initialize the previous vertices
previousVertices = numpy.array([numpy.inf] * n)
# initialize the unvisited vertex set to be full
unvisited = numpy.array([1] * n)
# mark the source as visited
unvisited[i - 1] = 0
# initialize distance from the source to the source as 0
shortestDistances[i - 1] = 0
    # loop for one iteration per vertex until the unvisited set is empty
for _ in range(n):
# find the distances to all unvisited adjacent vertices and
# set others to 0
distances = shortestDistances * unvisited
# find the index of the nearest unvisited vertex (where
# distances > 0)
x = numpy.argmin(numpy.ma.masked_where(
distances == 0, distances))
# mark vertex x as visited
unvisited[x] = 0
# iterate through the vertices
for v in range(n):
oldDistance = shortestDistances[v]
newDistance = shortestDistances[x] + W[v,x]
adjacent = W[v,x] > 0
unvis = unvisited[v]
# if v and x are connected, v has not been visited, and we
# find a shorter distance to node v...
if adjacent and unvis and oldDistance > newDistance:
# save the shortest distance found so far
shortestDistances[v] = shortestDistances[x] + W[v,x]
# save the previous vertex
previousVertices[v] = x
# print the table similar to the book
print(numpy.array([numpy.arange(n) + 1, shortestDistances,
previousVertices + 1]).T)
# return the outputs
return shortestDistances, previousVertices
```
Let's create the weight matrix for the network in Figure 9.15 and run Dijkstra's algorithms to find the shortest paths from $v_1$ to all other vertices.
```
# Create a weight matrix for the network in Figure 9.15
W1 = numpy.array([[0, 4, 1, 0, 2, 0],
[4, 0, 2, 1, 0, 1],
[1, 2, 0, 1, 1, 0],
[0, 1, 1, 0, 2, 0],
[2, 0, 1, 2, 0, 0],
[0, 1, 0, 0, 0, 0]])
# Run Dijkstra's algorithm with a source at vertex v1
shortestDistances, previousVertices = Dijkstra(W1, 1)
```
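For larger networks, the linear scan for the nearest unvisited vertex can be replaced by a binary heap, improving the complexity from $O(n^2)$ toward $O(E \log V)$. A self-contained sketch using Python's standard `heapq` module (the plain-list weight matrix and the function name are assumptions, not the book's code); run on the Figure 9.15 matrix it should reproduce the distances found above:

```python
import heapq

def dijkstra_heap(W, source):
    # W is a weight matrix where 0 means "no edge"; source is 0-indexed
    n = len(W)
    dist = [float('inf')] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: a shorter path was already found
        for v in range(n):
            w = W[u][v]
            if w > 0 and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# the weight matrix from Figure 9.15, as plain lists
W1 = [[0, 4, 1, 0, 2, 0],
      [4, 0, 2, 1, 0, 1],
      [1, 2, 0, 1, 1, 0],
      [0, 1, 1, 0, 2, 0],
      [2, 0, 1, 2, 0, 0],
      [0, 1, 0, 0, 0, 0]]
print(dijkstra_heap(W1, 0))  # shortest distances from v1
```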
Next, we write a function to clean up the outputs and display the paths.
```
# Use the previousVertices chart to construct the shortest path from
# input source to input destination and print a string showing
# the path
def printShortestPath(shortestDistances, previousVertices, source,
destination):
# avoid off-by-one error
source -= 1
destination -= 1
# convert previousVertices to integers
previousVertices = previousVertices.astype(int)
# initialize the path with the destination
path = [destination]
# add the previous vertex from previousVertices until we reach
# the source
for _ in range(previousVertices.shape[0] - 1):
# if the source is in the path, stop
if path[-1] == source:
break
# if the source is not in the path, add the previous vertex
else:
path.append(previousVertices[path[-1]])
# initialize an output string
output = []
# iterate through the path backwards (source to destination)
for i in numpy.flip(path):
# construct a list of strings to output
        # add an arrow before every vertex except the first (the source)
        if i != source:
            output.append('->')
output.append('v' + str(i + 1))
# print the strings with no spaces
print('Path =', *output, '\t\t Distance =',
shortestDistances[destination])
```
And we run it to find:
```
for i in range(2,7):
printShortestPath(shortestDistances, previousVertices, 1, i)
```
Next, we try a network that we know is not connected. First, we will check if the vertices in question are connected.
```
# find the shortest paths to connected vertices
def distancesWithinComponent(source):
# initialize the connected component
component = [source]
# construct the connected component
for i in range(1, W2.shape[0] + 1):
if i != source and isConnected(W2, source, i):
component.append(i)
    # find the weight matrix corresponding to the connected component
subnetwork = W2[numpy.array(component) - 1,:][:,numpy.array(component) - 1]
# run Dijkstra's algorithm
return Dijkstra(subnetwork, 1)
```
Let's find the paths from $v_1$.
```
# Create a weight matrix for the network in Figure 9.16
W2 = numpy.array([[0, 4, 0, 0, 0, 0],
[4, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 4, 0],
[0, 0, 1, 0, 2, 0],
[0, 0, 4, 2, 0, 0],
[0, 1, 0, 0, 0, 0]])
distancesWithinComponent(1)
```
Let's find the paths from $v_3$.
```
distancesWithinComponent(3)
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [In Depth: Naive Bayes Classification](05.05-Naive-Bayes.ipynb) | [Contents](Index.ipynb) | [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.06-Linear-Regression.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# In Depth: Linear Regression
Just as naive Bayes (discussed earlier in [In Depth: Naive Bayes Classification](05.05-Naive-Bayes.ipynb)) is a good starting point for classification tasks, linear regression models are a good starting point for regression tasks.
Such models are popular because they can be fit very quickly, and are very interpretable.
You are probably familiar with the simplest form of a linear regression model (i.e., fitting a straight line to data) but such models can be extended to model more complicated data behavior.
In this section we will start with a quick intuitive walk-through of the mathematics behind this well-known problem, before moving on to see how linear models can be generalized to account for more complicated patterns in data.
We begin with the standard imports:
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
```
## Simple Linear Regression
We will start with the most familiar linear regression, a straight-line fit to data.
A straight-line fit is a model of the form
$$
y = ax + b
$$
where $a$ is commonly known as the *slope*, and $b$ is commonly known as the *intercept*.
Consider the following data, which is scattered about a line with a slope of 2 and an intercept of -5:
```
rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = 2 * x - 5 + rng.randn(50)
plt.scatter(x, y);
```
We can use Scikit-Learn's ``LinearRegression`` estimator to fit this data and construct the best-fit line:
```
from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model.fit(x[:, np.newaxis], y)
xfit = np.linspace(0, 10, 1000)
yfit = model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit);
```
The slope and intercept of the data are contained in the model's fit parameters, which in Scikit-Learn are always marked by a trailing underscore.
Here the relevant parameters are ``coef_`` and ``intercept_``:
```
print("Model slope: ", model.coef_[0])
print("Model intercept:", model.intercept_)
```
We see that the results are very close to the inputs, as we might hope.
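As a cross-check, the same coefficients can be recovered without Scikit-Learn: ``np.polyfit`` with degree 1 solves the identical least-squares problem. A quick sketch on freshly generated data with the same seed (the values should land close to the true slope of 2 and intercept of -5):

```python
import numpy as np

rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = 2 * x - 5 + rng.randn(50)
# degree-1 least-squares fit; polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```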
The ``LinearRegression`` estimator is much more capable than this, however—in addition to simple straight-line fits, it can also handle multidimensional linear models of the form
$$
y = a_0 + a_1 x_1 + a_2 x_2 + \cdots
$$
where there are multiple $x$ values.
Geometrically, this is akin to fitting a plane to points in three dimensions, or fitting a hyper-plane to points in higher dimensions.
The multidimensional nature of such regressions makes them more difficult to visualize, but we can see one of these fits in action by building some example data, using NumPy's matrix multiplication operator:
```
rng = np.random.RandomState(1)
X = 10 * rng.rand(100, 3)
y = 0.5 + np.dot(X, [1.5, -2., 1.])
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
```
Here the $y$ data is constructed from three random $x$ values, and the linear regression recovers the coefficients used to construct the data.
In this way, we can use the single ``LinearRegression`` estimator to fit lines, planes, or hyperplanes to our data.
It still appears that this approach would be limited to strictly linear relationships between variables, but it turns out we can relax this as well.
## Basis Function Regression
One trick you can use to adapt linear regression to nonlinear relationships between variables is to transform the data according to *basis functions*.
We have seen one version of this before, in the ``PolynomialRegression`` pipeline used in [Hyperparameters and Model Validation](05.03-Hyperparameters-and-Model-Validation.ipynb) and [Feature Engineering](05.04-Feature-Engineering.ipynb).
The idea is to take our multidimensional linear model:
$$
y = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots
$$
and build the $x_1, x_2, x_3,$ and so on, from our single-dimensional input $x$.
That is, we let $x_n = f_n(x)$, where $f_n()$ is some function that transforms our data.
For example, if $f_n(x) = x^n$, our model becomes a polynomial regression:
$$
y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots
$$
Notice that this is *still a linear model*—the linearity refers to the fact that the coefficients $a_n$ never multiply or divide each other.
What we have effectively done is taken our one-dimensional $x$ values and projected them into a higher dimension, so that a linear fit can fit more complicated relationships between $x$ and $y$.
### Polynomial basis functions
This polynomial projection is useful enough that it is built into Scikit-Learn, using the ``PolynomialFeatures`` transformer:
```
from sklearn.preprocessing import PolynomialFeatures
x = np.array([2, 3, 4])
poly = PolynomialFeatures(3, include_bias=False)
poly.fit_transform(x[:, None])
```
We see here that the transformer has converted our one-dimensional array into an array with three columns, by raising each value to the first, second, and third powers.
This new, higher-dimensional data representation can then be plugged into a linear regression.
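There is nothing hidden in the transformer: the same columns can be built by hand with NumPy's ``vander``, which makes the projection explicit:

```python
import numpy as np

x = np.array([2, 3, 4])
# columns 1, x, x^2, x^3; drop the bias column to match
# PolynomialFeatures(3, include_bias=False)
X = np.vander(x, 4, increasing=True)[:, 1:]
print(X)
```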
As we saw in [Feature Engineering](05.04-Feature-Engineering.ipynb), the cleanest way to accomplish this is to use a pipeline.
Let's make a 7th-degree polynomial model in this way:
```
from sklearn.pipeline import make_pipeline
poly_model = make_pipeline(PolynomialFeatures(7),
LinearRegression())
```
With this transform in place, we can use the linear model to fit much more complicated relationships between $x$ and $y$.
For example, here is a sine wave with noise:
```
rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = np.sin(x) + 0.1 * rng.randn(50)
poly_model.fit(x[:, np.newaxis], y)
yfit = poly_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit);
```
Our linear model, through the use of 7th-order polynomial basis functions, can provide an excellent fit to this non-linear data!
### Gaussian basis functions
Of course, other basis functions are possible.
For example, one useful pattern is to fit a model that is not a sum of polynomial bases, but a sum of Gaussian bases.
The result might look something like the following figure:

[figure source in Appendix](#Gaussian-Basis)
The shaded regions in the plot are the scaled basis functions, and when added together they reproduce the smooth curve through the data.
These Gaussian basis functions are not built into Scikit-Learn, but we can write a custom transformer that will create them, as shown here and illustrated in the following figure (Scikit-Learn transformers are implemented as Python classes; reading Scikit-Learn's source is a good way to see how they can be created):
```
from sklearn.base import BaseEstimator, TransformerMixin
class GaussianFeatures(BaseEstimator, TransformerMixin):
"""Uniformly spaced Gaussian features for one-dimensional input"""
def __init__(self, N, width_factor=2.0):
self.N = N
self.width_factor = width_factor
@staticmethod
def _gauss_basis(x, y, width, axis=None):
arg = (x - y) / width
return np.exp(-0.5 * np.sum(arg ** 2, axis))
def fit(self, X, y=None):
# create N centers spread along the data range
self.centers_ = np.linspace(X.min(), X.max(), self.N)
self.width_ = self.width_factor * (self.centers_[1] - self.centers_[0])
return self
def transform(self, X):
return self._gauss_basis(X[:, :, np.newaxis], self.centers_,
self.width_, axis=1)
gauss_model = make_pipeline(GaussianFeatures(20),
LinearRegression())
gauss_model.fit(x[:, np.newaxis], y)
yfit = gauss_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit)
plt.xlim(0, 10);
```
We put this example here just to make clear that there is nothing magic about polynomial basis functions: if you have some sort of intuition into the generating process of your data that makes you think one basis or another might be appropriate, you can use them as well.
## Regularization
The introduction of basis functions into our linear regression makes the model much more flexible, but it also can very quickly lead to over-fitting (refer back to [Hyperparameters and Model Validation](05.03-Hyperparameters-and-Model-Validation.ipynb) for a discussion of this).
For example, if we choose too many Gaussian basis functions, we end up with results that don't look so good:
```
model = make_pipeline(GaussianFeatures(30),
LinearRegression())
model.fit(x[:, np.newaxis], y)
plt.scatter(x, y)
plt.plot(xfit, model.predict(xfit[:, np.newaxis]))
plt.xlim(0, 10)
plt.ylim(-1.5, 1.5);
```
With the data projected to the 30-dimensional basis, the model has far too much flexibility and goes to extreme values between locations where it is constrained by data.
We can see the reason for this if we plot the coefficients of the Gaussian bases with respect to their locations:
```
def basis_plot(model, title=None):
fig, ax = plt.subplots(2, sharex=True)
model.fit(x[:, np.newaxis], y)
ax[0].scatter(x, y)
ax[0].plot(xfit, model.predict(xfit[:, np.newaxis]))
ax[0].set(xlabel='x', ylabel='y', ylim=(-1.5, 1.5))
if title:
ax[0].set_title(title)
ax[1].plot(model.steps[0][1].centers_,
model.steps[1][1].coef_)
ax[1].set(xlabel='basis location',
ylabel='coefficient',
xlim=(0, 10))
model = make_pipeline(GaussianFeatures(30), LinearRegression())
basis_plot(model)
```
The lower panel of this figure shows the amplitude of the basis function at each location.
This is typical over-fitting behavior when basis functions overlap: the coefficients of adjacent basis functions blow up and cancel each other out.
We know that such behavior is problematic, and it would be nice if we could limit such spikes explicitly in the model by penalizing large values of the model parameters.
Such a penalty is known as *regularization*, and comes in several forms.
### Ridge regression ($L_2$ Regularization)
Perhaps the most common form of regularization is known as *ridge regression* or $L_2$ *regularization*, sometimes also called *Tikhonov regularization*.
This proceeds by penalizing the sum of squares (2-norms) of the model coefficients; in this case, the penalty on the model fit would be
$$
P = \alpha\sum_{n=1}^N \theta_n^2
$$
where $\alpha$ is a free parameter that controls the strength of the penalty.
This type of penalized model is built into Scikit-Learn with the ``Ridge`` estimator:
```
from sklearn.linear_model import Ridge
model = make_pipeline(GaussianFeatures(30), Ridge(alpha=0.1))
basis_plot(model, title='Ridge Regression')
```
The $\alpha$ parameter is essentially a knob controlling the complexity of the resulting model.
In the limit $\alpha \to 0$, we recover the standard linear regression result; in the limit $\alpha \to \infty$, all model responses will be suppressed.
One advantage of ridge regression in particular is that it can be computed very efficiently—at hardly more computational cost than the original linear regression model.
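That efficiency comes from ridge regression's closed-form solution, $\theta = (X^TX + \alpha I)^{-1}X^Ty$, which is just one linear solve. A minimal NumPy sketch on data like the multidimensional example above (for brevity this version penalizes the intercept as well, which Scikit-Learn's ``Ridge`` deliberately avoids):

```python
import numpy as np

rng = np.random.RandomState(1)
X = 10 * rng.rand(100, 3)
y = 0.5 + np.dot(X, [1.5, -2., 1.])
alpha = 1.0
# prepend a column of ones so an intercept is estimated too
Xb = np.hstack([np.ones((100, 1)), X])
# closed-form ridge: theta = (Xb^T Xb + alpha * I)^(-1) Xb^T y
theta = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(4), Xb.T @ y)
print(theta)  # roughly [intercept, coefficients...]
```

With the small penalty used here, the solution lands very close to the generating parameters `0.5, 1.5, -2.0, 1.0`.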
### Lasso regression ($L_1$ regularization)
Another very common type of regularization is known as lasso, and involves penalizing the sum of absolute values (1-norms) of regression coefficients:
$$
P = \alpha\sum_{n=1}^N |\theta_n|
$$
Though this is conceptually very similar to ridge regression, the results can differ surprisingly: for example, due to geometric reasons lasso regression tends to favor *sparse models* where possible: that is, it preferentially sets model coefficients to exactly zero.
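One way to see where the exact zeros come from: under an orthonormal-design assumption, each lasso coefficient is simply the least-squares coefficient shrunk toward zero and clipped at zero, the so-called soft-thresholding operator. A one-function sketch:

```python
import numpy as np

def soft_threshold(theta_ols, alpha):
    # shrink each coefficient toward zero by alpha, and set it exactly
    # to zero once its magnitude drops below alpha
    return np.sign(theta_ols) * np.maximum(np.abs(theta_ols) - alpha, 0.0)

print(soft_threshold(np.array([2.0, 0.05, -1.5]), 0.1))  # [1.9, 0.0, -1.4]
```

Coefficients smaller in magnitude than the penalty are zeroed outright, which is exactly the sparsity seen in the figure below.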
We can see this behavior in duplicating the ridge regression figure, but using L1-normalized coefficients:
```
from sklearn.linear_model import Lasso
model = make_pipeline(GaussianFeatures(30), Lasso(alpha=0.001))
basis_plot(model, title='Lasso Regression')
```
With the lasso regression penalty, the majority of the coefficients are exactly zero, with the functional behavior being modeled by a small subset of the available basis functions.
As with ridge regularization, the $\alpha$ parameter tunes the strength of the penalty, and should be determined via, for example, cross-validation (refer back to [Hyperparameters and Model Validation](05.03-Hyperparameters-and-Model-Validation.ipynb) for a discussion of this).
## Example: Predicting Bicycle Traffic
As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.
We have seen this data already in [Working With Time Series](03.11-Working-with-Time-Series.ipynb).
In this section, we will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor.
Fortunately, the NOAA makes available their daily [weather station data](http://www.ncdc.noaa.gov/cdo-web/search?datasetid=GHCND) (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.
We will perform a simple linear regression to relate weather and other information to bicycle counts, in order to estimate how a change in any one of these parameters affects the number of riders on a given day.
In particular, this is an example of how the tools of Scikit-Learn can be used in a statistical modeling framework, in which the parameters of the model are assumed to have interpretable meaning.
As discussed previously, this is not a standard approach within machine learning, but such interpretation is possible for some models.
Let's start by loading the two datasets, indexing by date:
```
!sudo apt-get update
!apt-get -y install curl
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
# !wget -o FremontBridge.csv "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
import pandas as pd
counts = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)
weather = pd.read_csv('data/BicycleWeather.csv', index_col='DATE', parse_dates=True)
```
Next we will compute the total daily bicycle traffic, and put this in its own dataframe:
```
daily = counts.resample('d').sum()
daily['Total'] = daily.sum(axis=1)
daily = daily[['Total']] # remove other columns
```
We saw previously that the patterns of use generally vary from day to day; let's account for this in our data by adding binary columns that indicate the day of the week:
```
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i in range(7):
daily[days[i]] = (daily.index.dayofweek == i).astype(float)
```
Similarly, we might expect riders to behave differently on holidays; let's add an indicator of this as well:
```
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016')
daily = daily.join(pd.Series(1, index=holidays, name='holiday'))
daily['holiday'].fillna(0, inplace=True)
```
We also might suspect that the hours of daylight would affect how many people ride; let's use the standard astronomical calculation to add this information:
```
from datetime import datetime
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
days = (date - datetime(2000, 12, 21)).days
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
daily['daylight_hrs'] = list(map(hours_of_daylight, daily.index))
daily[['daylight_hrs']].plot()
plt.ylim(8, 17)
```
We can also add the average temperature and total precipitation to the data.
In addition to the inches of precipitation, let's add a flag that indicates whether a day is dry (has zero precipitation):
```
# temperatures are in 1/10 deg C; convert to C
weather['TMIN'] /= 10
weather['TMAX'] /= 10
weather['Temp (C)'] = 0.5 * (weather['TMIN'] + weather['TMAX'])
# precip is in 1/10 mm; convert to inches
weather['PRCP'] /= 254
weather['dry day'] = (weather['PRCP'] == 0).astype(int)
daily = daily.join(weather[['PRCP', 'Temp (C)', 'dry day']],rsuffix='0')
```
Finally, let's add a counter that increases from day 1, and measures how many years have passed.
This will let us measure any observed annual increase or decrease in daily crossings:
```
daily['annual'] = (daily.index - daily.index[0]).days / 365.
```
Now our data is in order, and we can take a look at it:
```
daily.head()
```
With this in place, we can choose the columns to use, and fit a linear regression model to our data.
We will set ``fit_intercept = False``, because the daily flags essentially operate as their own day-specific intercepts:
```
# Drop any rows with null values
daily.dropna(axis=0, how='any', inplace=True)
column_names = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', 'holiday',
'daylight_hrs', 'PRCP', 'dry day', 'Temp (C)', 'annual']
X = daily[column_names]
y = daily['Total']
model = LinearRegression(fit_intercept=False)
model.fit(X, y)
daily['predicted'] = model.predict(X)
```
Finally, we can compare the total and predicted bicycle traffic visually:
```
daily[['Total', 'predicted']].plot(alpha=0.5);
```
It is evident that we have missed some key features, especially during the summer time.
Either our features are not complete (i.e., people decide whether to ride to work based on more than just these) or there are some nonlinear relationships that we have failed to take into account (e.g., perhaps people ride less at both high and low temperatures).
Nevertheless, our rough approximation is enough to give us some insights, and we can take a look at the coefficients of the linear model to estimate how much each feature contributes to the daily bicycle count:
```
params = pd.Series(model.coef_, index=X.columns)
params
```
These numbers are difficult to interpret without some measure of their uncertainty.
We can compute these uncertainties quickly using bootstrap resamplings of the data:
```
from sklearn.utils import resample
np.random.seed(1)
err = np.std([model.fit(*resample(X, y)).coef_
for i in range(1000)], 0)
```
With these errors estimated, let's again look at the results:
```
print(pd.DataFrame({'effect': params.round(0),
'error': err.round(0)}))
```
We first see that there is a relatively stable trend in the weekly baseline: there are many more riders on weekdays than on weekends and holidays.
We see that for each additional hour of daylight, 129 ± 9 more people choose to ride; a temperature increase of one degree Celsius encourages 65 ± 4 people to grab their bicycle; a dry day means an average of 548 ± 33 more riders, and each inch of precipitation means 665 ± 62 more people leave their bike at home.
Once all these effects are accounted for, we see a modest increase of 27 ± 18 new daily riders each year.
Our model is almost certainly missing some relevant information. For example, nonlinear effects (such as effects of precipitation *and* cold temperature) and nonlinear trends within each variable (such as disinclination to ride at very cold and very hot temperatures) cannot be accounted for in this model.
Additionally, we have thrown away some of the finer-grained information (such as the difference between a rainy morning and a rainy afternoon), and we have ignored correlations between days (such as the possible effect of a rainy Tuesday on Wednesday's numbers, or the effect of an unexpected sunny day after a streak of rainy days).
These are all potentially interesting effects, and you now have the tools to begin exploring them if you wish!
# Analytics and demand forecasting for a multi-national retail store
## Notebook by Edward Warothe
### Introduction
General information about this analysis is in the readme file.
There are five datasets in this analysis: stores, which has location, type, and cluster information about the 54 stores under study; items, which has family, class, and perishable columns; transactions, which has daily average transactions for each store; oil, which has the daily average oil price per barrel; and train, which has date, store number, item number, unit sales, and on-promotion columns. We'll analyze all of these to look for patterns and/or interesting features in the data.
At the time of writing, I'm using a Windows machine with a Core i7 and 8 GB of RAM. Why is this information important? As you'll see, the entire dataset takes around 5 GB in CSV format, which is too much for my machine to handle. My solution was to bucketize the train dataset into years (2013-2017). This method worked out especially well compared to several others I considered. I even have a short [medium article](https://eddywarothe.medium.com/is-your-data-too-big-for-your-ram-b3ed17a74095) comparing the different solutions I tried, the results and compromises of each, and my final choice, which is really useful in case you come across a similar problem.
The images and charts are rendered using plotly's non-interactive SVG renderer (plotly.io). To get full plotly interactivity in your notebook (several hover-tool capabilities), if you have an internet connection, delete the `import plotly.io as pio` and `pio.renderers.default = 'svg'` lines below.
```
# First, we load the required python packages to use in the analyses.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import gc
import time
import pyarrow
from scipy.signal import savgol_filter
from fbprophet import Prophet
import plotly.graph_objects as go
import plotly.express as px
import plotly.io as pio
pio.renderers.default = 'svg'
import warnings
warnings.filterwarnings('ignore')
```
After loading the train dataset, I split it by year and persist it into my storage as a parquet file, which preserves the data types I have preset and has read times approximately 50% faster than CSV. Before the split, I will separate the negative unit-sales values from the positive ones. These negative values are from item returns; later on, we'll analyze the returns data.
```
dtype = {'store_nbr':np.int16, 'item_nbr':np.int32, 'unit_sales':np.float32, 'onpromotion':object}
chunks = pd.read_csv('C:/favorita2/train.csv', parse_dates=['date'], dtype=dtype,
chunksize=1000000, usecols=[1,2,3,4,5])
# the below section is in comment form because running it consumes too many resources on my machine
'''
type(chunks)
train = pd.DataFrame()
count = 0
start = time.time()
for chunk in chunks:
train = pd.concat([train, chunk])
print('chunk', count, 'time taken', time.time()-start)
count += 1
returns = train[train['unit_sales']<0]
returns.to_csv('C:/returns.csv')
# then get the symmetric difference of the two
train = train.merge(returns,indicator = True, how='left').loc[lambda x: x['_merge']=='left_only']
# get the years by splitting
year1 = train[train['date'].between('2013-01-01', '2013-12-31')]
year2 = train[train['date'].between('2014-01-01', '2014-12-31')]
year3 = train[train['date'].between('2015-01-01', '2015-12-31')]
year4 = train[train['date'].between('2016-01-01', '2016-12-31')]
year5 = train[train['date'].between('2017-01-01', '2017-12-31')]
# it's fairly easy to save each as a parquet file for later reference and analysis
year1.to_parquet('C:/year1.parquet', engine = 'pyarrow', index=False)
# to load the dataset
year1 = pd.read_parquet('C:/year1.parquet')
'''
```
From this point, analysis becomes easier. Since our focus is on demand forecasting for certain items via time series analysis, we first answer some basic questions about our data:
1. Which items are getting popular as time goes on? (so as to stock more of these items depending on when they're popular)
2. Which are getting less popular?
3. Which have consistently good sales?
4. What family do the preceding items belong to?
5. How does location affect sales?
6. How does oil price and transaction number affect sales?
7. Which items were returned most? How did returns affect sales?
8. What are the expected forecast sales for September 2017?
We answer these and many more in our quest to extract as much info as possible from the dataset.
Since loading the entire dataset takes up a lot of time and resources, we'll load the chunks, which have been split by year, from now on.
```
items = pd.read_csv('C:/favorita2/items.csv')
year1 = pd.read_parquet('C:/favorita2/year1.parquet')
year2 = pd.read_parquet('C:/favorita2/year2.parquet')
year3 = pd.read_parquet('C:/favorita2/year3.parquet')
year4 = pd.read_parquet('C:/favorita2/year4.parquet')
year5 = pd.read_parquet('C:/favorita2/year5.parquet')
```
#### 1. Which items had increasing demand over the years? (increasing sales counts)
```
def get_counts(data):
    # function to get item count in a particular year
    item_count = data.item_nbr.value_counts().to_frame()
    item_count = item_count.reset_index(level=0)
    item_count.rename(columns={'index':'item_nbr', 'item_nbr':'counts'}, inplace=True)
    return item_count
count1 = get_counts(year1)
count2 = get_counts(year2)
count3 = get_counts(year3)
count4 = get_counts(year4)
count5 = get_counts(year5)
count1.head()
```
Next we write a function to get the item-count percentage difference, e.g. between year1 (2013) and year2 (2014), for items featured in both years.
```
# get difference in item count for 2 consecutive years
def get_diff(data1, data2):
    combined = data1.merge(data2, on='item_nbr', how='inner', suffixes=('_a', '_b'))
    combined['diff'] = ((combined['counts_b'] - combined['counts_a']) / combined['counts_a']).round(2)
    return combined.sort_values(by='diff', ascending=False).merge(items, on='item_nbr', how='left')
diff = get_diff(count1, count2)
diff.head()
```
We can use the percentage differences between consecutive years to answer question 1, but we'll have to drop every column except item_nbr and diff from the result returned by the get_diff function.
```
# get difference in item count for 2 consecutive years
def get_diff(data1, data2):
    combined = data1.merge(data2, on='item_nbr', how='inner', suffixes=('_a', '_b'))
    combined['diff'] = ((combined['counts_b'] - combined['counts_a']) / combined['counts_a']).round(2)
    return combined.sort_values(by='diff', ascending=False).merge(items, on='item_nbr', how='left')[['item_nbr', 'diff']]
diff1 = get_diff(count1,count2)
diff2 = get_diff(count2,count3)
diff3 = get_diff(count3,count4)
diff4 = get_diff(count4,count5)
dfs = [diff1, diff2, diff3, diff4]
from functools import reduce
df = reduce(lambda left,right: pd.merge(left,right, on='item_nbr'), dfs)
df.merge(items, on='item_nbr', how='inner').iloc[:,[0,1,2,3,4,5]].head(5)
```
The diff_x and diff_y keep repeating due to pandas merge behaviour for suffixes. In this case, the diff columns from left to right are percentage differences between the item counts from 2013 until 2017 respectively. That is, diff_x = 2014 item count - 2013 item count, and so on. Note we are limited in the number of items we can preview since the 'inner' parameter from the merge function only selects item numbers common to all the years. This will be remedied later by selecting the difference between the last 3 and 2 years.
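Pandas' suffix behaviour can be sketched with toy frames (not the actual counts): when reduced merges reuse the column name `diff`, the first clash gets `_x`/`_y` suffixes and the last frame's column keeps its bare name.

```
import pandas as pd
from functools import reduce

a = pd.DataFrame({'item_nbr': [1, 2], 'diff': [0.1, 0.2]})
b = pd.DataFrame({'item_nbr': [1, 2], 'diff': [0.3, 0.4]})
c = pd.DataFrame({'item_nbr': [1, 2], 'diff': [0.5, 0.6]})
merged = reduce(lambda l, r: pd.merge(l, r, on='item_nbr'), [a, b, c])
print(list(merged.columns))  # ['item_nbr', 'diff_x', 'diff_y', 'diff']
```

This is why the sort keys later in the notebook mix suffixed and unsuffixed `diff` columns.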
#### 2. Which items were consistent performers? (those with consistent improvement in sales counts)
Note that since we're getting the differences between each year consecutively, positive values indicate an improvement from the previous year.
```
merged_counts = df.merge(items, on='item_nbr', how='inner').iloc[:,[0,1,2,3,4]]
merged_counts[(merged_counts > 0).all(1)].merge(items, on='item_nbr', how='left')
```
There are only three items that had increased transaction count over the years. The GroceryI item, 910928, is particularly interesting since it's constantly increasing in demand.
Let's have a look at the three years from 2014, since they had more items in common.
```
dfs = [diff2, diff3, diff4]
from functools import reduce
df = reduce(lambda left,right: pd.merge(left,right, on='item_nbr'), dfs)
df.merge(items, on='item_nbr', how='inner').iloc[:,[0,1,2,3,4]].head(5)
merged_counts = df.merge(items, on='item_nbr', how='inner').iloc[:,[0,1,2,3]]
merged_counts[(merged_counts > 0).all(1)].merge(items, on='item_nbr', how='inner').sort_values(by=['diff','diff_y'], ascending=False).head(5)
```
There are 22 items with increasing demand from 2014 to 2017. We see that 2016 saw some big jumps in demand for select items compared to the other years. The inventory and restocking options for these items should be a priority for the company.
Using this method, we can easily get the worst-performing items across the last 4 years (2014 to 2017).
#### 3. Which items had consistently decreasing purchase counts? (which more or less equals demand)
```
merged_counts[(merged_counts.iloc[:,[1,2,3]] < 0).all(1)].merge(items, on='item_nbr', how='inner').family.value_counts()
```
There are 99 items which have had decreasing purchase counts over the last 4 years, with 64% of these items belonging to GroceryI and cleaning.
#### 4. Which items are in demand during the sales spikes?
The next question in the demand planning process is when the spikes happen, and which items are in demand during these spikes. For this analysis, we'll make a time series by averaging daily unit sales, and apply a Savitzky-Golay filter to smooth it, giving a view of the general trend and cyclical movements in the data.
```
items = pd.read_csv('C:/favorita2/items.csv')
year1 = pd.read_parquet('C:/favorita2/year1.parquet')
year2 = pd.read_parquet('C:/favorita2/year2.parquet')
year3 = pd.read_parquet('C:/favorita2/year3.parquet')
temp1 = pd.concat([data.groupby('date', as_index=False)['unit_sales'].mean() for data in [year1, year2, year3]])
del([year1, year2, year3])
import gc
gc.collect()
year4 = pd.read_parquet('C:/favorita2/year4.parquet')
year5 = pd.read_parquet('C:/favorita2/year5.parquet')
temp2 = pd.concat([data.groupby('date', as_index=False)['unit_sales'].mean() for data in [year4, year5]])
del([year4,year5])
df = pd.concat([temp1,temp2])
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['date'], y=df['unit_sales']))
fig.add_trace(go.Scatter(x=df['date'],y=savgol_filter(df['unit_sales'],31,3)))
```
There are two groups of spikes in the time series: one towards the end of each month, the other in the mid-month period (around the 15th-17th). We filter the spikes with a relatively high daily average unit-sales value (12 and above) and see which items featured prominently during these days.
```
df['date'] = pd.to_datetime(df['date'])
spikes = df[df['unit_sales']>12]
spikes.info()
# since loading the entire dataset is out of the question, we load 2013 and compare it to the spikes in 2016
year1 = pd.read_parquet('C:/favorita2/year1.parquet')  # reload; year1 was deleted earlier to free memory
y1_spikes = spikes[spikes['date'].dt.year == 2013].merge(year1, on='date', how='inner')
get_counts(y1_spikes).merge(items, on='item_nbr', how='left').iloc[:,[0,1,2]].head(200).family.value_counts()
```
Which stores were featured in these spikes?
```
y1_spikes.store_nbr.value_counts().head(5)
# let's compare to the spikes in 2016
year4 = pd.read_parquet('C:/favorita2/year4.parquet')  # reload; year4 was deleted earlier to free memory
y4_spikes = spikes[spikes['date'].dt.year == 2016].merge(year4, on='date', how='inner')
get_counts(y4_spikes).merge(items, on='item_nbr', how='left').iloc[:,[0,1,2]].head(200).family.value_counts()
y4_spikes.store_nbr.value_counts().head(5) # almost the same performance for stores compared
```
I also came across an interesting pattern while looking at individual spikes: during 2013, certain meat items were the most popular during these spikes, compared to beverage items during 2016.
```
# topitems: helper (assumed; not defined earlier in this excerpt) listing the most-sold items on a given date
def topitems(data, day):
    return get_counts(data[data['date'] == day]).merge(items, on='item_nbr', how='left')
topitems(year1, '2013-06-02').head(10)
topitems(year5, '2017-04-01').head(10)
```
#### 5. How does location affect sales? (What are the different sales trends in these locations)
The answer to this requires a visualization tool like Streamlit, Tableau, or Power BI to distinctly represent the different geographies. This is done at the end of this notebook.
#### 6. How did oil and transaction number affect sales?
We graph out a time series for both and look for changes in trend.
```
oil = pd.read_csv('C:/favorita2/oil.csv', parse_dates = [0])
oil.info() # dcoilwtico is the daily oil price
px.line(oil, x='date', y='dcoilwtico', color_discrete_sequence=['#ffa500'])
```
From the graph, we might expect an increase in unit sales starting from mid-2014, when oil prices fell, but the general time series does not show a major increase in demand.
Although it would be better to look at each store individually, let's analyze the daily average transaction number.
```
transx = pd.read_csv('c:/favorita2/transactions.csv', parse_dates=[0])
transx.info()
grp_transx = transx.groupby('date', as_index=False)['transactions'].mean()
grp_transx.info()
fig = go.Figure()
fig.add_trace(go.Scatter(x=grp_transx['date'], y=grp_transx['transactions']))
fig.add_trace(go.Scatter(x=grp_transx['date'],y=savgol_filter(grp_transx['transactions'],29,3)))
```
The transactions are significant during the end of year holidays. Jan 1st 2014 shows an abnormal drop in average transactions and might warrant further analysis to ascertain the cause.
#### 7. How did returns affect sales?
Returns may signal problems with the items or with delivery, and may cost the company lost revenue or, even worse, lost customers. Questions to answer in this analysis include: which items were returned most? When were they returned, and is there a pattern or common date? Are returns increasing or decreasing? Where do they occur (a particular store or cluster)?
```
# to load the returns file
returns = pd.read_parquet('C:/favorita2/returns.parquet', engine='pyarrow')
returns.drop(['onpromotion'], axis=1, inplace=True) # do away with onpromotion col since it has only 8 items
returns.unit_sales = returns.unit_sales * -1
returns.info()
items = pd.read_csv('C:/favorita2/items.csv')
items_returned = returns.item_nbr.value_counts().to_frame() # might have defects: supplier or delivery/storage problem
items_returned.reset_index(level=0, inplace=True)
items_returned.rename(columns={'index':'item_nbr', 'item_nbr':'count'}, inplace=True)
items_returned.merge(items, on='item_nbr', how='left').head(15)
```
There were 2548 returned items, with 98% of those having been returned more than 10 times. The above table shows the top 15 returned items. Contrary to my expectation, most returns do not belong to the GroceryI or beverage families but to Automotive and Home & Kitchen II. Since the specific items returned have been revealed, the next step for an analyst would be to find out the exact reason for the returns and avoid future errors where possible. Do the returns happen at a specific time? Let's find out:
```
import plotly.express as px
from scipy.signal import savgol_filter
return_date = returns.groupby('date', as_index=False)['item_nbr'].count()
px.line(return_date, x="date", y=savgol_filter(return_date.item_nbr, 29,3))
```
From the above plot, returns are certainly on the increase. This is not desirable, since the transactions and average unit-sales time series are almost constant through the five years in question. The causes of this increasing trend should be checked and solutions delivered quickly to prevent lost revenue.
The days around April 23rd 2016 show an abnormally large increase in returns. These could be due to the effects of the earthquake on April 16th, as priorities changed for several people with regard to items needed from retail stores.
Which stores had most returned items?
```
store = pd.read_csv('C:/favorita2/stores.csv')
str_returns = returns.store_nbr.value_counts().to_frame().reset_index(level=0)
str_returns.rename(columns={'index':'store_nbr', 'store_nbr':'returns'}, inplace=True)
merged = str_returns.merge(store, on='store_nbr', how='left')
merged.head(7)
merged.type.value_counts()
```
Store 18 (type B, cluster 16) leads with 483 returns out of the top 27 stores whose return counts exceed 100. Type D and A stores make up the majority of the stores with most returns. Pichincha and Quito led in terms of location; this could be because they have many stores per unit area compared to other states.
Which stores lost the greatest revenue for returned items?
```
avg_returns = returns.groupby('store_nbr', as_index=False)['unit_sales'].sum().sort_values(by='unit_sales', ascending=False)
avg_returns.reset_index(drop=True, inplace=True)
avg_returns.merge(store, on='store_nbr', how='left').head(10)
```
Let's look into the first three stores. The stores leading the table had returned items with more monetary value than the rest; for example, a single item could be worth 10000 in unit sales while most had a unit-sale value of around 4. It would be prudent to find out why these high-value items are being returned, to avoid future loss of revenue. A high unit-sales value could also indicate that an item is popular and simply accumulates many returns in a given period. For store 18, eight cleaning items led the count of items returned; for store 2, GroceryI items led.
#### 8. What are the forecast sales transactions for September 2017?
To answer this question we'll use Prophet, a forecasting library developed by Facebook.
Why Prophet? It is fully automatic, lightweight, has holiday integration for time series, and is relatively easy to interpret since it decomposes the time series into trend and seasonality.
We'll forecast sales transactions rather than individual item sales for a simple reason: computational power. The first week of January in the train dataset contains over 1600 items, several with over 1000 counts in that single week. My machine takes over 10 minutes to fit and predict that one week's worth of data. I'm working through better alternatives to Prophet to handle that data.
In the meantime, let's predict the transaction volumes for the stores:
```
transx = pd.read_csv('C:/favorita2/transactions.csv', parse_dates=[0])
transx.rename(columns={'date':'ds', 'transactions':'y'}, inplace=True)
transx.info()
model = Prophet().fit(transx)
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
model.plot(forecast);
model.plot_components(forecast);
```
The above graphs show the forecast trends along each timeline: across the whole dataset timeframe, yearly, and weekly.
Let's aggregate our visualization by store number for more actionable information.
```
import logging
logging.getLogger('fbprophet').setLevel(logging.WARNING)
grouped = transx.groupby('store_nbr')
final = pd.DataFrame()
for g in grouped.groups:
    group = grouped.get_group(g)
    m = Prophet(daily_seasonality=False)
    m.fit(group)
    future = m.make_future_dataframe(periods=30)
    forecast = m.predict(future)
    forecast = forecast.rename(columns={'yhat': 'store_'+str(g)})
    final = pd.merge(final, forecast.set_index('ds'), how='outer', left_index=True, right_index=True)
final = final[['store_' + str(g) for g in grouped.groups.keys()]]
final = final.reset_index(level=0)
final.head()
final = final.replace(float('NaN'), 0)
stores = ['store_'+str(i) for i in range(1,55)]
px.line(final, x='ds', y=[store for store in final[stores]])
```
The above graph shows the daily transactions time series for each store, plus a 30 day forecast for the period between mid-August to mid-September. For better visibility, we'll use a smoothing function to get a glimpse of the trends for each store. I have created a [Power BI dashboard](https://github.com/Edwarothe/Demand-Planning-for-Corporacion-Favorita/blob/main/favorita.pbix) to show the trend dynamics of each store over the 5 year timeline. If you're not able to load the entire pbix file, [this](https://github.com/Edwarothe/Demand-Planning-for-Corporacion-Favorita/blob/main/favorita.pdf) is a pdf snapshot of the analysis.
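The smoothing mentioned above can be sketched as follows (shown on synthetic per-store columns so the snippet stands alone; in the notebook it would be applied to the `final` frame built earlier):

```
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

dates = pd.date_range('2013-01-01', periods=200, freq='D')
final_demo = pd.DataFrame({'ds': dates,
                           'store_1': np.random.rand(200).cumsum(),
                           'store_2': np.random.rand(200).cumsum()})

smoothed = final_demo.copy()
for col in ['store_1', 'store_2']:
    # same Savitzky-Golay settings used for the earlier time series plots
    smoothed[col] = savgol_filter(final_demo[col], window_length=29, polyorder=3)
```

The smoothed frame can then be passed to `px.line` in place of `final` for a cleaner per-store trend view.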
<a href="https://colab.research.google.com/github/pabloderen/SightlineStudy/blob/master/Sightline.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#!/usr/bin/env python
# coding: utf-8
# # Collision analysis
import pandas as pd
import numpy as np
import itertools
import numba
from numba import vectorize
import os
import time
from numba import cuda
import multiprocessing as mp
from multiprocessing import Pool
from functools import partial
from numpy.lib import recfunctions as rfn
os.environ["OMP_NUM_THREADS"] = "10"
os.environ["OPENBLAS_MAIN_FREE"] = "10"
os.environ['NUMBAPRO_LIBDEVICE'] = "/usr/local/cuda-10.0/nvvm/libdevice"
os.environ['NUMBAPRO_NVVM'] = "/usr/local/cuda-10.0/nvvm/lib64/libnvvm.so"
```
# Main run:
Remember to upload the files
```
# Logger for python console
def log(message):
    print('{} , {}'.format(time.time(), message))
@numba.jit(forceobj=True, parallel=True)  # forceobj and nopython are mutually exclusive; this function needs object mode
def FilterByBBX(Faces, line):
    maxX = line[0]
    maxY = line[1]
    maxZ = line[2]
    minX = line[3]
    minY = line[4]
    minZ = line[5]
    midX = (Faces[:,0] + Faces[:,3])/2
    midY = (Faces[:,1] + Faces[:,4])/2
    midZ = (Faces[:,2] + Faces[:,5])/2
    aa = np.where((midX >= minX) & (midY >= minY) & (midZ >= minZ) & (midX <= maxX) & (midY <= maxY) & (midZ <= maxZ))
    return Faces[aa]
class BoundingBoxCreate():
    def __init__(self, line):
        XMax = max(line[3], line[0])
        YMax = max(line[4], line[1])
        ZMax = max(line[5], line[2])
        XMin = min(line[3], line[0])
        YMin = min(line[4], line[1])
        ZMin = min(line[5], line[2])
        self.box = np.array([XMax, YMax, ZMax, XMin, YMin, ZMin])
@numba.jit(nopython=True)
def split(aabb):
    minX = aabb[3]
    maxX = aabb[0]
    minY = aabb[4]
    maxY = aabb[1]
    minZ = aabb[5]
    maxZ = aabb[2]
    centerX = (minX + maxX)/2
    centerY = (minY + maxY)/2
    centerZ = (minZ + maxZ)/2  # was (minZ + maxY)/2, a typo that skewed the z split
    bbox0 = (maxX, maxY, maxZ, centerX, centerY, centerZ)
    bbox1 = (centerX, maxY, maxZ, minX, centerY, centerZ)
    bbox2 = (maxX, centerY, maxZ, centerX, minY, centerZ)
    bbox3 = (centerX, centerY, maxZ, minX, minY, centerZ)
    bbox4 = (maxX, maxY, centerZ, centerX, centerY, minZ)
    bbox5 = (centerX, maxY, centerZ, minX, centerY, minZ)
    bbox6 = (maxX, centerY, centerZ, centerX, minY, minZ)
    bbox7 = (centerX, centerY, centerZ, minX, minY, minZ)
    return bbox0, bbox1, bbox2, bbox3, bbox4, bbox5, bbox6, bbox7
@numba.jit(nopython=True)
def XClipLine(d, vecMax, vecMin, v0, v1, f_low, f_high):
    # Method for AABB vs line taken from https://github.com/BSVino/MathForGameDevelopers/tree/line-box-intersection
    # Find the point of intersection in this dimension only, as a fraction of the total vector
    f_dim_low = (vecMin[d] - v0[d])/(v1[d] - v0[d] + 0.0000001)
    f_dim_high = (vecMax[d] - v0[d])/(v1[d] - v0[d] + 0.0000001)
    # Make sure low is less than high
    if f_dim_high < f_dim_low:
        f_dim_high, f_dim_low = f_dim_low, f_dim_high
    # If this dimension's high is less than the low we got so far, we definitely missed
    if f_dim_high < f_low:
        return 0, 0
    # Likewise if this dimension's low is greater than the high we got so far
    if f_dim_low > f_high:
        return 0, 0
    # Add the clip from this dimension to the previous results
    f_low = max(f_dim_low, f_low)
    f_high = min(f_dim_high, f_high)
    if f_low > f_high:
        return 0, 0
    return f_low, f_high
@numba.jit(nopython=True)
def LineAABBIntersection(aabbBox, line):
    # Method for AABB vs line taken from https://github.com/BSVino/MathForGameDevelopers/tree/line-box-intersection
    f_low = 0
    f_high = 1
    v0, v1 = (line[0], line[1], line[2]), (line[3], line[4], line[5])
    vecMax = aabbBox[:3]
    vecMin = aabbBox[-3:]
    x = XClipLine(0, vecMax, vecMin, v0, v1, f_low, f_high)
    if x == (0, 0):  # a non-empty tuple is always truthy, so the previous `if not x` never fired
        return False
    x = XClipLine(1, vecMax, vecMin, v0, v1, x[0], x[1])
    if x == (0, 0):
        return False
    x = XClipLine(2, vecMax, vecMin, v0, v1, x[0], x[1])
    if x == (0, 0):
        return False
    return True
# @numba.jit(forceobj=True, parallel=True)
def checkFaces(Faces, line):
    for f in Faces:
        if LineAABBIntersection(f, line):
            return True
    return False
# @numba.jit(forceobj=True, parallel=True)
def checklines(meshes, line):
    global count
    global totalLines
    if (count % 10) == 0:
        # print("=", end="")
        print("\r {} of {} total lines".format(str(count), totalLines), end="")
    count = count + 1
    for b in meshes:
        # check if the line intersects with this mesh's bounding box
        bbx = b.boundingBox
        if LineAABBIntersection(bbx, line):
            # if true, descend into the 8 split bounding boxes
            splitted = b.parts
            for s in splitted:
                for ss in s.parts:
                    if LineAABBIntersection(ss.boundingBox, line):
                        for sss in ss.parts:
                            check = checkFaces(sss.faces, line)
                            if check:
                                return True
    return False
from google.colab import drive
drive.mount('/content/drive')
```
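As a quick sanity check of the slab-clipping functions above, here is a pure-Python sketch of the same AABB-vs-segment test (no numba; same `(maxX, maxY, maxZ, minX, minY, minZ)` box layout is assumed) run against a unit box:

```
def clip_dim(d, vmax, vmin, v0, v1, f_low, f_high):
    # fractions of the segment where it enters/leaves this dimension's slab
    lo = (vmin[d] - v0[d]) / (v1[d] - v0[d] + 1e-7)
    hi = (vmax[d] - v0[d]) / (v1[d] - v0[d] + 1e-7)
    if hi < lo:
        lo, hi = hi, lo
    f_low, f_high = max(lo, f_low), min(hi, f_high)
    return (f_low, f_high) if f_low <= f_high else None

def segment_hits_aabb(box, line):
    # box = (maxX, maxY, maxZ, minX, minY, minZ); line = (x0, y0, z0, x1, y1, z1)
    v0, v1 = line[:3], line[3:]
    span = (0.0, 1.0)
    for d in range(3):
        span = clip_dim(d, box[:3], box[3:], v0, v1, *span)
        if span is None:
            return False
    return True

unit_box = (1.0, 1.0, 1.0, 0.0, 0.0, 0.0)
print(segment_hits_aabb(unit_box, (-1, 0.5, 0.5, 2, 0.5, 0.5)))  # True: crosses the box
print(segment_hits_aabb(unit_box, (-1, 5, 5, 2, 5, 5)))          # False: passes above it
```

The only difference from the jitted version is that a miss is signalled by `None` rather than the `(0, 0)` sentinel tuple.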
# Mesh class
```
class Mesh():
    def __init__(self, mesh, faces):
        self.Id = mesh[6]
        self.boundingBox = BoundingBoxCreate(mesh).box
        self.parts = [MeshPart(self.Id, x, faces) for x in split(self.boundingBox)]

class MeshPart():
    def __init__(self, Id, boundingBox, faces):
        self.boundingBox = BoundingBoxCreate(boundingBox).box
        ff = faces[faces[:,6] == Id]
        filteredFaces = FilterByBBX(ff, boundingBox)
        drop = np.delete(filteredFaces, np.s_[6:], axis=1)
        self.parts = []
        if drop.any():
            self.parts = [MeshSubPart(Id, x, faces) for x in split(self.boundingBox)]

class MeshSubPart():
    def __init__(self, Id, boundingBox, faces):
        self.boundingBox = BoundingBoxCreate(boundingBox).box
        ff = faces[faces[:,6] == Id]
        filteredFaces = FilterByBBX(ff, boundingBox)
        drop = np.delete(filteredFaces, np.s_[6:], axis=1)
        self.parts = []
        if drop.any():
            self.parts = [MeshSubSubPart(Id, x, faces) for x in split(self.boundingBox)]

class MeshSubSubPart():
    def __init__(self, Id, boundingBox, faces):
        self.boundingBox = BoundingBoxCreate(boundingBox).box
        ff = faces[faces[:,6] == Id]
        filteredFaces = FilterByBBX(ff, boundingBox)
        drop = np.delete(filteredFaces, np.s_[6:], axis=1)
        self.faces = [BoundingBoxCreate(d).box for d in drop]

def createObjects(meshes, faces):
    objects = []
    for m in meshes:
        objects.append(Mesh(m, faces))
    return objects
count = 0
if __name__ == '__main__':
    start = time.time()
    pov_ = pd.read_csv(r"/content/pov_.csv", header=None)
    pov_.columns = ["x", "y", "z"]
    print('{} Points of View'.format(len(pov_)))
    pov_.head(10)
    # ## Reading targets (points over meshes)
    target_ = pd.read_csv(r"/content/targets_.csv", header=None)
    target_.columns = ["x1", "y1", "z1"]
    print('{} targets or points of interest'.format(len(target_)))
    target_.head()
    # ## Reading meshes bounding box
    meshes_ = pd.read_csv(r"/content/context_.csv", header=None, index_col=0)
    meshes_.columns = ["xMax", "yMax", "zMax", "xMin", "yMin", "zMin", "id"]
    print('{} meshes in context set'.format(len(meshes_)))
    meshes_.head()
    # ## Reading meshes faces
    mesh_faces = pd.read_csv(r"/content/mesh_faces.csv", header=None)
    mesh_faces.columns = ["xMax", "yMax", "zMax", "xMin", "yMin", "zMin", "id"]
    print('{} meshes faces in set'.format(len(mesh_faces)))
    mesh_faces.head()
    # ## Creating the cross product of points vs targets to represent the lines of view
    lines = pov_
    lines = lines.assign(foo=1).merge(target_.assign(foo=1)).drop('foo', axis=1)
    lines = lines.drop_duplicates()
    lines = lines.reset_index()
    lines = lines.drop(['index'], axis=1)
    totalLines = len(lines)
    print('{} lines between POV and targets'.format(len(lines)))
    # ## Finding mesh intersection
    result = createObjects(meshes_.values, mesh_faces.values)
    # funB = partial(checklines, result)  # total time 211.28 seconds
    # resultsB = pool.map(funB, lines.values)
    resultsB = [checklines(result, x) for x in lines.values]  # total time 192.399 seconds | 1378 lines between POV and targets
    print("")
    lines['hits'] = resultsB
    positive = len(lines[lines['hits'] == False])
    print('{} lines with clean sight from POV to targets'.format(positive))
    negative = len(lines[lines['hits'] == True])
    print('{} lines with possible context intersection'.format(negative))
    # ## Saving lines with no intersection
    lines[lines['hits'] == False].to_csv('miss.csv')
    # ## Saving lines with possible intersection
    lines[lines['hits'] == True].to_csv('hits.csv')
    end = time.time()
    print('total time {} seconds'.format(round(end - start, 3)))
    print('{},{},{},{},{},{},{},{},{} '.format("POV", "Target", "Meshes", "Meshes faces", "Lines of sight", "Hits", "Miss", "time", "Comments"))
    # note: negative (hits) goes under the "Hits" header and positive (misses) under "Miss"
    data = '{},{},{},{},{},{},{},{},{}'.format(len(pov_), len(target_), len(meshes_), len(mesh_faces), len(lines), negative, positive, round(end - start, 3), "3 Level nesting + numba")
    with open('/content/drive/My Drive/sightlinelog.csv', 'a') as fd:
        fd.write(data + "\n")
    print(data)
```
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).
# Pandas
Credits: The following are notes taken while working through [Python for Data Analysis](http://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793) by Wes McKinney
* Series
* DataFrame
* Reindexing
* Dropping Entries
* Indexing, Selecting, Filtering
* Arithmetic and Data Alignment
* Function Application and Mapping
* Sorting and Ranking
* Axis Indices with Duplicate Values
* Summarizing and Computing Descriptive Statistics
* Cleaning Data (Under Construction)
* Input and Output (Under Construction)
```
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
```
## Series
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels. The data can be any NumPy data type and the labels are the Series' index.
Create a Series:
```
ser_1 = Series([1, 1, 2, -3, -5, 8, 13])
ser_1
```
Get the array representation of a Series:
```
ser_1.values
```
Index objects are immutable and hold the axis labels and metadata such as names and axis names.
Get the index of the Series:
```
ser_1.index
```
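Immutability can be checked directly: item assignment on an Index raises a `TypeError` (a small sketch, using a throwaway Series):

```
from pandas import Series

ser_tmp = Series([1, 2, 3])
try:
    ser_tmp.index[0] = 99   # Index does not support item assignment
except TypeError as err:
    print('Index is immutable:', err)
```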
Create a Series with a custom index:
```
ser_2 = Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
```
Get a value from a Series:
```
ser_2[4] == ser_2['e']
```
Get a set of values from a Series by passing in a list:
```
ser_2[['c', 'a', 'b']]
```
Get values greater than 0:
```
ser_2[ser_2 > 0]
```
Scalar multiply:
```
ser_2 * 2
```
Apply a numpy math function:
```
import numpy as np
np.exp(ser_2)
```
A Series is like a fixed-length, ordered dict.
Create a series by passing in a dict:
```
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = Series(dict_1)
ser_3
```
Re-order a Series by passing in an index (indices not found are NaN):
```
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = Series(dict_1, index=index)
ser_4
```
Check for NaN with the pandas method:
```
pd.isnull(ser_4)
```
Check for NaN with the Series method:
```
ser_4.isnull()
```
Series automatically aligns differently indexed data in arithmetic operations:
```
ser_3 + ser_4
```
Name a Series:
```
ser_4.name = 'foobarbazqux'
```
Name a Series index:
```
ser_4.index.name = 'label'
ser_4
```
Rename a Series' index in place:
```
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
```
## DataFrame
A DataFrame is a tabular data structure containing an ordered collection of columns. Each column can have a different type. DataFrames have both row and column indices and is analogous to a dict of Series. Row and column operations are treated roughly symmetrically. Columns returned when indexing a DataFrame are views of the underlying data, not a copy. To obtain a copy, use the Series' copy method.
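The copy point can be illustrated with a small sketch (a throwaway frame, not the examples below): modifying a copied column leaves the original frame intact, whereas writes through an un-copied reference may not, and will draw a `SettingWithCopyWarning`.

```
from pandas import DataFrame

df_tmp = DataFrame({'state': ['VA', 'MD'], 'pop': [5.0, 4.0]})
pop_copy = df_tmp['pop'].copy()   # independent copy of the column
pop_copy[0] = 99.0                # mutate the copy only
print(df_tmp['pop'][0])           # original value is still 5.0
```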
Create a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
```
Create a DataFrame specifying a sequence of columns:
```
df_2 = DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
```
Like Series, columns that are not present in the data are NaN:
```
df_3 = DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
```
Retrieve a column by key, returning a Series:
```
df_3['state']
```
Retrieve a column by attribute, returning a Series:
```
df_3.year
```
Retrieve a row by position:
```
df_3.iloc[0]
```
Update a column by assignment:
```
df_3['unempl'] = np.arange(5)
df_3
```
Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series):
```
unempl = Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
```
Assign to a column that doesn't exist to create a new column:
```
df_3['state_dup'] = df_3['state']
df_3
```
Delete a column:
```
del df_3['state_dup']
df_3
```
Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified):
```
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = DataFrame(pop)
df_4
```
Transpose the DataFrame:
```
df_4.T
```
Create a DataFrame from a dict of Series:
```
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = DataFrame(data_2)
df_5
```
Set the DataFrame index name:
```
df_5.index.name = 'year'
df_5
```
Set the DataFrame columns name:
```
df_5.columns.name = 'state'
df_5
```
Return the data contained in a DataFrame as a 2D ndarray:
```
df_5.values
```
If the columns are different dtypes, the 2D ndarray's dtype will accommodate all of the columns:
```
df_3.values
```
## Reindexing
Create a new object with the data conformed to a new index. Any missing values are set to NaN.
```
df_3
```
Reindexing rows returns a new frame with the specified index:
```
df_3.reindex(list(reversed(range(0, 6))))
```
Missing values can be set to something other than NaN:
```
df_3.reindex(range(0, 7), fill_value=0)
```
Interpolate ordered data like a time series:
```
ser_5 = Series(['foo', 'bar', 'baz'], index=[0, 2, 4])
ser_5.reindex(range(5), method='ffill')
ser_5.reindex(range(5), method='bfill')
```
Reindex columns:
```
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
```
Reindex rows and columns while filling rows:
```
df_3.reindex(index=list(reversed(range(0, 6))),
fill_value=0,
columns=['state', 'pop', 'unempl', 'year'])
```
Reindex rows and columns in one call (the older `ix` indexer has been removed from pandas):
```
df_6 = df_3.reindex(index=range(0, 7), columns=['state', 'pop', 'unempl', 'year'])
df_6
```
## Dropping Entries
Drop rows from a Series or DataFrame:
```
df_7 = df_6.drop([0, 1])
df_7
```
Drop columns from a DataFrame:
```
df_7 = df_7.drop('unempl', axis=1)
df_7
```
## Indexing, Selecting, Filtering
Series indexing is similar to NumPy array indexing with the added bonus of being able to use the Series' index values.
```
ser_2
```
Select a value from a Series:
```
ser_2[0] == ser_2['a']
```
Select a slice from a Series:
```
ser_2[1:4]
```
Select specific values from a Series:
```
ser_2[['b', 'c', 'd']]
```
Select from a Series based on a filter:
```
ser_2[ser_2 > 0]
```
Select a slice from a Series with labels (note the end point is inclusive):
```
ser_2['a':'b']
```
Assign to a Series slice (note the end point is inclusive):
```
ser_2['a':'b'] = 0
ser_2
```
Pandas supports indexing into a DataFrame.
```
df_6
```
Select specified columns from a DataFrame:
```
df_6[['pop', 'unempl']]
```
Select a slice from a DataFrame:
```
df_6[:2]
```
Select from a DataFrame based on a filter:
```
df_6[df_6['pop'] > 5]
```
Perform a scalar comparison on a DataFrame:
```
df_6 > 5
```
Perform a scalar comparison on a DataFrame, retain the values that pass the filter:
```
df_6[df_6 > 5]
```
Select a slice of rows from a DataFrame (note the end point is inclusive):
```
df_6.loc[2:3]
```
Select a slice of rows from a specific column of a DataFrame:
```
df_6.loc[0:2, 'pop']
```
Select rows based on a boolean condition on a specific column:
```
df_6.loc[df_6.unempl > 5.0]
```
## Arithmetic and Data Alignment
Adding Series objects results in the union of index pairs if the pairs are not the same, resulting in NaN for indices that do not overlap:
```
np.random.seed(0)
ser_6 = Series(np.random.randn(5),
index=['a', 'b', 'c', 'd', 'e'])
ser_6
np.random.seed(1)
ser_7 = Series(np.random.randn(5),
index=['a', 'c', 'e', 'f', 'g'])
ser_7
ser_6 + ser_7
```
Set a fill value instead of NaN for indices that do not overlap:
```
ser_6.add(ser_7, fill_value=0)
```
Adding DataFrame objects results in the union of index pairs for rows and columns if the pairs are not the same, resulting in NaN for indices that do not overlap:
```
np.random.seed(0)
df_8 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
```
Set a fill value instead of NaN for indices that do not overlap:
```
df_10 = df_8.add(df_9, fill_value=0)
df_10
```
Like NumPy, pandas supports arithmetic operations between DataFrames and Series.
Match the index of the Series on the DataFrame's columns, broadcasting down the rows:
```
ser_8 = df_10.iloc[0]
df_11 = df_10 - ser_8
df_11
```
Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match:
```
ser_9 = Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
```
Broadcast over the columns and match the rows (axis=0) by using an arithmetic method:
```
df_10
ser_10 = Series([100, 200, 300])
ser_10
df_10.sub(ser_10, axis=0)
```
## Function Application and Mapping
NumPy ufuncs (element-wise array methods) operate on pandas objects:
```
df_11 = np.abs(df_11)
df_11
```
Apply a function on 1D arrays to each column:
```
func_1 = lambda x: x.max() - x.min()
df_11.apply(func_1)
```
Apply a function on 1D arrays to each row:
```
df_11.apply(func_1, axis=1)
```
Apply a function and return a DataFrame:
```
func_2 = lambda x: Series([x.min(), x.max()], index=['min', 'max'])
df_11.apply(func_2)
```
Apply an element-wise Python function to a DataFrame:
```
func_3 = lambda x: '%.2f' % x
df_11.applymap(func_3)
```
Apply an element-wise Python function to a Series:
```
df_11['a'].map(func_3)
```
## Sorting and Ranking
```
ser_4
```
Sort a Series by its index:
```
ser_4.sort_index()
```
Sort a Series by its values:
```
ser_4.sort_values()
df_12 = DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
```
Sort a DataFrame by its index:
```
df_12.sort_index()
```
Sort a DataFrame by columns in descending order:
```
df_12.sort_index(axis=1, ascending=False)
```
Sort a DataFrame's values by column:
```
df_12.sort_values(by=['d', 'c'])
```
Ranking is similar to numpy.argsort except that ties are broken by assigning each group the mean rank:
```
ser_11 = Series([7, -5, 7, 4, 2, 0, 4, 7])
ser_11 = ser_11.sort_values()
ser_11
ser_11.rank()
```
Rank a Series according to when they appear in the data:
```
ser_11.rank(method='first')
```
Rank a Series in descending order, using the maximum rank for the group:
```
ser_11.rank(ascending=False, method='max')
```
DataFrames can rank over rows or columns.
```
df_13 = DataFrame({'foo' : [7, -5, 7, 4, 2, 0, 4, 7],
'bar' : [-5, 4, 2, 0, 4, 7, 7, 8],
'baz' : [-1, 2, 3, 0, 5, 9, 9, 5]})
df_13
```
Rank a DataFrame over rows:
```
df_13.rank()
```
Rank a DataFrame over columns:
```
df_13.rank(axis=1)
```
## Axis Indexes with Duplicate Values
Labels do not have to be unique in Pandas:
```
ser_12 = Series(range(5), index=['foo', 'foo', 'bar', 'bar', 'baz'])
ser_12
ser_12.index.is_unique
```
Select Series elements:
```
ser_12['foo']
```
Select DataFrame elements:
```
df_14 = DataFrame(np.random.randn(5, 4),
index=['foo', 'foo', 'bar', 'bar', 'baz'])
df_14
df_14.loc['bar']
```
## Summarizing and Computing Descriptive Statistics
Unlike NumPy arrays, Pandas descriptive statistics automatically exclude missing data. NaN values are excluded unless the entire row or column is NA.
```
df_6
df_6.sum()
```
Sum over the rows:
```
df_6.sum(axis=1)
```
Account for NaNs:
```
df_6.sum(axis=1, skipna=False)
```
## Cleaning Data (Under Construction)
* Replace
* Drop
* Concatenate
```
from pandas import Series, DataFrame
import pandas as pd
```
Setup a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
```
### Replace
Replace all occurrences of a string with another string, in place (no copy):
```
df_1.replace('VA', 'VIRGINIA', inplace=True)
df_1
```
In a specified column, replace all occurrences of a string with another string, in place (no copy):
```
df_1.replace({'state' : { 'MD' : 'MARYLAND' }}, inplace=True)
df_1
```
### Drop
Drop the 'population' column and return a copy of the DataFrame:
```
df_2 = df_1.drop('population', axis=1)
df_2
```
### Concatenate
Concatenate two DataFrames:
```
data_2 = {'state' : ['NY', 'NY', 'NY', 'FL', 'FL'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [6.0, 6.1, 6.2, 3.0, 3.1]}
df_3 = DataFrame(data_2)
df_3
df_4 = pd.concat([df_1, df_3])
df_4
```
## Input and Output (Under Construction)
* Reading
* Writing
```
from pandas import Series, DataFrame
import pandas as pd
```
### Reading
Read data from a CSV file into a DataFrame (use sep='\t' for TSV):
```
df_1 = pd.read_csv("../data/ozone.csv")
```
Get a summary of the DataFrame:
```
df_1.describe()
```
List the first five rows of the DataFrame:
```
df_1.head()
```
### Writing
Create a copy of the CSV file, encoded in UTF-8 and hiding the index and header labels:
```
df_1.to_csv('../data/ozone_copy.csv',
encoding='utf-8',
index=False,
header=False)
```
View the data directory:
```
!ls -l ../data/
```
```
import os
import re
import pickle
import time
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from scipy.sparse import csr_matrix, vstack
%matplotlib inline
# Custom modules
import const
import func
lut = pd.read_csv(const.LOOK_UP_TABLE)
lut.head(3)
use_sets = 0 # 0: both, 1: train, 2:test
dat = func.load_data_file(const.TRAIN_FILES[2])
dat_train = dat['data']['features']
id_train = dat['data']['ids']
dat = func.load_data_file(const.TEST_FILES[2])
if use_sets==0:
dat_data = vstack([dat_train, dat['data']['features']], format='csr')
ids = pd.concat([id_train, dat['data']['ids']], axis=0)
elif use_sets==1: # Only train
dat_data = dat_train
ids = id_train
elif use_sets==2: # Only test
dat_data = dat['data']['features']
ids = dat['data']['ids']
del dat, dat_train, id_train
```
## Calculate features based on jayjay definition
```
# Prepare dataframe with maximum timestamp per sample
df = pd.DataFrame(columns=['L0max','L1max','L2max','L3max'], index=ids.Id)
for l in range(4):
# Get column numbers of sparse matrix belonging to this line
col_date = [int(i) for i in lut[lut['line']==l].col_dat.values if not np.isnan(i)]
    # Get maximum timestamp for this line
df['L{}max'.format(l)] = dat_data[:, col_date].max(1).todense().A1
# Because of sparse matrix NA were encoded as zero
df['L{}max'.format(l)].replace(0, np.nan, inplace=True)
    # Round timestamps to 2 decimals (round() returns a copy, so reassign)
    df['L{}max'.format(l)] = df['L{}max'.format(l)].round(2)
# Get row index as column to check sorting afterwards
df.reset_index(inplace=True)
df.reset_index(inplace=True)
# Sort by ID
df.sort_values(['Id'], inplace=True)
# Add columns with next and previous timestamp per line
for col in df.columns:
df[col + '_prev'] = df[col].shift(1)
df[col + '_next'] = df[col].shift(-1)
df.set_index('Id', inplace=True)
# List to keep track of all the columns we generated. Useful for some operations later.
feat_cols = []
for l in range(4):
# Samples are the same if they are both NA or have the same timestamp
df['sameL{}_next'.format(l)] = 1 * (df['L{}max'.format(l)]==df['L{}max_next'.format(l)]).astype(int) + \
1 * ((df['L{}max'.format(l)].isnull()) & (df['L{}max_next'.format(l)].isnull())).astype(int)
# Samples are the same if they are both NA or have the same timestamp
df['sameL{}_prev'.format(l)] = 1 * (df['L{}max'.format(l)]==df['L{}max_prev'.format(l)]).astype(int) + \
1 * ((df['L{}max'.format(l)].isnull()) & (df['L{}max_prev'.format(l)].isnull())).astype(int)
feat_cols += ['sameL{}_prev'.format(l), 'sameL{}_next'.format(l)]
df.sort_values('index', inplace=True)
df.head()
df[feat_cols].to_csv(os.path.join(const.DATA_PATH, 'feat_set_jayjay_same_L_{}.csv'.format(use_sets)), index_label='ID')
```
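The shift-based neighbor comparison above can be illustrated on a toy frame; the column names here are hypothetical and only mirror the pattern used in the real feature code:

```python
import numpy as np
import pandas as pd

# Toy frame: one "max timestamp" column, already sorted by Id.
df = pd.DataFrame({'Id': [1, 2, 3, 4],
                   'Lmax': [82.24, 82.24, np.nan, np.nan]})

# Compare each row with its neighbour via shift().
df['Lmax_prev'] = df['Lmax'].shift(1)

# Rows match if the timestamps are equal OR both are NaN
# (NaN == NaN is False, so the null case needs its own term).
df['same_prev'] = (df['Lmax'] == df['Lmax_prev']).astype(int) + \
                  (df['Lmax'].isnull() & df['Lmax_prev'].isnull()).astype(int)

print(df['same_prev'].tolist())  # [0, 1, 0, 1]
```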
## Calculate features based on base line definition
```
# First get max per line for all train and test samples
df = pd.DataFrame(columns=['L0max','L1max','L2max','L3max'], index=ids.Id)
for l in range(4):
col_date = [int(i) for i in lut[lut['line']==l].col_dat.values if not np.isnan(i)]
df['L{}max'.format(l)] = dat_data[:, col_date].max(1).todense().A1
df['L{}max'.format(l)].replace(0, np.nan, inplace=True)
    df['L{}max'.format(l)] = df['L{}max'.format(l)].round(2)
# Get row index as column to check sorting afterwards
df.reset_index(inplace=True)
df.reset_index(inplace=True)
# Sort by ID
df.sort_values(['Id'], inplace=True)
for col in df.columns:
df[col + '_prev'] = df[col].shift(1)
df[col + '_next'] = df[col].shift(-1)
df.set_index('Id', inplace=True)
feat_cols = []
for l in range(4):
df['sameL{}_next'.format(l)] = 2 * (df['L{}max'.format(l)]==df['L{}max_next'.format(l)]).astype(int) + \
1 * ((df['L{}max'.format(l)].isnull()) & (df['L{}max_next'.format(l)].isnull())).astype(int)
df['sameL{}_prev'.format(l)] = 2 * (df['L{}max'.format(l)]==df['L{}max_prev'.format(l)]).astype(int) + \
1 * ((df['L{}max'.format(l)].isnull()) & (df['L{}max_prev'.format(l)].isnull())).astype(int)
feat_cols += ['sameL{}_prev'.format(l), 'sameL{}_next'.format(l)]
df.sort_values('index', inplace=True)
feat_cols
df[feat_cols].to_csv(os.path.join(const.DATA_PATH, 'feat_set_jayjay_same_L_new_{}.csv'.format(use_sets)), index_label='ID')
```
## Calculate features based on new line definition
```
line_V2s = lut['line_V2'].unique()
print(line_V2s)
# First get max per line for all train and test samples
df = pd.DataFrame(columns=['L{}_V2_MAX'.format(x) for x in line_V2s], index=ids.Id)
for l in line_V2s:
col_date = [int(i) for i in lut[lut['line_V2']==l].col_dat.values if not np.isnan(i)]
df['L{}_V2_MAX'.format(l)] = dat_data[:, col_date].max(1).todense().A1
df['L{}_V2_MAX'.format(l)].replace(0, np.nan, inplace=True)
# Get row index as column to check sorting afterwards
df.reset_index(inplace=True)
df.reset_index(inplace=True)
# Sort by ID
df.sort_values(['Id'], inplace=True)
for col in df.columns:
df[col + '_prev'] = df[col].shift(1)
df[col + '_next'] = df[col].shift(-1)
df.set_index('Id', inplace=True)
feat_cols = []
for l in line_V2s:
df['sameL{}_V2_next'.format(l)] = 2 * (df['L{}_V2_MAX'.format(l)]==df['L{}_V2_MAX_next'.format(l)]).astype(int) + \
1 * ((df['L{}_V2_MAX'.format(l)].isnull()) & (df['L{}_V2_MAX_next'.format(l)].isnull())).astype(int)
df['sameL{}_V2_prev'.format(l)] = 2 * (df['L{}_V2_MAX'.format(l)]==df['L{}_V2_MAX_prev'.format(l)]).astype(int) + \
1 * ((df['L{}_V2_MAX'.format(l)].isnull()) & (df['L{}_V2_MAX_prev'.format(l)].isnull())).astype(int)
feat_cols += ['sameL{}_V2_prev'.format(l), 'sameL{}_V2_next'.format(l)]
df.sort_values('index', inplace=True)
df[feat_cols].to_csv(os.path.join(const.DATA_PATH, 'feat_set_V2_same_L_new_{}.csv'.format(use_sets)), index_label='ID')
```
# Get data from CSVs
In this exercise, you'll create a data frame from a CSV file. The United States makes available CSV files containing tax data by ZIP or postal code, allowing us to analyze income information in different parts of the country. We'll focus on a subset of the data, vt_tax_data_2016.csv, which has select tax statistics by ZIP code in Vermont in 2016.
To load the data, you'll need to import the pandas library, then read vt_tax_data_2016.csv and assign the resulting data frame to a variable. Then we'll have a look at the data.
```
# Import pandas as pd
import pandas as pd
# Read the CSV and assign it to the variable data
data = pd.read_csv('../datasets/vt_tax_data_2016.csv')
# View the first few lines of data
data.head()
```
# Get data from other flat files
While CSVs are the most common kind of flat file, you will sometimes find files that use different delimiters. read_csv() can load all of these with the help of the sep keyword argument. By default, pandas assumes that the separator is a comma, which is why we do not need to specify sep for CSVs.
The version of Vermont tax data here is a tab-separated values file (TSV), so you will need to use sep to pass in the correct delimiter when reading the file. Remember that tabs are represented as \t. Once the file has been loaded, the remaining code groups the N1 field, which contains income range categories, to create a chart of tax returns by income category.
```
import matplotlib.pyplot as plt
import pandas as pd
# This copy of the data is comma-separated; for the TSV version described above you would pass sep='\t'
data = pd.read_csv('../datasets/vt_tax_data_2016.csv', sep=',')
# Plot the total number of tax returns by income group
counts = data.groupby("agi_stub").N1.sum()
counts.plot.bar()
plt.show()
```
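As a self-contained sketch of the `sep` argument (using an in-memory buffer and made-up values rather than the course's data file):

```python
import io
import pandas as pd

# Two tab-separated rows; read_csv handles them once sep='\t' is given.
tsv = "zipcode\tN1\n05341\t100\n05342\t240\n"
data = pd.read_csv(io.StringIO(tsv), sep='\t')

print(data.shape)          # (2, 2)
print(list(data.columns))  # ['zipcode', 'N1']
```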
# Import a subset of columns
The Vermont tax data contains 147 columns describing household composition, income sources, and taxes paid by ZIP code and income group. Most analyses don't need all these columns. In this exercise, you will create a data frame with fewer variables using read_csv()'s usecols argument.
Let's focus on household composition to see if there are differences by geography and income level. To do this, we'll need columns on income group, ZIP code, tax return filing status (e.g., single or married), and dependents. The data uses codes for variable names, so the specific columns needed are in the instructions.
pandas has already been imported as pd.
```
# Import pandas as pd
import pandas as pd
# Create list of columns to use
cols = ['zipcode', 'agi_stub', 'mars1', 'MARS2', 'NUMDEP']
# Create data frame from csv using only selected columns
data = pd.read_csv("../datasets/vt_tax_data_2016.csv", usecols=cols)
# View counts of dependents and tax returns by income level
print(data.groupby("agi_stub").sum())
```
# Import a file in chunks
When working with large files, it can be easier to load and process the data in pieces. Let's practice this workflow on the Vermont tax data.
The first 500 rows have been loaded as vt_data_first500. You'll get the next 500 rows. To do this, you'll employ several keyword arguments: nrows and skiprows to get the correct records, header to tell pandas the data does not have column names, and names to supply the missing column names. You'll also want to use the list() function to get column names from vt_data_first500 to reuse.
pandas has been imported as pd.
```
vt_data_first500 = pd.read_csv("../datasets/vt_tax_data_2016.csv",
nrows=500)
# Create data frame of next 500 rows with labeled columns
vt_data_next500 = pd.read_csv("../datasets/vt_tax_data_2016.csv",
nrows=500,
skiprows=500,
header=None,
names=vt_data_first500.columns)
# View the Vermont data frames to confirm they're different
print(vt_data_first500.head())
print(vt_data_next500.head())
```
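The chunking arguments can be sketched on a hypothetical in-memory file instead of the course data:

```python
import io
import pandas as pd

csv = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(6))

first = pd.read_csv(io.StringIO(csv), nrows=3)
# Skip the header row plus the 3 rows already read, and reuse the column
# names from the first chunk, since header=None leaves the rest unnamed.
rest = pd.read_csv(io.StringIO(csv), skiprows=4, header=None,
                   names=list(first.columns))

print(first['a'].tolist())  # [0, 1, 2]
print(rest['a'].tolist())   # [3, 4, 5]
```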
# Specify data types
When loading a flat file, pandas infers the best data type for each column. Sometimes its guesses are off, particularly for numbers that represent groups or qualities instead of quantities.
Looking at the data dictionary for vt_tax_data_2016.csv reveals two such columns. The agi_stub column contains numbers that correspond to income categories, and zipcode has 5-digit values that should be strings -- treating them as integers means we lose leading 0s, which are meaningful. Let's specify the correct data types with the dtype argument.
pandas has been imported for you as pd.
```
# Load csv with no additional arguments
data = pd.read_csv("../datasets/vt_tax_data_2016.csv")
# Print the data types
print(data.dtypes)
# Create dict specifying data types for agi_stub and zipcode
data_types = {'agi_stub': 'category', 'zipcode': 'str'}
# Load csv using dtype to set correct data types
data = pd.read_csv("../datasets/vt_tax_data_2016.csv", dtype=data_types)
# Print data types of resulting frame
print(data.dtypes.head())
```
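A self-contained sketch of the leading-zero problem on toy data (not the Vermont file):

```python
import io
import pandas as pd

csv = "zipcode,agi_stub\n05301,1\n05341,2\n"

# Default inference turns zipcode into an integer and drops the leading 0.
inferred = pd.read_csv(io.StringIO(csv))
print(inferred['zipcode'].iloc[0])   # 5301

# dtype keeps the ZIP code intact as a string.
typed = pd.read_csv(io.StringIO(csv), dtype={'zipcode': 'str'})
print(typed['zipcode'].iloc[0])      # '05301'
```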
# Set custom NA values
Part of data exploration and cleaning consists of checking for missing or NA values and deciding how to account for them. This is easier when missing values are treated as their own data type, and there are pandas functions that specifically target such NA values. pandas automatically treats some values as missing, but we can pass additional NA indicators with the na_values argument. Here, you'll do this to ensure that invalid ZIP codes in the Vermont tax data are coded as NA.
pandas has been imported as pd.
```
# Create dict specifying that 0s in zipcode are NA values
null_values = {'zipcode': 0}
# Load csv using na_values keyword argument
data = pd.read_csv("../datasets/vt_tax_data_2016.csv",
na_values=null_values)
# View rows with NA ZIP codes
print(data[data.zipcode.isna()])
```
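A minimal sketch of per-column `na_values` on toy data:

```python
import io
import pandas as pd

csv = "zipcode,N1\n0,100\n05301,240\n"

# Treat 0 in the zipcode column (and only there) as missing.
data = pd.read_csv(io.StringIO(csv), na_values={'zipcode': 0})

print(data['zipcode'].isna().tolist())  # [True, False]
```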
# Skip bad data
In this exercise you'll use read_csv() parameters to handle files with bad data, like records with more values than columns. By default, trying to import such files triggers a specific error, pandas.io.common.CParserError.
Some lines in the Vermont tax data here are corrupted. In order to load the good lines, we need to tell pandas to skip errors. We also want pandas to warn us when it skips a line so we know the scope of data issues.
pandas has been imported as pd. The exercise code will try to read the file. If there is a pandas.io.common.CParserError, the code in the except block will run.
```
try:
# Import the CSV without any keyword arguments
data = pd.read_csv('../datasets/vt_tax_data_2016_corrupt.csv')
# View first 5 records
print(data.head())
except pd.io.common.CParserError:
print("Your data contained rows that could not be parsed.")
try:
# Import CSV with error_bad_lines set to skip bad records
data = pd.read_csv("vt_tax_data_2016_corrupt.csv",
error_bad_lines=False)
# View first 5 records
print(data.head())
except pd.io.common.CParserError:
print("Your data contained rows that could not be parsed.")
try:
# Set warn_bad_lines to issue warnings about bad records
data = pd.read_csv("vt_tax_data_2016_corrupt.csv",
error_bad_lines=False,
warn_bad_lines=True)
# View first 5 records
print(data.head())
except pd.io.common.CParserError:
print("Your data contained rows that could not be parsed.")
```
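Note that in pandas 1.3 and later, the `error_bad_lines`/`warn_bad_lines` pair was replaced by a single `on_bad_lines` argument; a sketch of skipping a malformed record on toy data (assuming a recent pandas version):

```python
import io
import pandas as pd

# The second data row has three fields but the header declares two.
corrupt = "a,b\n1,2\n3,4,5\n6,7\n"

data = pd.read_csv(io.StringIO(corrupt), on_bad_lines='skip')

print(data.shape)  # (2, 2) - the bad row was dropped
```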
# Microbiome experiment step-by-step analysis
This is a jupyter notebook example of how to load, process and plot data from a microbiome experiment using Calour.
## Setup
### Import the calour module
```
import calour as ca
```
### (optional) Set the level of feedback messages from calour
can use:
* 1 for debug (lots of feedback on each command)
* 11 for info (useful information from some commands)
* 21 for warning (just warning messages)
The Calour default is warning (21)
```
ca.set_log_level(11)
```
### Also enable interactive plots inside the jupyter notebook
```
%matplotlib notebook
```
## Loading the data
For an amplicon experiment we use **ca.read_amplicon()**
First parameter is the location+name of the biom table file (can be hdf5/json/txt biom table - see here for details)
Second (optional) parameter is the sample mapping file location+name. First column should be the sample id (identical to the sample ids in the biom table). The remaining columns are information fields about each sample.
normalize=XXX : tells calour to rescale each sample to XXX reads (by dividing each feature frequency by the total number of reads in the sample and multiplying by XXX). Alternatively, can use normalize=None to skip normalization (i.e. in the case the biom table is already rarified)
min_reads=XXX : throw away samples with less than min_reads total (before normalization). Useful to get rid of samples with small number of reads. Can use min_reads=None to keep all samples.
We will use the data from:
Giloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.
Reduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.
Microbiome, 4(1), p.30.
```
dat=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
print(dat)
```
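Under the hood, `normalize=10000` is total-sum scaling; assuming a plain samples-by-features count table, the idea can be sketched with pandas alone (the counts here are hypothetical):

```python
import pandas as pd

# Rows = samples, columns = features (raw read counts).
counts = pd.DataFrame({'featA': [25, 250], 'featB': [75, 750]})

# Divide each sample by its own total, then rescale to 10,000 reads.
normalized = counts.div(counts.sum(axis=1), axis=0) * 10000

print(normalized.sum(axis=1).tolist())  # [10000.0, 10000.0]
```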
## Process the data
### Get rid of the features (bacteria) with small amount of reads
We throw away all features with total reads (over all samples) < 10 (after each sample was normalized to 10k reads/sample). So a bacteria present (with 1 read) in 10 samples will be kept, as well as a bacteria present in only one sample, with 10 reads in this sample.
Note alternatively we could filter based on mean reads/sample or fraction of samples where the feature is present. Each method filters away slightly different bacteria. See **filtering** notebook for details on the filtering functions.
```
dat=dat.filter_sum_abundance(10)
```
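On a plain count table, this filter amounts to keeping the features whose summed reads reach the cutoff; a sketch with hypothetical data mirroring the two examples above:

```python
import pandas as pd

# Hypothetical normalized counts: rows = samples, columns = features.
counts = pd.DataFrame({'bact1': [1] * 10,        # 1 read in 10 samples -> total 10, kept
                       'bact2': [0] * 9 + [10],  # 10 reads in one sample -> kept
                       'bact3': [0] * 9 + [5]})  # total 5 -> filtered away

kept = counts.loc[:, counts.sum() >= 10]

print(list(kept.columns))  # ['bact1', 'bact2']
```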
### Cluster (reorder) the features so similarly behaving bacteria are close to each other
Features are clustered (hierarchical clustering) based on Euclidean distance between features (over all samples) after normalizing each feature to mean 0, std 1. For more details and examples, see the **sorting** notebook or the **cluster_features documentation**.
* Note that if we have a lot of features, clustering is slow, so it is recommended to first filter away the non-interesting features.
```
datc=dat.cluster_features()
```
### Sort the samples according to physical functioning and Disease state
Note that order within each group of similar value is maintained. We first sort by physical functioning, then sort by the disease state. So within each disease state, samples will still be sorted by physical functioning.
```
datc=datc.sort_samples('Physical_functioning')
datc=datc.sort_samples('Subject')
```
## Plotting the data
Columns (x-axis) are the samples, rows (y-axis) are the features. We will show on the x-axis the host-individual field of each sample.
we will use the jupyter notebook GUI so we will see the interactive plot in the notebook. Alternatively we could use the qt5 GUI to see the plot in a separate standalone window.
A few cool things we can do with the interactive plot:
* Click with the mouse on the heatmap to see details about the feature/sample selected (including information from **dbBact**).
* use SHIFT+UP or SHIFT+DOWN to zoom in/out on the features
* use UP/DOWN to scroll up/down on the features
* use SHIFT+RIGHT or SHIFT+LEFT to zoom in/out on the samples
* use RIGHT/LEFT to scroll left/right on the samples
See **here** for more details
```
datc.plot(sample_field='Subject', gui='jupyter')
```
### Adding a field to the top bar
Now let's add the values of the "Sex" field into the xbar on top
First we'll also sort by sex, so values will be continuous (note we then sort by the disease state to get the two groups separated).
```
datc=datc.sort_samples('Sex')
datc=datc.sort_samples('Subject')
datc.plot(sample_field='Subject', gui='jupyter',barx_fields=['Sex'])
```
### Differential abundance testing
Let's look for bacteria separating sick from healthy
We ask it to find all bacteria significantly different between samples with 'Control' and 'Patient' in the 'Subject' field.
By default calour uses the mean of the ranks of each feature (over all samples), with dsFDR multiple hypothesis correction.
For more information, see **notebook** and **function doc**
```
dd=datc.diff_abundance(field='Subject',val1='Control',val2='Patient', random_seed=2018)
```
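The rank-based statistic can be sketched on a toy feature; this omits Calour's permutation test and dsFDR correction and only shows the effect-size idea:

```python
import pandas as pd

# One feature measured in six samples, three per group.
vals = pd.Series([1, 2, 3, 10, 11, 12])
group = pd.Series(['Control'] * 3 + ['Patient'] * 3)

# Rank the values over all samples, then compare group mean ranks.
ranks = vals.rank()
effect = ranks[group == 'Patient'].mean() - ranks[group == 'Control'].mean()

print(effect)  # 3.0 -> Patient samples occupy the higher ranks
```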
### Plotting the differentially abundant features
Let's plot to see the behavior of these bacteria.
The output of diff_abundance is an Experiment with only the significant bacteria, which are sorted by the effect size. On the bottom is the bacteria with the largest effect size (higher in Control compared to Patient).
```
dd.plot(sample_field='Subject', gui='jupyter')
```
### dbBact term enrichment
We can ask what is special about the bacteria significantly higher in the Control vs. the Patient group and vice versa.
We supply the parameter `ignore_exp=[12]` to ignore annotations regarding this experiment (expid=12) since it is already in the dbBact database.
* Note since we need to get the per-feature annotations from dbBact, we need a live internet connection to run this command.
```
ax, enriched=dd.plot_diff_abundance_enrichment(term_type='combined',ignore_exp=[12])
```
The enriched terms are in a calour experiment class (terms are features, bacteria are samples), so we can see the
list of enriched terms with the p-value (pval) and effect size (odif)
```
enriched.feature_metadata
```
```
from __future__ import print_function
import erppeek
SERVER = 'http://localhost:8069'
DATABASE = 'desarrollo'
USERNAME = 'companyfirebird@gmail.com'
PASSWORD = 'platano-1'
```
The documentation needed to complete this exercise can be found in the [ERPpeek](http://erppeek.readthedocs.org/en/latest/tutorial.html) documentation.
### Task 1 - Connection
Show that you can connect to an Odoo instance and list all of its databases.
```
client = erppeek.Client(server=SERVER)
for database in client.db.list():
    print('Database: %r' % (database,))
```
### Task 2 - Data dump
Show that you can print the `id` and name of every user (records of the `res.users` model).
```
client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
proxy = client.model('res.users')
users = proxy.browse([])
for user in users:
a = "{user.id} {user.name}".format(user=user)
print(a)
```
### Task 3 - Create and configure a database
Show that you can create a database, list all modules installed by default, and install a given module if it is not already present.
```
DATABASE = 'trabajoPython'
ADMIN_PASSWORD = 'admin'
client = erppeek.Client(server=SERVER)
if DATABASE not in client.db.list():
    print("The database does not exist and will be created...")
    client.create_database(ADMIN_PASSWORD, DATABASE)
    print("Database created")
# List all modules installed by default
installed_modules = client.modules(installed=True)
print("List of installed modules:")
for module in installed_modules['installed']:
    print(module)
# Check whether the CRM module is present
print("Checking the CRM module...")
modules = client.modules('crm', installed=False)
if 'crm' in modules['uninstalled']:
    # If it is not installed, install it
    client.install('crm')
    print("CRM module installed.")
else:
    # The module is already installed
    print("The CRM module was already installed...")
```
### Task 4 - Explore a model
Show that you can list all fields of the `res.users` model, including name, type and label.
```
DATABASE = 'desarrollo'
client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
proxy = client.model('res.users')
# fields_get() returns each field's attributes, including its type and label
for name, attrs in sorted(proxy.fields_get().items()):
    print("Field: {}, Type: {}, Label: {}".format(name, attrs['type'], attrs['string']))
```
### Task 5 - Populate a model
Write the code needed to migrate the users from one database to another. It is not necessary to migrate every field; a proof of concept is enough.
```
DATABASE1 = 'desarrollo'
DATABASE2 = 'sandbox'
USERNAME1 = 'companyfirebird@gmail.com'
PASSWORD1 = 'platano-1'
USERNAME2 = 'admin'
PASSWORD2 = 'admin'
origen = erppeek.Client(SERVER, DATABASE1, USERNAME1, PASSWORD1)
destino = erppeek.Client(SERVER, DATABASE2, USERNAME2, PASSWORD2)
proxy1 = origen.model('res.users')
proxy2 = destino.model('res.users')
users = proxy1.browse([])
print("Migrating users from source to destination...")
for user in users:
    login = user.login
    name = user.name
    password = user.password
    proxy2.create({'login': login, 'name': name, 'password': password})
    print("User: " + name + " created successfully")
print("All users were migrated successfully.")
```
# Case Study 7
__Team Members:__ Amber Clark, Andrew Leppla, Jorge Olmos, Paritosh Rai
# Content
* [Objective](#objective)
* [Data Evaluation](#data-evaluation)
- [Loading Data](#loading-data)
- [Data Summary](#data-summary)
- [Missing Values](#missing-values)
- [Exploratory Data Analysis (EDA)](#eda)
* [Model Preparations](#model-preparations)
- [Sampling & Scaling Data](#sampling-scaling-data)
- [Evaluation Metrics](#proposed-metrics)
* [Model Building & Evaluations](#model-building)
- [Results](#performance-analysis)
* [Conclusion](#conclusion)
- [Final Model Proposal](#final_model)
- [Examining Feature Importance](#examining-feature-importance)
- [Future Considerations, Model Enhancements and Alternative Modeling Approaches](#model-enhancements)
## Objective: <a id='objective'>
The objective of this case study is to classify a binary target in an anonymous dataset with the goal of reducing monetary losses as much as possible for the customer.
# Data Evaluation <a id='data-evaluation'>
## Loading Data <a id='loading-data'>
```
# standard libraries
import os
import pandas as pd
import numpy as np
from IPython.display import Image
from abc import ABC, abstractmethod
import time
import copy
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from tabulate import tabulate
from IPython.display import clear_output
import xgboost
# data pre-processing
from scipy.io import arff
#from sklearn.model_selection import train_test_split
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.impute._base import _BaseImputer
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection._split import BaseShuffleSplit
from sklearn.datasets import load_digits
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier
# prediction models
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.svm._base import BaseSVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.metrics import AUC
# import warnings filter
import warnings
warnings.filterwarnings('ignore')
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
class FilePathManager:
def __init__(self, local_dir: str):
self.local_dir = local_dir
def retrieve_full_path(self):
return os.getcwd()+'/'+self.local_dir
class Loader:
df = pd.DataFrame()
def load_data(self, file_name):
pass
def get_df(self):
pass
def size(self):
return len(self.df)
from typing import Callable
class CSVLoader(Loader):
def __init__(self, file_path_manager: FilePathManager):
self.file_path_manager = file_path_manager
def load_data(self, _prepare_data: Callable[[pd.DataFrame], pd.DataFrame] = None):
self.df = pd.read_csv(self.file_path_manager.retrieve_full_path())
if _prepare_data:
self.df = _prepare_data(self.df)
def get_df(self):
        return self.df
def size(self):
return len(self.df)
def clean_data(df):
    df['y'] = df['y'].astype(int)
    # regex=False so '%' and '$' are treated literally; as a regex, '$' is an
    # end-of-string anchor and would remove nothing
    df['x32'] = df['x32'].str.replace('%', '', regex=False).astype(float)
    df['x37'] = df['x37'].str.replace('$', '', regex=False).astype(float)
    return df
loader = CSVLoader(FilePathManager('final_project(5).csv'))
loader.load_data(clean_data)
```
## Data Summary <a id='data-summary'>
The dataset consists of fifty (50) features and a binary target class. There is no metadata or other descriptive information for the dataset, and the fifty feature labels are numbered from "x0" to "x49". There are 160,000 observations in the dataset; less than 0.03% of the features were missing data, and the imputation of these missing values is described below in the Missing Data section. Most of the features provided are numeric, but five were initially imported as text features.
Three of the five text features were identified as continents, months of the year, and days of the week. The values were cleaned up for spelling correction and consistency. The other two text object columns were numeric columns with a special character introduced in the data; column x32 had a trailing "%" and column x37 had a leading "$". These characters were removed so that these columns would be treated as numeric.
## Missing Values <a id='missing-values'>
All of the variables, except the target class, had missing values. The chart below depicts the number of observations missing values for each feature. Note: even though the plot doesn't show missing values for the categorical features, they do have missing values; these are represented as NaNs and are therefore absent from the plot.
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/missing_values.png'></img>
The number of missing values was consistently around 20-40 missing observations for each column (less than 0.03% of 160,000 observations). For the logistic regression and neural network models, the mean of each column was used to impute the missing values for the numeric data, and the mode of each column was used for the missing categorical features.
For the XGBoost model, the algorithm can automatically handle missing values and find their optimal split for modeling, so no imputation was done prior to modeling.
## Exploratory Data Analysis (EDA) <a id='eda'>
The numeric data was examined to view the scales of the variables, and the data needs normalization to be effectively used in most types of models without issues.
For two model types, logistic regression and neural network, the categorical data for the three text columns were one-hot encoded to produce binary features for each of the values within those variables. In this data, there were three continents, twelve months, and five days of the week, so the one-hot encoding process did not contribute to creating an excess of sparsity in the dataframe that would be used for modeling. After one-hot encoding, the total number of explanatory features has increased to 67.
For the third model type, XGBoost, the categorical data were not one-hot encoded but rather label-encoded so the tree-based algorithm could split the data effectively.
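The two encoding strategies can be illustrated with a minimal sketch (toy data below, not the actual x24 values):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Toy stand-in for a categorical column such as x24 (continent)
df = pd.DataFrame({"x24": ["asia", "america", "asia", "europe"]})

# One-hot encoding (logistic regression, neural network):
# each category becomes its own binary indicator column
one_hot = pd.get_dummies(df, columns=["x24"])

# Label encoding (XGBoost): each category maps to one integer,
# which a tree-based model can split on directly
label = LabelEncoder().fit_transform(df["x24"].astype(str))
```

One-hot encoding avoids imposing an artificial ordering on the categories, while label encoding keeps the feature count unchanged, which suits tree-based splits.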
### Balance of Target
The target classes are considered balanced in the dataset, with roughly 40:60 split between the positive and negative classes, as depicted below.
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/y_dist.png'></img>
### Categorical Variables
The three categorical variables were x24 (continent), x29 (month), and x30 (weekday). Asia was disproportionately represented for continent, and months and weekday were both approximately normally distributed when ordered by time.
Split by target class, the distributions of the categorical variables were essentially unchanged, so these are likely not strong predictors of the target variable.
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/cat_feature_dist.png'></img>
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/cat_feature_dist_by_y.png'></img>
### Continuous Variables - Scaling
Variable x37 (with \\$ values) had a very wide scale compared to other variables (-\\$5000 to \\$6000). The remaining variables still had varied scales based on the plot below. All continuous features were scaled using StandardScaler to ensure features were appropriately weighted for Logistic Regression feature importance. Scaling the data was less important for XGBoost (tree-based ensemble) and Neural Network models.
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/box_plot_ex_x37.png'></img>
# Model Preparations <a id='model-preparations'/>
```
class BaseImputer:
def fit(self, X, y=None):
pass
def transform(self, X):
pass
class BaseModel:
def fit(self, X, y, sample_weight=None):
pass
def predict(self, X):
pass
class MeanModeSimpleImputer(BaseImputer):
    def __init__(self):
        # per-instance defaults (a class-level dict would be shared across instances)
        self.defaults = {}
    def fit(self, X, y=None):
        for col in ['x24', 'x29', 'x30']:
            # mode() returns a Series; take the first (most frequent) value
            self.defaults[col] = X[col].mode()[0]
        for col in X.columns.difference(['x24', 'x29', 'x30', 'y']):
            self.defaults[col] = X[col].mean()
    def transform(self, X):
        X_transform = copy.deepcopy(X)
        for col in X_transform.columns.difference(['y']):
            X_transform[col].fillna(value=self.defaults[col], inplace=True)
        return X_transform
```
## Sampling and Scaling Data <a id='sampling-scaling-data'/>
```
class Modeling:
_X_train_fitted = None
_X_test_fitted = None
_y_train = None
_y_test = None
_y_preds = None
_y_preds_proba = None
def __init__(self, data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel, scaler = None, encoder = None):
self._data = data
self._target_name = target_name
self._shuffle_splitter = shuffle_splitter
self._imputer = imputer
self._model = model
self._encoder = encoder
self._X, self._y = self._split_data()
self._scaler = scaler
@property
def X(self):
return self._X
@property
def y(self):
return self._y
@property
def model(self):
return self._model
@model.setter
def model(self, model):
self._model = model
@property
def X_train(self):
return self._X_train_fitted
@property
def X_test(self):
return self._X_test_fitted
@property
def y_train(self):
return self._y_train
@property
def y_test(self):
return self._y_test
@property
def y_preds(self):
return self._y_preds
def _split_data(self):
X = self._data.copy()
return X.drop([self._target_name], axis=1) , X[self._target_name]
def _shuffle_split(self):
X = self.X
y = self.y
for train_index, test_index in self._shuffle_splitter.split(X,y):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y[train_index], y[test_index]
return X_train, X_test, y_train, y_test
def _fit_imputer(self, train):
if self._imputer is not None:
self._imputer.fit(train)
    def _fit_scaler(self, train, cont_vars=None):
        transform_cols = train.columns if cont_vars is None else cont_vars
        if self._scaler is not None:
            self._scaler.fit(train[transform_cols])
def _impute_data(self, X: pd.DataFrame):
if self._imputer is not None:
return pd.DataFrame(self._imputer.transform(X), columns = self.X.columns, index = X.index)
return X
def _scale_data(self, X: pd.DataFrame, cont_vars = None):
transform_cols = None
if cont_vars is None:
transform_cols = X.columns
else:
transform_cols = cont_vars
scaled_data = X[transform_cols]
if self._scaler is not None:
scaled_data = pd.DataFrame(self._scaler.transform(scaled_data), columns = transform_cols, index =X.index)
X[transform_cols]=scaled_data
return X
def _encode_data(self):
df = self.X.copy()
cont_vars = df.describe().columns
cat_vars = set(df.columns) - set(cont_vars)
for column in [*cat_vars]:
df[column] = self._encoder.fit_transform(df[column].astype(str))
self._X = df
return cont_vars, cat_vars
def prepare(self):
cont_vars = None
if self._encoder is not None:
cont_vars, _ = self._encode_data()
X_train, X_test, y_train, y_test = self._shuffle_split()
self._fit_imputer(X_train)
X_train = self._impute_data(X_train)
X_test = self._impute_data(X_test)
self._fit_scaler(X_train, cont_vars)
self._X_train_fitted = self._scale_data(X_train, cont_vars)
self._X_test_fitted = self._scale_data(X_test, cont_vars)
self._y_train = y_train
self._y_test = y_test
def prepare_and_train(self):
self.prepare()
return self.train()
def train(self):
self._model.fit(self.X_train, self.y_train)
self._y_preds = self._model.predict(self.X_train)
self._y_preds_proba = self._model.predict_proba(self.X_train)
return self.metrics(self.y_train, self.y_preds, self._y_preds_proba)
def test(self):
return self.metrics(self.y_test, self._model.predict(self.X_test), self._model.predict_proba(self.X_test))
@abstractmethod
def metrics(self, y_true = None, y_pred = None, y_preds_proba = None):
pass
class ClassificationModeling(Modeling):
def __init__(self,
data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel,
scaler = None,
encoder = None,
beta: int = 1,
classification: str = 'binary'):
super().__init__(data, target_name, shuffle_splitter, imputer, model, scaler, encoder)
self.beta = beta
self.classification = classification
@abstractmethod
def metrics(self, y_true = None, y_pred = None, y_preds_proba=None):
pass
from typing import Type, TypeVar
class TuningClassificationModeling(ClassificationModeling):
    TClass = None
    all_models = []
def __init__(self,
data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel,
scaler = None,
encoder = None,
beta: int = 1,
classification: str = 'binary',
classification_type: str = 'logistic'):
super().__init__(data, target_name, shuffle_splitter, imputer, model, scaler, encoder, beta, classification)
if classification_type == 'logistic':
TClass = TypeVar("TClass", bound=LogisticRegression)
elif classification_type == 'xgb':
TClass = TypeVar("TClass", bound=XGBClassifier)
elif classification_type == 'neural':
TClass = TypeVar("TClass", bound=NNModel)
def parameter_tuning(self, params, class_to_instantiate: Type[TClass]):
list_of_models = []
combination = []
params_base = {}
output = []
for key, value in params.items():
if isinstance(value, list):
combination.append((key,value))
else:
params_base[key]=value
        result = {}
        if len(combination) > 0:
            result = TuningClassificationModeling.get_combinations(combination)
            for r in result:
                list_of_models.append(class_to_instantiate(**{**params_base, **r}))
        else:
            # no list-valued parameters: build a single model from the base params
            list_of_models.append(class_to_instantiate(**params_base))
for a_model in list_of_models:
self.model = a_model
startTrain = time.time()
train_metrics = self.train()
endTrain = time.time()
test_metrics = self.test()
endTest = time.time()
train_time = endTrain - startTrain
test_time = endTest - endTrain
output.append({'model': a_model, 'train_metrics': {**train_metrics,**{'elapsed_time':train_time}}, 'test_metrics': {**test_metrics,**{'elapsed_time':test_time}}})
self.all_models = output
return output
def find_best_model(self, metric):
max_accuracy = self.all_models[0]['test_metrics'][metric]
location = 0
for indx, output_metrics in enumerate(self.all_models):
if max_accuracy < output_metrics['test_metrics'][metric]:
max_accuracy = output_metrics['test_metrics'][metric]
location = indx
elif max_accuracy == output_metrics['test_metrics'][metric]:
if output_metrics['test_metrics']['elapsed_time'] < self.all_models[location]['test_metrics']['elapsed_time']:
location = indx
return self.all_models[location]
@staticmethod
def get_combinations(tuples):
length = len(tuples)
if length > 1:
total_params = []
tuple_copy = tuples.copy()
a_tuple = tuple_copy.pop(0)
params_list = TuningClassificationModeling.get_combinations(tuple_copy)
for value in a_tuple[1]:
for a_params in params_list:
temp = { a_tuple[0]: value}
total_params.append({**temp, **a_params})
return total_params
else:
params_list = []
a_tuple = tuples[0]
for value in a_tuple[1]:
temp = {}
temp[a_tuple[0]] = value
params_list.append(temp)
return params_list
def metrics(self, y_true = None, y_pred = None, y_pred_proba=None):
if y_true is None and y_pred is None:
y_true = self.y_train
y_pred = self.y_preds
conf_matrix = confusion_matrix(y_true, y_pred)
return {
'matrix': conf_matrix,
'auc': roc_auc_score(y_true, y_pred),
'accuracy': round(accuracy_score(y_true, y_pred), 5),
'precision': precision_score(y_true, y_pred, average=self.classification),
'recall': recall_score(y_true, y_pred, average=self.classification),
'f1': f1_score(y_true, y_pred),
'cost': TuningClassificationModeling.cost_calc(conf_matrix),
'y_pred': y_pred,
'y_pred_proba': y_pred_proba
}
@staticmethod
def cost_calc(conf_matrix):
cost_matrix = np.array([[0,-100],[-25,0]])
cost = np.sum(cost_matrix*conf_matrix)/np.sum(conf_matrix)
return cost
class NNModel:
model = None
epoch = 50
batch_size = 32
    loss = 'BinaryCrossentropy'  # no trailing comma: it would turn this into a tuple
metric = 'accuracy'
optimizer = 'adam'
def __init__(self,**inputs):
self.model = tf.keras.Sequential()
for arg, content in inputs.items():
if arg.startswith('input'):
self.model.add( tf.keras.layers.Input( shape=(content,) ) )
if arg.startswith('layer'):
self.model.add( tf.keras.layers.Dense(content['s'], activation = content['activation']) )
if arg == 'epoch':
self.epoch = content
if arg == 'bs':
self.batch_size = content
if arg == 'optimizer':
self.optimizer = content
if arg == 'loss':
self.loss = content
if arg == 'metric':
self.metric = content
self.model.compile(optimizer=self.optimizer, loss=self.loss, metrics=[self.metric])
print(self.model)
def fit(self, X, y):
self.model.fit(X, y, batch_size=self.batch_size, epochs=self.epoch)
def predict(self, X):
y_pred_proba = self.predict_proba(X)
return pd.Series( (y_pred_proba>0.5).astype(int))
def predict_proba(self, X):
y_pred_proba = self.model.predict(X)
return pd.Series(y_pred_proba.reshape((y_pred_proba.shape[1], y_pred_proba.shape[0]))[0])
def tune_cost_proba(train_proba, test_proba, y_train, y_test, conf_train, conf_test):
    rows = []
    thresh = 0
    for i in range(11):
        # the probabilities passed in are for the negative class, so p < thresh -> predict 1
        yhat_train = pd.Series(train_proba < thresh).astype(int)
        yhat_test = pd.Series(test_proba < thresh).astype(int)
        conf_train = confusion_matrix(y_train, yhat_train)
        conf_test = confusion_matrix(y_test, yhat_test)
        rows.append({"Threshold": thresh,
                     "Train Cost": -TuningClassificationModeling.cost_calc(conf_train),
                     "Test Cost": -TuningClassificationModeling.cost_calc(conf_test),
                     "conf_train": conf_train,
                     "conf_test": conf_test})
        thresh = thresh + 0.05
    # build the frame in one shot (DataFrame.append was removed in pandas 2.0)
    return pd.DataFrame(rows)
```
## Model Metrics <a id='proposed-metrics'/>
AUC (Area Under the Curve) and Cost Per Prediction were the model metrics. The final metric used for model evaluation was Cost per Prediction. This was calculated as follows:
__Cost per Prediction = (-\\$100×FP - \\$25×FN) / (Total # Predictions)__
where FP = false positive, FN = false negative.
The cost of a false positive (predicting 1 when it is actually 0) is \\$100, and the cost of a false negative (predicting 0 when it is actually 1) is \\$25. These costs are normalized by the total number of predictions so the costs can be compared between training and test sets and fairly assessed for any number of future predictions.
Before evaluating the model(s) for Cost per Prediction, the models were tuned to maximize ROC Area Under the Curve (AUC). The ROC (Receiver Operator Characteristic) curve plots the True Positive (TP) rate vs. the False Positive (FP) rate. The Area Under this Curve typically has a range of 0.5 to 1.0. A 50:50 random guess for classification would give an AUC = 0.5 with a diagonal line going from the lower left to upper right. A perfect (ideal) classifier would have an AUC = 1.0 with a line that goes straight up and then straight across.
<img src='https://raw.githubusercontent.com/olmosjorge28/QTW-SPRING-2022/main/ds7333_case_study_7/visuals/ROC_AUC_curve.png' height=400 width=400></img>
AUC was chosen as a standard metric that was quickly and easily implemented during initial model building and assessment. AUC was an appropriate metric given that the target classes are fairly balanced (40:60), and AUC is also independent of the prediction threshold which is discussed in the following paragraph.
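The two extremes described above can be illustrated with a small hypothetical example (toy values, not taken from this dataset):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])

# A perfect ranking places every positive above every negative -> AUC = 1.0
perfect = roc_auc_score(y_true, np.array([0.1, 0.2, 0.8, 0.9]))

# A constant, uninformative score cannot rank the classes at all -> AUC = 0.5
random_like = roc_auc_score(y_true, np.array([0.5, 0.5, 0.5, 0.5]))
```

Because AUC depends only on the ranking of the predicted probabilities, it is unaffected by the choice of prediction threshold.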
Once the models were assessed for AUC, they were further tuned to minimize Cost per Prediction. This was done by adjusting the probability threshold for predicting a positive (1) vs. negative (0) class. The default threshold is 0.5 such that a probability < 0.5 is predicted as a negative class and ≥ 0.5 is predicted as a positive class. This threshold can be adjusted away from 0.5 such that more positive or negative classes are predicted. In this way, the number of FPs vs. FNs can be adjusted to minimize the Cost per Prediction.
# Model Building & Evaluations <a id='model-building'/>
Training and test sets were created from the data using the stratified splitting method to maintain the ratio of the binary outcome, although the class is relatively balanced between the two outcomes. 30% of the data was withheld for the test set, and the explanatory features were normalized using StandardScaler while avoiding data leakage into the test set.
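The split-then-scale order is what prevents leakage; a minimal sketch of the pattern on toy data (the actual pipeline lives in the `Modeling` class above):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.array([0, 1] * 10)

# stratify keeps the class ratio identical in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=12343)

scaler = StandardScaler().fit(X_train)  # fit on the training split only...
X_train_s = scaler.transform(X_train)   # ...then transform both splits, so the
X_test_s = scaler.transform(X_test)     # test data never influences the fit
```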
## Naive Model
Given that false positives are 4 times more costly than false negatives (\\$100 vs. \\$25), a naive model would predict all negative classes to minimize cost. The naive model has a Cost per Prediction of __\\$10.03__.
```
base_model_matrix = [[28741, 0],[19259,0]]
```
#### Naive Cost
```
-TuningClassificationModeling.cost_calc(base_model_matrix)
```
## Logistic Model
Initially, logistic regression was run as a baseline model with fast implementation and high interpretability. This model did not necessarily satisfy the customer requirements of minimizing cost, but it served as a starting point to increase model complexity and improve the model performance. L1 (Lasso) regularization was used for feature selection with the logistic regression model.
### Logistic Regression
```
logistic_modeling = TuningClassificationModeling(loader.get_df(),'y',
StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=12343),
MeanModeSimpleImputer(), LogisticRegression, StandardScaler(), LabelEncoder(), beta=1)
logistic_modeling.prepare()
logistic_result = logistic_modeling.parameter_tuning( {
'penalty':'l1',
'random_state':1,
'solver': 'liblinear',
'C': [0.001, 0.01, 1, 10],
}, LogisticRegression)
```
#### Selecting Best Logistic Regression Model
```
best_logistic_model = logistic_modeling.find_best_model('auc')
best_logistic_model['model']
{ metric: best_logistic_model['train_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
{ metric: best_logistic_model['test_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
```
### Feature Importance
```
lr_tuned = logistic_modeling.find_best_model('auc')
feat_coef = []
feat = zip(logistic_modeling.X_train.columns, lr_tuned['model'].coef_[0])
[feat_coef.append([i,j]) for i,j in feat]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_lr = feat_coef.loc[abs(feat_coef['coef'])>0].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_lr, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.title('LR Feature Importance with L1')
plt.show()
```
#### Tuning Threshold for Lowest Cost
```
def extract_best_model_metrics(model):
return (model.find_best_model('auc')['train_metrics']['y_pred_proba'],
model.find_best_model('auc')['test_metrics']['y_pred_proba'],
model.y_train,
model.y_test,
model.find_best_model('auc')['train_metrics']['matrix'],
model.find_best_model('auc')['test_metrics']['matrix'])
train_proba, test_proba, y_train, y_test, conf_train, conf_test = extract_best_model_metrics(logistic_modeling)
logistic_cost_results = tune_cost_proba(train_proba[:,0], test_proba[:,0], y_train, y_test, conf_train, conf_test)
logistic_cost_results[['Threshold', 'Train Cost','Test Cost' ]]
def plot_cost_tunning(cost_results, threshold):
sns.lineplot(data=cost_results, x='Threshold', y='Train Cost', color='blue')
sns.lineplot(data=cost_results, x='Threshold', y='Test Cost', color='red')
plt.title('Tuning Threshold')
plt.legend(['Train', 'Test'])
plt.axvline(threshold, color='black', ls='--')
plt.show()
plot_cost_tunning(logistic_cost_results, 0.15)
```
#### Best Logistic Model Metrics
LogisticRegression with C=0.001, penalty='l1', and threshold=0.15 yields a cost of __\\$9.76__ per prediction and an AUC of __0.6708__ on the test set.
## XGB Model
Next, XGBoost (eXtreme Gradient Boosting) was used as a more complex nonlinear tree-based model. This model significantly improved performance while maintaining some interpretability with feature importances. However, the XGBoost model overfit the training set such that it achieved a perfect AUC=1.0, and this resulted in a maximum test __AUC=0.9434__.
### Extreme Gradient Boosting
```
xgb_classifier = TuningClassificationModeling(loader.get_df(),'y',
StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=12343),
None, XGBClassifier, None, LabelEncoder(), beta=1,classification_type = 'xgb' )
xgb_classifier.prepare()
xgb_results = xgb_classifier.parameter_tuning( {
'max_depth': [3,6,10],
'learning_rate': [0.05, 0.1],
'n_estimators': [100, 500, 1000],
'colsample_bytree': [0.3, 0.7],
}, XGBClassifier)
```
#### Selecting Best XGB Model
```
best_xgb_model= xgb_classifier.find_best_model('auc')
best_xgb_model['model']
{ metric: best_xgb_model['train_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
{ metric: best_xgb_model['test_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
```
### Feature Importance
```
best_xgb_model = xgb_classifier.find_best_model('auc')['model']
xgboost.plot_importance(best_xgb_model, max_num_features=15)
plt.show()
```
#### Tuning Threshold for Lowest Cost
```
train_proba, test_proba, y_train, y_test, conf_train, conf_test = extract_best_model_metrics(xgb_classifier)
xgb_cost_results = tune_cost_proba(train_proba[:,0], test_proba[:,0], y_train, y_test, conf_train, conf_test)
xgb_cost_results[['Threshold', 'Train Cost','Test Cost' ]]
plot_cost_tunning(xgb_cost_results, 0.15)
```
#### Best XGB Model Metrics
XGB Classifier with max_depth=10, learning_rate=0.1, n_estimators=1000, colsample_bytree=0.7, and threshold=0.15 yields a cost of __\\$2.40__ per prediction and an AUC of __0.9434__ on the test set.
## Neural Network Model
Finally, a Neural Network model was fit on the dataset, and its performance was compared against the rest of the models. This was the most complex model with the least interpretability.
### Neural Network
```
nn_modeling = TuningClassificationModeling(loader.get_df(),'y',
StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=12343),
MeanModeSimpleImputer(), NNModel, StandardScaler(), LabelEncoder(), beta=1,classification_type='neural' )
nn_modeling.prepare()
nn_model_tunning = nn_modeling.parameter_tuning( {
'input':50,
'layer1':{'s':300, 'activation': 'relu'},
'layer2':{'s':200, 'activation': 'relu'},
'layer3':{'s':100, 'activation': 'relu'},
'layer4':{'s':1, 'activation':'sigmoid'},
'loss':'BinaryCrossentropy',
'metric': tf.keras.metrics.AUC(),
'epoch':[10,30,100],
'bs':[10,100,1000,10000],
'optimizer':'adam'
}, NNModel)
```
#### Selecting Best Neural Network Model
```
best_nn_model = nn_modeling.find_best_model('auc')
{
'batch_size': best_nn_model['model'].batch_size,
'epoch': best_nn_model['model'].epoch,
'loss': best_nn_model['model'].loss,
'metric': best_nn_model['model'].metric,
'optimizer': best_nn_model['model'].optimizer,
}
best_nn_model['model'].model.summary()
{ metric: best_nn_model['train_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
{ metric: best_nn_model['test_metrics'][metric] for metric in ['auc', 'cost', 'matrix'] }
```
#### Tuning Threshold for Lowest Cost
```
train_proba, test_proba, y_train, y_test, conf_train, conf_test = extract_best_model_metrics(nn_modeling)
nn_cost_results = tune_cost_proba(1-train_proba, 1-test_proba, y_train, y_test, conf_train, conf_test)
nn_cost_results[['Threshold', 'Train Cost','Test Cost' ]]
plot_cost_tunning(nn_cost_results, 0.05)
```
#### Best Neural Network Metrics
Neural Network model with batch_size=100, epoch=100, loss=BinaryCrossentropy, metric=AUC, optimizer=adam, and threshold=0.05 yields a cost of __\\$1.96__ per prediction and an AUC of __0.9603__ on the test set.
### Results <a id='performance-analysis'>
Below are the results from the three models tried for this dataset and their comparison against predictions using the test dataset.
__Logistic Regression:__ This model was the quickest to train, achieving a test AUC of __0.6708__ and a Cost per Prediction of __\\$9.76__.
__XGBoost:__ This model took the longest to train but provided a significant improvement over logistic regression. It showed a tendency to overfit, with a clear gap between train and test results. It achieved a test AUC of __0.9434__ and a Cost per Prediction of __\\$2.40__.
__Neural Network:__ This model took significantly longer to train than logistic regression but was much faster than XGBoost. It provided a slight improvement over the XGBoost model and did not overfit the training data, achieving a test AUC of __0.9603__ and a Cost per Prediction of __\\$1.96__.
#### Comparisons
The table below compares the key metrics between the models for the test dataset:
| Model |Cost Per Prediction | AUC | # False Positives | # False Negatives |
|-------|-----|-----|-------------------|-------------------|
|Logistic Regression | \\$9.76 | 0.6708 | 163 | 18043 |
|XGBoost | \\$2.40 | 0.9434 | 452 | 2797 |
|Neural Network | \\$1.96 | 0.9603 | 587 | 1422 |
```
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
models = ['Logistic Regression', 'XGBoost', 'Neural Network']
costs = [9.76, 2.40, 1.96]
ax.bar(models, costs)
plt.ylabel("Cost Per Prediction")
plt.show()
```
# Conclusion <a id='conclusion'>
## Final Model <a id='final_model'>
The team recommends using the Neural Network model. This model has an input layer, three hidden layers with 300, 200, and 100 neurons, respectively, each using the ReLU activation function, and one output layer with a sigmoid activation. This model provided the best fit (AUC), which was then tuned for the lowest overall cost.
### Monetary Outcome
The team recommends using the Neural Network model to minimize the Cost per Prediction. The Neural Network model had a cost per prediction of \\$1.96: an 80.4\% improvement over the naive model (\\$10.03), a 79.9\% improvement over the Logistic model (\\$9.76), and an 18\% improvement over the XGBoost model (\\$2.40). Using the recommended model yields an average cost per prediction of less than \\$2.00.
__If the customer were to make 1,000 predictions using the recommended model vs. the naive approach, the customer would save over \\$8,000.__
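The savings figure follows directly from the per-prediction costs reported above:

```python
naive_cost_per_pred = 10.03  # naive all-negative model
nn_cost_per_pred = 1.96      # recommended neural network model

savings_per_1000 = (naive_cost_per_pred - nn_cost_per_pred) * 1000
print(f"${savings_per_1000:,.0f}")  # -> $8,070
```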
### Feature Importance <a id='examining-feature-importance'>
Even though the stakeholder is not interested in the key features for prediction, below are the feature importances according to the Logistic and XGB Models. The logistic feature importance accounts for features that have a linear relationship for predicting the target variable. The XGBoost feature importance differs significantly from the logistic model because the target variable is much better predicted by its non-linear terms. There were 50 total features, of which 7 appear to be the most important for the logistic model (abs coef > 0.01) vs. 14 features for the XGBoost (F-Score > 4000).
#### Logistic Feature Importance
```
lr_tuned = linear_modeling.find_best_model('auc')
feat_coef = []
feat = zip(linear_modeling.X_train.columns, lr_tuned['model'].coef_[0])
[feat_coef.append([i,j]) for i,j in feat]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_lr = feat_coef.loc[abs(feat_coef['coef'])>0].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_lr, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.title('LR Feature Importance with L1')
plt.show()
```
#### XGB Feature Importance
```
best_xgb_model = xgb_classifier.find_best_model('auc')['model']
xgboost.plot_importance(best_xgb_model, max_num_features=20)
plt.show()
```
### Future Considerations, Model Enhancements and Alternative Modeling Approaches <a id='model-enhancements'/>
To make the model more generalizable, the team recommends adding and tuning dropout layers in the neural network model in future work. A small additional improvement could also come from building an ensemble model. Lastly, the team recommends consulting domain experts to better understand the features; better-informed feature engineering could further reduce potential losses.
```
import psycopg2
database = "rssfeed"
hostname="rssfeed.cjgj2uy1bapa.us-east-1.rds.amazonaws.com"
port="5432"
userid="postgres"
passwrd=""
conn_string = "host="+hostname+" port="+port+" dbname="+database+" user="+userid+" password="+passwrd
conn = psycopg2.connect(conn_string)
conn.autocommit=True
cursor = conn.cursor()
sqlSelect = "SELECT * FROM rss_entities_with_pub"
cursor.execute(sqlSelect)
rows = cursor.fetchall()
## Publisher url codes
import pandas as pd
from tqdm import tqdm
publishers = pd.read_json('data/rss_feed.json', typ='series')
publishers = publishers.to_dict()
pub = {v: k for k, v in publishers.items()}
rssdf = pd.DataFrame(rows)
rssdf.columns = ['publisher_code','entity','publish_time']
rssdf["publisher"] = rssdf["publisher_code"].map(pub)
rssdf.head()
from datetime import datetime
rssdf['24hour_bucket'] = pd.to_datetime(rssdf['publish_time'],errors='coerce',format = '%Y-%m-%dT%H:%M:%S+00:00',\
infer_datetime_format = True, cache = True)
print(rssdf['publish_time'][0])
rssdf.head()
rssdf['dates']= rssdf['24hour_bucket'].dt.date
rssdf["dates"] = pd.to_datetime(rssdf['dates'])
rssdf.head(3)
```
## Generate Dataset
```
import datetime
import numpy as np
newDf = pd.DataFrame(rssdf.loc[:, ["entity", "dates"]])
newDf = newDf.sort_values(by = "dates")
newDf = newDf.reset_index(drop=True)
newDf.head(3)
# selecting date range
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
newDf = newDf[newDf.dates > pd.Timestamp(2017, 1, 1)]
newDf = newDf[newDf.dates < pd.Timestamp(2020, 3, 8)]
newDf = newDf.reset_index(drop=True)
newDf.head(3)
import datetime as dt
time_range = pd.date_range(start = "2017-01-01", end = "2020-03-08", freq = "D").to_series()
list_range = list(zip(time_range.dt.strftime('%Y-%m-%d'), time_range.shift(-4).dropna().dt.strftime('%Y-%m-%d')))
list_range[-1][1]
```
## Summary
Thoughts: for each row (7-day period), we have a collection of entities and rank them by their frequencies within each sliding window.
Rank an entity by its frequency over the entire dataset, its frequency among the sliding windows, and its frequency within its own 7-day window, then assign a weight to each time window (similar to TF-IDF).
1. Score for the sliding window: 30 - 1
2. Score for the entity within each sliding window: 10 - 1
3. Weight for each entity across all sliding windows: (number of times the entity appeared) / (total number of entity appearances)
Final score for the entity: (part_1 + part_2) * part_3
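A hypothetical sketch of how this scoring scheme might combine the three parts (the function name and rank conventions below are assumptions for illustration, not code from the notebook):

```python
def entity_score(window_rank, entity_rank, entity_count, total_count):
    """window_rank: 1 = highest-scoring sliding window (scores 30 down to 1);
    entity_rank: 1 = most frequent entity in that window (scores 10 down to 1);
    entity_count / total_count: the entity's global frequency weight."""
    part_1 = 30 - (window_rank - 1)      # score for the sliding window
    part_2 = 10 - (entity_rank - 1)      # score within the window
    part_3 = entity_count / total_count  # weight over all windows
    return (part_1 + part_2) * part_3

# Top entity in the top window, seen 50 times out of 1000 total appearances:
# (30 + 10) * 0.05 = 2.0
```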
## Generate the top 10 high-frequency entities for each sliding window
```
from collections import defaultdict
import operator
def generate_entity(df, list_range):
slicing_window = []
time_index = 0
start_range = [dt.datetime.strptime(list_range[0][0], '%Y-%m-%d').date(), dt.datetime.strptime(list_range[0][1], '%Y-%m-%d').date()]
dic = defaultdict(int)
total_dict = defaultdict(int)
total_entity = 0
#max_freq = 0
for i in tqdm(range(df.shape[0])):
time = df.loc[i,"dates"]
if time > start_range[1]:
dic = dict(sorted(dic.items(), key=operator.itemgetter(1),reverse=True))
# max_freq = max(len(dic.keys() ), max_freq)
slicing_window.append(dic )
dic = defaultdict(int)
time_index += 1
start_range = [dt.datetime.strptime(list_range[time_index][0], '%Y-%m-%d').date(), dt.datetime.strptime(list_range[time_index ][1], '%Y-%m-%d').date()]
#max_freq = max(len(dic.keys() ), max_freq)
item = df.loc[i, "entity"]
total_dict[item] += 1
dic[item] += 1
total_entity += 1
total_dict = dict(sorted(total_dict.items(), key=operator.itemgetter(1),reverse=True))
#generate_df = pd.DataFrame(data = None, columns = ["Time_range"] + ["Entity" + str(i) for i in range(1, max_freq + 1)])
generate_df = pd.DataFrame(data = None, columns = ["Time_range"] + ["Entity" + str(i) for i in range(1, 11)])
generate_df["Time_range"] = list_range
for i in tqdm(range(len(slicing_window))):
temp = slicing_window[i]
#generate_df.loc[i,1:] = [ i for i in temp.keys()] + [None] * (generate_df.shape[1] - 1 - len(list(temp.keys())))
temp = list(temp.keys())
if len(temp) < 10:
temp += [None] * (10 -len(temp))
generate_df.loc[i,1:] = temp[:10]
return generate_df , total_dict, total_entity
entityCollection, totalDict, countEntity = generate_entity(newDf, list_range)
entityCollection.head()
entityCollection.iloc[0,1]
countEntity
```
## Scoring the entity from the recent month
```
from sklearn import preprocessing
def scoring(totalDict, number,entityCollection, countEntity):
unique_entity = defaultdict(int)
current_slice = 1/30
for i in tqdm(range(len(entityCollection) - 7*30, len(entityCollection))):
for j in range(1, 11):
entity = entityCollection.iloc[i, j]
if entity:
current_score = (11 - j + current_slice)
current_slice += 1/30
unique_entity[entity] = max(unique_entity[entity], current_score)
for key in list(unique_entity.keys())[:-1]:
new_score = unique_entity[key] * totalDict[key] / countEntity
unique_entity[key] = new_score
df = pd.DataFrame.from_dict(unique_entity, orient='index')
df = df[:-1].reset_index()
df.columns = ["Entity", "Score"]
min_max_scaler = preprocessing.MinMaxScaler()
x = df[['Score']].values.astype(float)
x_scaled = min_max_scaler.fit_transform(x)
df_normalized = pd.DataFrame(x_scaled).apply(lambda x: round(x * 100, 2))
df[["Score"]] = df_normalized
df = df.sort_values('Score', ascending= False).reset_index().drop("index", axis = 1)
return df
entity_rank = scoring(totalDict, 30, entityCollection, countEntity)
entity_rank.head(30)
events_df = pd.read_csv('events_df.csv')
events_df.head(10)
#@title Mapping entity to events
import ast
def generate_event_dic(events_df):
"""
Generate a dictionary which map entity to the event.
Assume no overlapping entity among these events.
"""
event_collection = events_df.features.unique()
event_map = {}
for events in event_collection:
event = ast.literal_eval(events)
for entity in event:
if entity:
event_map[entity] = events
return event_map
event_dic = generate_event_dic(events_df)
def generate_event(entity_rank, event_dic):
event = []
for i in range(entity_rank.shape[0]):
if entity_rank.iloc[i, 0] in event_dic:
event.append(event_dic[entity_rank.iloc[i, 0]])
else:
event.append(entity_rank.iloc[i, 0])
entity_rank["Event"] = event
return entity_rank
entity_rank2 = generate_event(entity_rank, event_dic)
entity_rank2.head(20)
```
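The 0-100 normalisation done with `MinMaxScaler` inside `scoring` can be sketched without scikit-learn; `min_max_to_100` is a hypothetical helper:

```python
def min_max_to_100(scores):
    """Rescale raw scores to [0, 100] with 2 decimals, as the
    MinMaxScaler + round(x * 100, 2) combination above does."""
    lo, hi = min(scores), max(scores)
    return [round(100 * (s - lo) / (hi - lo), 2) for s in scores]

print(min_max_to_100([2, 5, 10]))  # [0.0, 37.5, 100.0]
```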
## Improve
This is a simple but efficient approach to score the entities. We can see that it really caught some current events, such as the coronavirus and Kobe Bryant. Since we want to capture events through entities, we should find some way to combine the most frequent entities into more specific events.
The iexfinance API no longer seems to work; for now, this example does not run.
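Even without live data, one piece of the pipeline below can be illustrated: the regime transition matrix `Pi` is estimated by counting consecutive VIX-bucket pairs and normalising each column. A minimal sketch (`estimate_transition_matrix` is a hypothetical helper mirroring the loop over `cut` below):

```python
import numpy as np

def estimate_transition_matrix(states, K):
    """Pi[j, i] ~ P(next state = j | current state = i); columns sum to 1,
    matching the Pi[cut[i+1], cut[i]] += 1 counting below."""
    Pi = np.zeros((K, K))
    for s_cur, s_next in zip(states[:-1], states[1:]):
        Pi[s_next, s_cur] += 1
    return Pi / Pi.sum(axis=0)

states = [0, 0, 1, 1, 0, 1]   # toy sequence of discretised VIX changes
Pi = estimate_transition_matrix(states, 2)
print(Pi)
```

Column-normalising (rather than row-normalising) matches the convention used later, where `Pi` is divided by `np.sum(Pi, axis=0)`.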
```
%load_ext autoreload
%autoreload 2
import numpy as np; np.random.seed(1)
import matplotlib.pyplot as plt
import pandas as pd
from extquadcontrol import dp_finite, dp_infinite, ExtendedQuadratic, \
FiniteHorizonSystem, InfiniteHorizonSystem, AffinePolicy, Policy, TimeInvariantAffinePolicy
from scipy.linalg import block_diag
import cvxpy as cvx
from iexfinance import get_historical_data
from datetime import datetime
import pandas as pd
def extended_quadratic_to_cvx(f, x):
if f.F.shape[0] == 0:
return .5 * cvx.quad_form(x, f.P) + f.q * x + .5 * f.r, []
else:
return .5 * cvx.quad_form(x, f.P) + f.q * x + .5 * f.r, [f.F * x + f.g == 0]
import quandl
quandl.ApiConfig.api_key = "INSERT API KEY HERE"  # quandl is needed for the VIX data pulled below
tickers = ['FXI','GDX','IWM','QQQ','SPY','XLF','XOP']
n = len(tickers)
P = []
for t in tickers:
df = get_historical_data(t, output_format='pandas')
P.append(np.array(df.open)[:,None])
prices = np.concatenate(P,axis=1)
vix = np.array(quandl.get('CHRIS/CBOE_VX5',start_date='2011-01-01').Open)[-prices.shape[0]:]
vix = vix[:-1]
prices = prices[1:]
i = 0
for p in prices.T:
plt.plot(p,label=tickers[i])
i += 1
plt.legend()
plt.plot(vix)
dvix = np.diff(vix)/vix[:-1]
dvix = dvix[:-1]
plt.plot(dvix)
plt.hist(dvix, bins=50);
ranges = np.r_[
-np.logspace(-2.5,np.log10(-np.min(dvix)+1e-4),3)[::-1],
np.logspace(-2.5,np.log10(np.max(dvix)+1e-4),3)
]
np.set_printoptions(precision=3)
ranges
cut = pd.cut(dvix,ranges,labels=np.arange(len(ranges)-1))
cut.value_counts()
K = len(cut.value_counts())
Pi = 0*np.ones((K,K))
for i in range(len(cut)-1):
Pi[cut[i+1],cut[i]] += 1
Pi/=np.sum(Pi,axis=0)
plt.imshow(Pi)
Pi
# Pi = np.eye(K)
returnslog = np.log(1+np.diff(prices,axis=0)/prices[:-1])
returnslog = returnslog[1:]
mus = []
sigmas = []
means = []
covs = []
for i in range(K):
mu = np.mean(returnslog[cut==i],axis=0)
sigma = np.cov(returnslog[cut==i].T) + 1e-10*np.eye(n)
mus.append(mu)
sigmas.append(sigma)
mean = np.exp(mu + .5*np.diag(sigma))
covariance = np.diag(mean)@(np.exp(sigma)-np.ones((n,n)))@np.diag(mean)
means.append(mean)
covs.append(covariance)
means
np.set_printoptions(precision=5,suppress=True)
plt.figure(figsize=(14,10))
for i in range(n):
plt.plot([m[i] for m in means], c='black')
plt.figure(figsize=(14,10))
for i in range(n):
plt.plot([np.sqrt(c[i,i]) for c in covs], c='black')
means[4]
covs[4]
w = cvx.Variable(n)
ws = []
for gamma in np.logspace(-1,5,100):
objective = -means[2]*w + gamma/2*cvx.quad_form(w,covs[2])
constraints = [np.ones(n)*w == 1, w >=0]
problem = cvx.Problem(cvx.Minimize(objective),constraints)
result = problem.solve(solver='OSQP')
ws.append(np.array(w.value)[:,None])
ws = np.concatenate(ws,axis=1)
for i in range(n):
plt.semilogx(np.logspace(-1,5,100),ws[i], label=tickers[i])
plt.legend()
b = [prices[-1]*1e-7 for _ in range(K)]
gamma = 1e-1
T = 30
N = 30
def sample(t,N):
As = np.zeros((N,K,n,n))
Bs = np.zeros((N,K,n,n))
cs = np.zeros((N,K,n))
for s in range(K):
mu,sigma=mus[s],sigmas[s]
r = np.exp(np.random.multivariate_normal(mu,sigma,size=N))
for i in range(n):
As[:,s,i,i] = r[:,i]
Bs[:,s,i,i] = r[:,i]
gs = []
for s in range(K):
mean,cov = means[s],covs[s]
g = ExtendedQuadratic(np.c_[np.r_[gamma*cov,gamma*cov],np.r_[gamma*cov,gamma*cov+np.diag(b[s])]],-np.r_[mean,mean],0,np.c_[np.zeros((1,n)),np.ones((1,n))],np.zeros(1))
gs.append(g)
return As,Bs,cs,[gs for _ in range(N)],Pi
sample_infinite = lambda N: sample(0,N)
g_T = [ExtendedQuadratic(np.zeros((n,n)),np.zeros(n),0) for s in range(K)]
Vs, Qs, policies = dp_finite(sample,g_T,T,N)
V = Vs[0]
Q = Qs[0]
policy = policies[0]
Kpolicy,kpolicy = policy[0]
# Kpolicy
xstar = -np.linalg.lstsq(Kpolicy,kpolicy,rcond=None)[0]
np.allclose(-Kpolicy@xstar,kpolicy)
np.set_printoptions(precision=3,suppress=False)
for i in range(n):
print (tickers[i], (xstar/np.sum(xstar))[i])
np.linalg.eigvals(Kpolicy)
system = InfiniteHorizonSystem(sample_infinite,K)
final_wealths = []
np.random.seed(1)
for i in range(100):
Xs, Us, Modes, cost = system.simulate(1000*np.ones(n),3,30,TimeInvariantAffinePolicy(policy))
if i <= 15:
plt.plot(np.sum(Xs,axis=1),color='black')
final_wealths.append(np.sum(Xs,axis=1)[-1])
plt.ylabel('total value')
plt.xlabel('t')
plt.savefig('figs/portfolio1.pdf')
Xs.shape
# Xs/np.sum(Xs,axis=1)[:,None]
i = 0
for x in (Xs/Xs.sum(axis=1)[:,None]).T:
plt.plot(x, label=tickers[i])
i += 1
plt.legend()
plt.ylabel('allocation')
plt.xlabel('t')
plt.savefig('figs/portfolio2.pdf')
plt.hist(np.array(final_wealths)/(1000*n)-1.,bins=30,color='black',density=True);
plt.xlim(-.125,.125)
plt.ylabel('count')
plt.xlabel('returns')
plt.savefig('figs/portfolio3.pdf')
100*(np.mean(np.array(final_wealths)/(1000*n))-1)
np.std(100*np.array(final_wealths)/(1000*n))
plt.step(np.arange(len(Modes)),np.array(Modes)+1, color='black')
plt.ylabel('mode')
plt.xlabel('t')
plt.savefig('figs/portfolio4.pdf')
# plt.plot(np.sum(np.abs(Us),axis=1),color='black')
# plt.ylabel('trading volume')
# plt.xlabel('t')
# plt.savefig('figs/portfolio3.pdf')
class LongOnlyPolicy(Policy):
def __init__(self, Q):
self.Q = Q
def __call__(self, t, x, s):
Q = self.Q[s]
Q = Q.affine_composition(np.r_[np.zeros((n,n)),np.eye(n)],np.r_[x,np.zeros(n)])
u = cvx.Variable(n)
Q, constraints = extended_quadratic_to_cvx(Q, u)
problem = cvx.Problem(cvx.Minimize(Q), constraints+[x+u>=1e-3])
result = problem.solve(solver='OSQP')
return np.array(u.value).flatten()
np.random.seed(1)
final_wealths = []
for _ in range(20):
Xs, Us, Modes, cost = system.simulate(1000*np.ones(n),3,T,LongOnlyPolicy(Q))
plt.plot(np.sum(Xs,axis=1),color='black')
final_wealths.append(np.sum(Xs,axis=1)[-1])
plt.savefig('figs/portfolio1-longonly.pdf')
np.mean(np.array(final_wealths)/(1000*n))
np.std(np.array(final_wealths)/(1000*n))
i = 0
for x in (Xs/Xs.sum(axis=1)[:,None]).T:
plt.plot(x, label=tickers[i])
i += 1
plt.legend()
plt.savefig('figs/portfolio2-longonly.pdf')
plt.step(np.arange(len(Modes)),Modes, color='black')
plt.savefig('figs/portfolio4-longonly.pdf')
plt.plot(np.sum(np.abs(Us),axis=1),color='black')
plt.savefig('figs/portfolio3-longonly.pdf')
```
```
import requests
import numpy as np
import pandas as pd
from pathlib import Path
from RISparser import read, TAG_KEY_MAPPING, LIST_TYPE_TAGS
# visualization
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
```
## Read files from Zenodo
```
url_included = "https://zenodo.org/record/3625931/files/DOKU_All%20Included_20200116_cap.txt"
url_abstract_screening = "https://zenodo.org/record/3625931/files/DOKU_All%20FT-Screening_20200116_cap.txt"
url_all = "https://zenodo.org/record/3625931/files/DOKU_All%20TiAb-Screening_20200116_cap.txt"
list_keys = [TAG_KEY_MAPPING[k] for k in LIST_TYPE_TAGS]
def read_ris_to_df(url):
"""Read RIS and return pandas DataFrame"""
# download data and split into lines
r = requests.get(url)
r.encoding = "utf-8-sig"
lines = r.text.split("\r\n")
# merge the field with multiple values
items = []
for item in read(lines):
for k, v in item.items():
if k in list_keys and item[k] is not None:
item[k] = ";".join(item[k])
items.append(item)
return pd.DataFrame(items)
df_all = read_ris_to_df(url_all)
df_abstract_screening = read_ris_to_df(url_abstract_screening)
df_included = read_ris_to_df(url_included)
```
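The list-type field merging inside `read_ris_to_df` can be illustrated on a toy record; `flatten_item` and `list_type_keys` are hypothetical names standing in for the loop over `list_keys` above:

```python
# Fields that RIS parsing returns as lists, collapsed to ";"-joined strings
list_type_keys = ["authors", "keywords"]

def flatten_item(item):
    """Join list-valued fields into single strings, skipping None values."""
    for k, v in item.items():
        if k in list_type_keys and v is not None:
            item[k] = ";".join(v)
    return item

record = {"title": "Some paper", "authors": ["Doe, J.", "Roe, R."], "keywords": None}
print(flatten_item(record))
# {'title': 'Some paper', 'authors': 'Doe, J.;Roe, R.', 'keywords': None}
```

Collapsing the lists before `pd.DataFrame(items)` keeps every column scalar, which makes the later string operations and CSV export straightforward.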
## Clean datasets
### Step 1: Add `record_id`, `label_included` & `label_abstract_screening`
```
# add record_id
df_all.insert(0, "record_id", df_all.index + 1)
# titles of inclusions after full text screening
included_title = df_included["title"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower()
# authors of inclusions after full text screening
included_authors = df_included["authors"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower()
# titles of inclusions after abstract screening
abstract_screening_title = df_abstract_screening["title"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower()
# authors of inclusions after abstract screening
abstract_screening_authors = df_abstract_screening["authors"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower()
# check if included records are missing from the full dataset
df_all = df_all.assign(title_clean=df_all["title"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower())
df_all = df_all.assign(authors_clean=df_all["authors"].str.replace("[^A-Za-z0-9]", "", regex=True).str.lower())
print("Papers included, missing from full dataset: ", (~included_title.isin(df_all["title_clean"]) & ~included_authors.isin(df_all["authors_clean"])).sum(), "\n")
print("Papers in abstract screening, missing from full dataset: ", (~abstract_screening_title.isin(df_all["title_clean"]) & ~abstract_screening_authors.isin(df_all["authors_clean"])).sum(), "\n")
# add labels
label_included = df_all["title_clean"].isin(included_title) & df_all["authors_clean"].isin(included_authors)
label_abstract_screening = (df_all["title_clean"].isin(abstract_screening_title) & df_all["authors_clean"].isin(abstract_screening_authors)) | label_included
df_all = df_all.assign(label_included=label_included.astype(int), label_abstract_screening=label_abstract_screening.astype(int))
```
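The title/author matching above relies on stripping non-alphanumerics and lower-casing before comparison; a quick standalone illustration of why this normalisation is needed (`normalize` is a hypothetical helper mirroring the `str.replace("[^A-Za-z0-9]", "", regex=True).str.lower()` calls):

```python
import re

def normalize(text):
    """Strip non-alphanumerics and lower-case, as done above for matching."""
    return re.sub(r"[^A-Za-z0-9]", "", text).lower()

a = "Wilson's Disease: A Review"
b = "Wilsons disease -- a review"
print(normalize(a) == normalize(b))  # True
```

Without this step, trivial punctuation and casing differences between the exports would make the included records appear missing from the full dataset.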
### Step 2: Find duplicate records and add `duplicate_record_id`
```
# find duplicates based on title and authors
df_all.sort_values(["label_included", "label_abstract_screening"], ascending=False, inplace=True)
duplicate = df_all.duplicated(subset=["title_clean", "authors_clean"])
df_all["duplicate_record_id"] = np.where(duplicate, 1, np.nan)
# if duplicate, duplicate_id indicates the corresponding record_id, otherwise NA
for i in range(len(df_all)):
if df_all.loc[i, "duplicate_record_id"] == 1:
df_all.loc[i, "duplicate_record_id"] = df_all.loc[~duplicate & df_all.loc[~duplicate, "title_clean"].isin([df_all.loc[i, "title_clean"]]) & df_all.loc[~duplicate, "authors_clean"].isin([df_all.loc[i, "authors_clean"]]), "record_id"].tolist()
df_all.duplicate_record_id = df_all.duplicate_record_id.astype("Int64")
```
### Step 3: Sort by original order and retain useful columns
```
df_all.sort_values("record_id", inplace=True)
df_all = df_all[["record_id", "title", "abstract", "keywords", "authors", "year", "date", "doi", "label_included", "label_abstract_screening", "duplicate_record_id"]]
```
## Export datasets
```
Path("output").mkdir(parents=True, exist_ok=True)
df_all.to_csv("output/Appenzeller-Herzog_2020.csv", index=False)
```
## Dataset statistics
### Summary of inclusions and exclusions
```
n = len(df_all)
n_dup = (~df_all["duplicate_record_id"].isna()).sum()
n_wo_dup = n - n_dup
n_inc = df_all.loc[df_all.duplicate_record_id.isna(), "label_included"].sum()
n_inc_abs = df_all.loc[df_all.duplicate_record_id.isna(), "label_abstract_screening"].sum()
n_exc = n_wo_dup - n_inc
n_exc_abs = n_wo_dup - n_inc_abs
n_exc_full = n_inc_abs - n_inc
print("Total number of papers: ", n, "(includes", n_dup, "duplicates) \n")
print("Total number of papers without duplicates: ", n_wo_dup, "\n\n")
print("Following statistics calculated without duplicates: \n")
print("Total number of EXCLUSIONS: ", n_exc, "\n")
print("Total EXCLUSIONS after abstract screening: ", n_exc_abs, "\n")
print("Total INCLUSIONS after abstract screening: ", n_inc_abs, "\n")
print("Total EXCLUSIONS after full text screening: ", n_exc_full, "\n")
print("Total INCLUSIONS after full text screening: ", n_inc, " (", round(100*n_inc/n_wo_dup, 2), "% )\n")
# number of papers over years
df_all.groupby("year").size().reset_index(name="count").set_index("year").plot(figsize=(15,5))
plt.title("Number of papers over years")
plt.show()
```
### Missingness of title and abstract
```
print("Number of papers with missing title: ", df_all["title"].isna().sum(), "\n")
print("Number of papers with missing abstract: ", df_all["abstract"].isna().sum(), "\n")
print("Number of papers with missing title AND abstract: ", (df_all["title"].isna() & df_all["abstract"].isna()).sum(), "\n")
print("Number of papers with missing title OR abstract: ", (df_all["title"].isna() | df_all["abstract"].isna()).sum(), "\n\n")
# missing abstract over years
df_all["abstract"].isna().groupby(df_all["year"]).sum().astype(int).\
reset_index(name="count").set_index("year").\
plot(figsize=(15,5))
plt.title("Number of papers with missing abstract over years")
plt.show()
```
### Word cloud for titles and abstracts
```
# create stopword list
stopwords = set(STOPWORDS)
stopwords.update(["patient", "patients", "result", "results", "conclusion", "conclusions"])
# create word cloud text
title_text = " ".join(word for word in df_all.title.dropna())
abstract_text = " ".join(word for word in df_all.abstract.dropna())
print("There are {} words in the combination of all titles.".format(len(title_text.split())), "\n")
print("There are {} words in the combination of all abstracts.".format(len(abstract_text.split())), "\n")
# generate word cloud images
title_wordcloud = WordCloud(stopwords=stopwords, max_words=100, background_color="white").generate(title_text)
abstract_wordcloud = WordCloud(stopwords=stopwords, max_words=100, background_color="white").generate(abstract_text)
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=[15, 15])
ax1.imshow(title_wordcloud, interpolation="bilinear")
ax1.set_title("Titles of the dataset")
ax1.axis("off")
ax2.imshow(abstract_wordcloud, interpolation="bilinear")
ax2.set_title("Abstracts of the dataset")
ax2.axis("off")
plt.show()
```
# Methodological approach
### Models
- Baseline (TF-IDF + SVM with preprocessing): Train + cross-validation (default, 5 folds)
- Transformers: Validation is a random sample of Train (10%). No cross-validation implemented yet, since it is not trivial
Both model classes use _class weights_ to address the class imbalance problem and to increase the effect on the loss of the minority classes. Each weight is the inverse of the class frequency, i.e. `n_samples / (n_classes * np.bincount(y))`.
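The weight formula can be sketched and checked directly; `class_weights` is a hypothetical helper implementing the heuristic stated above:

```python
import numpy as np

def class_weights(y):
    """n_samples / (n_classes * np.bincount(y)): rare classes get large weights."""
    y = np.asarray(y)
    counts = np.bincount(y)
    return len(y) / (len(counts) * counts)

y = [0, 0, 0, 0, 1]        # 4 negatives, 1 positive
print(class_weights(y))    # class 0: 5/(2*4) = 0.625, class 1: 5/(2*1) = 2.5
```

The minority class thus contributes more per sample to the loss, which is the intended counterweight to the heavy negative skew in the realistic scenarios below.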
### Evaluation approach
1) Train the models on the train set
2) On each epoch, run evaluation with validation set and evaluation metric the Precision-Recall AuC
3) Load at the end of training the model at the best performing epoch checkpoint.
4) Do hyperparameter search and select the best model (of each model type)
5) Predict the class labels for the __entire__ train set (so train + validation) and calculate the ROC/PR curves and AuC.
6) Calculate the J-Stat (optimizing point in ROC space) and the F1 maximizing point in the Precision-Recall space
7) Set class thresholds according to the F1 maximizing point
8) Predict on the test set for model comparison (among model types)
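Steps 6-7 above (finding the F1-maximising operating point in precision-recall space) can be sketched without scikit-learn; `f1_maximizing_threshold` is a hypothetical helper, not the code used for the thesis results:

```python
import numpy as np

def f1_maximizing_threshold(y_true, scores):
    """Scan each candidate threshold and return the one maximising F1."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

y = np.array([0, 0, 1, 1, 1])
s = np.array([0.1, 0.6, 0.4, 0.8, 0.9])
best = f1_maximizing_threshold(y, s)
print(best)
```

The J-statistic point in step 6 is found analogously, maximising TPR - FPR over the same candidate thresholds.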
### Scenarios
The following "scenarios" are defined:
- _optimistic_: Use only _positive_ labels for training, validation and test. This should give a "ceiling benchmark" of how well the _positive_ paragraphs can be separated/classified among themselves.
- _efficient_realistic_: Use _Opportunities_ as _negative_ labels in training, use all negatives in test.
- _realistic_: Use all negatives (opportunities, "weak" and "strong" from labelling process) for train and test.
| scenarios | train | test |
| ------------- |:---------------:| --------------:|
| optimistic | P: 279 N: 0 | P: 56 N:0 |
| efficient | P: 279 N: 825 | P: 56 N: 28155 |
| realistic | P: 279 N: 27533| P: 56 N: 28155 |
--> Note: Positives are counts of paragraphs with at least one positive label and negatives are those with all 0's.
### Tasks
- _binary_: Identification of CR relevant/irrelevant as baseline task.
- _multi_label_cro_: Classification task of _Physical Risks_ and _Transition Risks_ with multi-labelling
- _multi_label_cro_sub_: Classification task of the 5 sub categories from _PR_ and _TR_
_multi_label_cro_sub_ is done as a second step after a _binary_ identification step. Train: all positives. Test: the overall test dataset, where paragraphs that received a negative in the previous step are set to negative and included in the evaluation metrics, to simulate a real-world scenario in which the first step acts as a filter. Results are still pending here, since there seems to be an issue loading pretrained models: https://github.com/huggingface/transformers/issues/8272.
# Results
Tables for each task below. As we already know, the models perform well in the "naive" scenarios and poorly once the negatives are considered. Performance improves once the full negative training data is provided. The relatively small and efficient distilbert-base-uncased performs best, beating the baseline but also roberta-large (in fact also other, bigger models; this certainly needs some investigation). The best test-set PR AuC and F1-score (at the threshold set according to the train results) is at 48% for the binary task and at/below 40% for the multi-label tasks.
# Questions
- Cross-validation with Transformers: What do we gain? More robust estimates of the validation metrics? Which model do we load at the end?
- Step 3 in Evaluation approach. For some models/scenarios, if more than 2-3 epochs are used, the eval loss starts increasing while the ROC-AuC/PR-AuC also increase, suggesting overfitting. Should we switch back to "loss" for best model selection?
- Step 4/5, selection of the thresholds: Is that o.k to run it on the entire train set? Alternative: Only validation set...
# Way forward (not in scope of the thesis)
- More data: Maybe we can invert the problem, e.g. consider the train/test set as the test set, since there we have "negatives" (and as such a ground truth), and then start collecting training data (where we do not really have to have the ground truth)
- Revisit the paragraph approach: Split paragraphs in sentences
- Investigate labelled data after prediction, i.e. look at the most confusing examples etc to maybe find a pattern or label correction
# Dataset
```
%load_ext autoreload
%autoreload 2
import sys
import os
import pandas as pd
sys.path.append('..')
from data import constants
from data import cro_dataset
from data.utils import tables
ds_pos = cro_dataset.prepare_datasets(
cro_category_level="cro_sub_type_combined", #cro_sub_type_combined
should_filter_op=True,
train_neg_sampling_strategy=None,
test_neg_sampling_strategy=None,
as_huggingface_ds=True
)
ds_neg = cro_dataset.prepare_datasets(
cro_category_level="cro_sub_type_combined", #cro_sub_type_combined
should_filter_op=True,
train_neg_sampling_strategy="all",
test_neg_sampling_strategy="all",
as_huggingface_ds=True
)
ds_op = cro_dataset.prepare_datasets(
cro_category_level="cro_sub_type_combined", #cro_sub_type_combined
should_filter_op=False,
train_neg_sampling_strategy=None,
test_neg_sampling_strategy=None,
as_huggingface_ds=True
)
# Also read the negatives from the adjunct fix
ds_train_neg = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_Training_Negative_incl_adjunct.pkl")
number_adjunct_fix = len(ds_train_neg.query("is_adjunct == True"))
class_labels = ds_pos['train'].features['labels'].feature.names
train_df = pd.DataFrame(data=ds_pos['train']['labels'], columns=class_labels)
test_df = pd.DataFrame(data=ds_pos['test']['labels'], columns=class_labels)
df_labels = pd.DataFrame(data={"Training": train_df.sum(), "Test": test_df.sum() })
df_labels.rename(index=constants.map_to_field(), inplace=True)
df_labels.loc["Positive paragraphs"] = [ ds_pos['train'].num_rows, ds_pos['test'].num_rows]
df_labels.loc['Negative paragraphs'] = [ ds_neg['train'].num_rows - ds_pos['train'].num_rows + number_adjunct_fix, ds_neg['test'].num_rows - ds_pos['test'].num_rows]
df_labels.loc['Opportunities'] = [ ds_op['train'].num_rows - ds_pos['train'].num_rows, ds_op['test'].num_rows - ds_pos['test'].num_rows ]
tables.export_to_latex(df_labels, filename="labels_dataset.tex")
```
# Results
```
import os
import re
import pandas as pd
RESULT_DIR = "/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Methodology/results"
try:
import google.colab
is_running_in_colab = True
except:
is_running_in_colab = False
if is_running_in_colab:
# Load Google drive where the data and models are stored
from google.colab import drive
drive.mount('/content/drive')
RESULT_DIR = "/content/drive/MyDrive/fin-disclosures-nlp/results/"
scenarios = ["optimistic", "efficient-realistic", "realistic"]
models = ["svm", "distilbert-base-uncased", "roberta-large"]
prefixes = ['test', 'train'] # 'eval' would also be there, however overlapping 'eval_roc_auc', 'eval_pr_auc',
report_columns = ['train_ROC AuC', 'train_PR AuC', 'test_ROC AuC', 'test_PR AuC', 'test_F1']
def sort_first_idx(idx):
mapper = {name: order for order, name in enumerate(scenarios)}
return idx.map(mapper)
def sort_second_idx(idx):
mapper = {name: order for order, name in enumerate(models)}
return idx.map(mapper)
def report_results(df):
df[['scenario', 'model']] = df.id.str.split('_', 1, expand=True)
# Set the row multi-index
df = df.set_index(['scenario', 'model'])
df = df.sort_index(key=sort_second_idx, level="model", sort_remaining=False).sort_index(key=sort_first_idx, level="scenario", sort_remaining=False)
df = df[[r for r in report_columns if r in df.columns ]]
# Set the column multi-index
first_lvl = []
second_lvl = []
for c in df.columns:
splits = c.split("_", 1)
first = splits[0] if splits[0] in prefixes else ""
second = splits[1] if splits[0] in prefixes else c
first_lvl.append(first)
second_lvl.append(second)
df.columns = [first_lvl, second_lvl]
df = df.round(3)
return df
binary_df = pd.read_csv(os.path.join(RESULT_DIR, "cro_sub_type_combined_binary_results.csv"))
multilabel_df = pd.read_csv(os.path.join(RESULT_DIR, "cro_multi-label_results.csv"))
print("Binary Task: ")
binary_report = report_results(binary_df)
binary_report
print("Multi-Label Task: ")
multilabel_report = report_results(multilabel_df)
multilabel_report
results_df = binary_report.merge(multilabel_report, left_index=True, right_index=True, suffixes=('_binary', '_multilabel'))
tables.export_to_latex(results_df, filename="methodology_results.tex")
results_df
import os
from IPython.display import Image
print("Note: These ROC and P-R curves are the plots after training, on the entire TRAIN set and are used to find the optimal threshold values (dot).")
path = "/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Methodology/results/figures/"
Image(filename = os.path.join(path, "cro_sub_type_combined_binary_realistic_distilbert-base-uncased_train_threshold.pdf.jpg"))
from IPython.display import Image
print("Confusion matrix on the test set...")
path = "/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Methodology/results/figures/"
Image(filename = os.path.join(path, "cro_sub_type_combined_binary_realistic_distilbert-base-uncased_test_evaluation.pdf.jpg"))
```
# 2-Stage: use the "test_predictions.pkl" in the labels folder and the thresholds from the results folders (or here:
1Stage: [0.69809985]
2Stage: [0.19047296, 0.25015372, 0.36645192, 0.27023202, 0.2140553])
1. Plot the confusion matrix of the first stage
2. Plot the decision threshold of the second stage
3. Plot the confusion matrix of the combined second stage of all 5 categories and converted to main categories
```
import pandas as pd
two_stage = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Methodology/data/labels/test_predictions.pkl")
two_stage.binary_pred_labels
two_stage.multilabel_pred_labels
```
# Four Muon Spectrum
This code is another showcase of the awkward-array toolset, utilizing coffea histograms in addition to more advanced functionality.
It shows the analysis-object syntax implemented by the coffea `JaggedCandidateArray`, along with a multi-tiered physics selection and the usage of an accumulator class provided by FCAT. We now add in the concept of corrections as well, in the case of a Monte Carlo sample.
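Before diving in, it may help to recall the key kinematic quantity: the invariant mass of a muon pair, computed from each muon's (pt, eta, phi). A minimal NumPy sketch, neglecting the muon masses (a good approximation at these energies; `dimuon_mass` is a hypothetical helper, while the processor below uses the full four-vectors instead):

```python
import numpy as np

def dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two approximately massless particles:
    m^2 = 2 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2))
    return np.sqrt(m2)

# Two back-to-back 45 GeV muons reconstruct to ~90 GeV, near the Z peak
print(dimuon_mass(45.0, 0.0, 0.0, 45.0, 0.0, np.pi))  # 90.0
```

This is exactly the quantity whose distribution the `mass_near`/`mass_far` histograms below probe around the Z resonance.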
```
import time
from coffea import hist
from coffea.analysis_objects import JaggedCandidateArray
import coffea.processor as processor
from awkward import JaggedArray
import numpy as np
# uproot supports xrootd, but it's nicer to have the files local (about 7 GB)
!mkdir -p data
!xrdcp root://eospublic.cern.ch//eos/root-eos/cms_opendata_2012_nanoaod/Run2012B_DoubleMuParked.root data/
!xrdcp root://eospublic.cern.ch//eos/root-eos/cms_opendata_2012_nanoaod/Run2012C_DoubleMuParked.root data/
!xrdcp root://eospublic.cern.ch//eos/root-eos/cms_opendata_2012_nanoaod/ZZTo4mu.root data/
# Look at ProcessorABC to see the expected methods and what they are supposed to do
class FancyDimuonProcessor(processor.ProcessorABC):
def __init__(self):
dataset_axis = hist.Cat("dataset", "Primary dataset")
mass_axis = hist.Bin("mass", r"$m_{\mu\mu}$ [GeV]", 600, 0.25, 300)
pt_axis = hist.Bin("pt", r"$p_{T,\mu}$ [GeV]", 3000, 0.25, 300)
self._accumulator = processor.dict_accumulator({
'mass': hist.Hist("Counts", dataset_axis, mass_axis),
'mass_near': hist.Hist("Counts", dataset_axis, mass_axis),
'mass_far': hist.Hist("Counts", dataset_axis, mass_axis),
'pt_lead': hist.Hist("Counts", dataset_axis, pt_axis),
'pt_trail': hist.Hist("Counts", dataset_axis, pt_axis),
'cutflow': processor.defaultdict_accumulator(int),
})
@property
def accumulator(self):
return self._accumulator
def process(self, df):
output = self.accumulator.identity()
dataset = df['dataset']
muons = JaggedCandidateArray.candidatesfromcounts(
df['nMuon'],
pt=df['Muon_pt'],
eta=df['Muon_eta'],
phi=df['Muon_phi'],
mass=df['Muon_mass'],
charge=df['Muon_charge'],
softId=df['Muon_softId'],
tightId=df['Muon_tightId']
)
output['cutflow']['all events'] += muons.size
soft_id = (muons.softId > 0)
muons = muons[soft_id]
output['cutflow']['soft id'] += soft_id.any().sum()
twomuons = (muons.counts >= 2)
output['cutflow']['two muons'] += twomuons.sum()
dimuons = muons[twomuons].distincts()
twodimuons = (dimuons.counts >= 2)
output['cutflow']['>= two dimuons'] += twodimuons.sum()
dimuons = dimuons[twodimuons]
opposite_charge = (dimuons.i0['charge'] * dimuons.i1['charge'] == -1)
dimuons = dimuons[opposite_charge]
output['cutflow']['opposite charge'] += opposite_charge.any().sum()
mass_20GeV = (dimuons.mass > 35)
dimuons = dimuons[mass_20GeV]
exactlytwodimuons = (dimuons.counts == 2)
output['cutflow']['== two dimuons'] += exactlytwodimuons.sum()
dimuons = dimuons[exactlytwodimuons].compact()
leading_mu = (dimuons.i0.pt.content > dimuons.i1.pt.content)
pt_lead = JaggedArray.fromoffsets(dimuons.offsets, np.where(leading_mu,
dimuons.i0.pt.content, dimuons.i1.pt.content))
pt_trail = JaggedArray.fromoffsets(dimuons.offsets, np.where(~leading_mu,
dimuons.i0.pt.content, dimuons.i1.pt.content))
near_z = np.abs(dimuons.mass - 91.118).argmin()
far_z = np.abs(dimuons.mass - 91.118).argmax()
output['mass'].fill(dataset=dataset,
mass=dimuons.p4.sum().mass)
output['mass_near'].fill(dataset=dataset,
mass=dimuons.mass[near_z].flatten())
output['mass_far'].fill(dataset=dataset,
mass=dimuons.mass[far_z].flatten())
output['pt_lead'].fill(dataset=dataset,
pt=pt_lead.flatten())
output['pt_trail'].fill(dataset=dataset,
pt=pt_trail.flatten())
return output
def postprocess(self, accumulator):
return accumulator
tstart = time.time()
fileset = {
'DoubleMuon': [
'data/Run2012B_DoubleMuParked.root',
'data/Run2012C_DoubleMuParked.root',
],
'ZZ to 4mu': [
'data/ZZTo4mu.root'
]
}
output = processor.run_uproot_job(fileset,
treename='Events',
processor_instance=FancyDimuonProcessor(),
executor=processor.futures_executor,
executor_args={'workers': 6, 'flatten': True},
chunksize=500000,
)
elapsed = time.time() - tstart
print(output)
fig, ax, _ = hist.plot1d(output['mass'], overlay='dataset')
ax.set_xlim(70,150)
ax.set_ylim(0, 3000)
fig, ax, _ = hist.plot1d(output['mass_near'], overlay='dataset')
#ax.set_xscale('log')
#ax.set_yscale('log')
ax.set_xlim(60,120)
ax.set_ylim(0.1, 7500)
fig, ax, _ = hist.plot1d(output['mass_far'], overlay='dataset')
#ax.set_xscale('log')
#ax.set_yscale('log')
ax.set_ylim(0.1, 8000)
fig, ax, _ = hist.plot1d(output['pt_lead'], overlay='dataset')
#ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim(0.1, 5e3)
fig, ax, _ = hist.plot1d(output['pt_trail'], overlay='dataset')
#ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim(0.1, 2e4)
print("Events/s:", output['cutflow']['all events']/elapsed)
```
# Neural Networks
G. Richards (2016, 2018). I found this video series particularly helpful in trying to simplify the explanation: https://www.youtube.com/watch?v=bxe2T-V8XRs. Thanks also to Vince Baker (Drexel).
[Artificial Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) are a simplified computational architecture based loosely on the real neural networks found in brains.
In the image below the circles on the left represent the **attributes** of our input data, $X$, which here is 3-dimensional. The circles in the middle represent the neurons; they take in the information from the input and, based on some criterion, decide whether or not to "fire". The collective results of the neurons in the hidden layer produce the output, $y$, represented by the circles on the right, which here is a 2-dimensional result. The lines connecting the circles represent the synapses. This is a simple example with just one layer of neurons, but there can be many layers of neurons.

In more detail:
The job of a synapse is to take an input value and multiply it by some weight before passing it to the neuron (hidden layer):
$$z = \sum_i w_i x_i$$
The neuron then sums up the inputs from all of the synapses connected to it and applies an "activation function", for example a [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function) activation function:
$$a = \frac{1}{1+e^{-z}}.$$

What the neural network does is to learn the weights of the synapses that are needed to produce an accurate model of $y_{\rm train}$.
Rather than think about the inputs individually, we can write this process in matrix form as
$$X W^{(1)} = Z^{(2)}.$$
If $D$ is the number of attributes (here 3) and $H$ is the number of neurons in the hidden layer (here 4), then $X$ is an $N\times D$ matrix, while $W^{(1)}$ is a $D\times H$ matrix. The result, $Z^{(2)}$, is then an $N\times H$ matrix.
We then apply the activation function to each entry of $Z^{(2)}$ independently:
$$A^{(2)} = f(Z^{(2)}),$$
where $A^{(2)}$ is the output of the neurons in the hidden layer and is also $N\times H$.
These values are then the inputs for the next set of synapses, where we multiply the inputs by another set of weights, $W^{(2)}:$
$$A^{(2)} W^{(2)} = Z^{(3)},$$
where $W^{(2)}$ is an $H\times O$ matrix and $Z^{(3)}$ is an $N\times O$ matrix with $O$-dimensional output.
Another activation function is then applied to $Z^{(3)}$ to give
$$\hat{y} = f(Z^{(3)}),$$
which is our estimator of $y$.
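The matrix formulation above can be sketched directly in NumPy. This is an illustrative sketch: the sizes match the picture ($D=3$, $H=4$, $O=2$), the weights are random and untrained, and the point is only to check the shapes at each stage.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: N samples, D attributes, H hidden neurons, O outputs
N, D, H, O = 100, 3, 4, 2
rng = np.random.default_rng(0)

X = rng.normal(size=(N, D))        # input data
W1 = rng.normal(size=(D, H))       # first set of synapse weights
W2 = rng.normal(size=(H, O))       # second set of synapse weights

Z2 = X @ W1                        # N x H
A2 = sigmoid(Z2)                   # hidden-layer output, N x H
Z3 = A2 @ W2                       # N x O
yhat = sigmoid(Z3)                 # our estimator of y, N x O

print(X.shape, A2.shape, yhat.shape)
```

Every entry of `yhat` lies in $(0,1)$ because the sigmoid squashes its input; with untrained random weights the values themselves carry no meaning yet.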
For example we might have $N=100$ people for which we have measured
* shoe size
* belt size
* hat size
for whom we know their height and weight.
Then we are going to use this to predict the height and weight for people where we only know shoe size, belt size, and hat size.
The neural network then essentially boils down to determining the weights, which are usually initialized randomly.
We do that by minimizing the cost function (which compares the true values of $y$ to our predicted values). Typically:
$$ {\rm Cost} = J = \sum\frac{1}{2}(y - \hat{y})^2.$$
If we just had 1 weight and we wanted to check 1000 possible values, that wouldn't be so bad. But we have 20 weights, and checking 1000 possible values for each means $1000^{20}$ possible combinations. Remember the curse of dimensionality? That might take a while. Indeed, far, far longer than the age of the Universe.
How about just checking 3 points for each weight to see if we can at least figure out which way is "down hill"? That's a start.
We can rewrite $J$ as
$$ J = \sum\frac{1}{2}\left(y - f\left( f(X W^{(1)}) W^{(2)} \right) \right)^2$$
and then compute
$$\frac{\partial J}{\partial W}$$
in order to determine the slope of the cost function for each weight. This is the **gradient descent** method.
We'll want $\partial J/\partial W^{(1)}$ and $\partial J/\partial W^{(2)}$ separately. This allows us to [*backpropagate*](https://en.wikipedia.org/wiki/Backpropagation) the error contributions along each neuron and to change the weights where they most need to be changed. It is like each observation gets a vote on which way is "down hill". We compute the vector sum to decide the ultimate down hill direction.
Once we know the down hill direction from the derivative, we update the weights by subtracting a scalar times that derivative from the original weights. That's obviously much faster than randomly sampling all the possible combinations of weights. Once the weights are set, then you have your Neural Network classifier/regressor.
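For a network this small, the whole procedure (forward pass, backpropagated gradients, weight update) fits in a few lines of NumPy. This is an illustrative sketch: the data are synthetic and the sizes and learning rate are made up, but the gradient expressions are the derivatives of the cost $J$ above for sigmoid activations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
N, D, H, O = 100, 3, 4, 2
X = rng.normal(size=(N, D))
y = sigmoid(X @ rng.normal(size=(D, O)))      # synthetic targets in (0, 1)

W1 = rng.normal(size=(D, H))                  # randomly initialized weights
W2 = rng.normal(size=(H, O))
eta = 0.01                                    # the scalar multiplying the gradient

for step in range(2000):
    # forward pass
    A2 = sigmoid(X @ W1)
    yhat = sigmoid(A2 @ W2)
    # backpropagation: derivatives of J = sum((y - yhat)**2) / 2
    delta3 = -(y - yhat) * yhat * (1 - yhat)  # sigmoid'(z) = a * (1 - a)
    dJdW2 = A2.T @ delta3
    delta2 = (delta3 @ W2.T) * A2 * (1 - A2)
    dJdW1 = X.T @ delta2
    # update the weights by stepping "down hill"
    W2 -= eta * dJdW2
    W1 -= eta * dJdW1

# the cost should have decreased from its value at the random initialization
print(0.5 * np.sum((y - sigmoid(sigmoid(X @ W1) @ W2))**2))
```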

Scikit-Learn has both [unsupervised Neural Network](http://scikit-learn.org/stable/modules/neural_networks_unsupervised.html#neural-networks-unsupervised) and [supervised Neural Network](http://scikit-learn.org/stable/modules/neural_networks_supervised.html#neural-networks-supervised) examples. Apparently these are new as Jake VanderPlas didn't know about them.
Let's try to use the multi-layer perceptron regressor on the Boston house price dataset (using 75% of the data for training and 25% for testing).
```
#Execute this cell after making the test set 25% of the total data set.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
boston = load_boston()
#print boston.DESCR
X = boston.data
y = boston.target
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=42)
from sklearn.neural_network import MLPRegressor
clf = MLPRegressor(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5,2), random_state=1)
clf.fit(Xtrain, ytrain)
# Look at the weights
print([coef.shape for coef in clf.coefs_])
ypred = clf.predict(Xtest)
#print ypred, ytest
fig = plt.figure(figsize=(6, 6))
plt.scatter(ytest,ypred)
plt.xlabel("Actual Value [x$1000]")
plt.ylabel("Predicted Value [x$1000]")
plt.show()
```
Of course, that only predicts the value for a fraction of the data set. Again, we can use Scikit-Learn's [cross_val_predict](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_predict.html#sklearn.model_selection.cross_val_predict) to make predictions for the full data set.
```
from sklearn.model_selection import cross_val_predict
yCVpred = cross_val_predict(clf, X, y, cv=10) # Complete
fig = plt.figure(figsize=(6, 6))
plt.scatter(y,yCVpred)
plt.xlabel("Actual Value [x$1000]")
plt.ylabel("Predicted Value [x$1000]")
plt.show()
```
Recent interest in neural networks surged in 2012 when a team using a deep convolutional neural network achieved record results classifying objects in the [ImageNet](http://image-net.org/) data set.
This is clearly much more sophisticated than our basic perceptron. "Deep" networks consist of tens of layers with thousands of neurons. These large networks have become usable thanks to two breakthroughs: the use of sparse layers and the power of graphics processing units (GPUs).
Many image processing tasks involve convolving an image with a 2-dimensional kernel as shown below.

The sparse layers or convolutional layers in a deep network contain a large number of hidden nodes but very few synapses. The sparseness arises from the relatively small size of a typical convolution kernel (15x15 is a large kernel), so a hidden node representing one output of the convolution is connected to only a few input nodes. Compare this to our previous perceptron, in which every hidden node was connected to every input node.
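The parameter saving is easy to see by counting weights. The sketch below (pure NumPy, illustrative sizes only) implements a naive "same"-size 2-D convolution and compares the kernel's parameter count against a fully connected layer mapping the same image to the same output size.

```python
import numpy as np

def conv2d_same(image, kernel):
    """Naive 'same'-size 2-D convolution with zero padding (for illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]           # convolution flips the kernel
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * flipped)
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
kernel = np.ones((15, 15)) / 15**2         # a 15x15 mean filter; large by CNN standards

output = conv2d_same(image, kernel)

# Parameter count: a dense layer mapping a 64x64 image to a 64x64 image needs
# 64*64*64*64 weights; the convolutional layer needs only the 15x15 kernel.
print(output.shape, 64**4, 15**2)
```

Each output pixel depends on only a 15x15 patch of the input, which is exactly the sparse connectivity described above.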
Even though the total number of connections is greatly reduced in the sparse layers, the total number of nodes and connections in a modern deep network is still enormous. Luckily, training these networks turns out to be a great task for GPU acceleration! Serious work using neural networks is almost always done using specialized GPU-accelerated platforms.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import os
import glob
import random
# from crf import do_crf,post_process_crf
import imgaug
from imgaug import augmenters as iaa
from PIL import Image
from tqdm import tqdm
import matplotlib.pyplot as plt
import openslide
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, BatchNormalization, Conv2D, MaxPooling2D, AveragePooling2D, ZeroPadding2D, concatenate, Concatenate, UpSampling2D, Activation
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.applications.densenet import DenseNet121
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard
from tensorflow.keras import metrics
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms # noqa
import sklearn.metrics
import io
import itertools
from six.moves import range
import time
import cv2
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
import sys
sys.path.append(os.path.dirname(os.path.abspath(os.getcwd())))
from models.seg_models import get_inception_resnet_v2_unet_softmax, unet_densenet121
from models.deeplabv3p_original import Deeplabv3
# Random Seeds
np.random.seed(0)
random.seed(0)
tf.set_random_seed(0)
import gc
import pandas as pd
import tifffile
import skimage.io as io
# In[50]:
# Image Helper Functions
def imsave(*args, **kwargs):
"""
Concatenate the images given in args and saves them as a single image in the specified output destination.
Images should be numpy arrays and have same dimensions along the 0 axis.
imsave(im1,im2,out="sample.png")
"""
args_list = list(args)
for i in range(len(args_list)):
if type(args_list[i]) != np.ndarray:
print("Not a numpy array")
return 0
if len(args_list[i].shape) == 2:
args_list[i] = np.dstack([args_list[i]]*3)
if args_list[i].max() == 1:
args_list[i] = args_list[i]*255
out_destination = kwargs.get("out",'')
try:
concatenated_arr = np.concatenate(args_list,axis=1)
im = Image.fromarray(np.uint8(concatenated_arr))
except Exception as e:
print(e)
import ipdb; ipdb.set_trace()
return 0
if out_destination:
print("Saving to %s"%(out_destination))
im.save(out_destination)
else:
return im
def imshow(*args,**kwargs):
""" Handy function to show multiple plots in on row, possibly with different cmaps and titles
Usage:
imshow(img1, title="myPlot")
imshow(img1,img2, title=['title1','title2'])
imshow(img1,img2, cmap='hot')
imshow(img1,img2,cmap=['gray','Blues']) """
plt.figure(figsize=(10, 20))  # figsize is in inches; (1024, 2048) would request an absurdly large canvas
cmap = kwargs.get('cmap', 'gray')
title= kwargs.get('title','')
axis_off = kwargs.get('axis_off','')
if len(args)==0:
raise ValueError("No images given to imshow")
elif len(args)==1:
plt.title(title)
plt.imshow(args[0], interpolation='none')
else:
n=len(args)
if type(cmap)==str:
cmap = [cmap]*n
if type(title)==str:
title= [title]*n
plt.figure(figsize=(n*5,10))
for i in range(n):
plt.subplot(1,n,i+1)
plt.title(title[i])
plt.imshow(args[i], cmap[i])
if axis_off:
plt.axis('off')
plt.show()
def normalize_minmax(data):
"""
Normalize contrast across volume
"""
_min = float(np.min(data))
_max = float(np.max(data))
if (_max-_min)!=0:
img = (data - _min) / (_max-_min)
else:
img = np.zeros_like(data)
return img
# Functions
def BinMorphoProcessMask(mask):
"""
Binary operation performed on tissue mask
"""
close_kernel = np.ones((20, 20), dtype=np.uint8)
image_close = cv2.morphologyEx(np.array(mask), cv2.MORPH_CLOSE, close_kernel)
open_kernel = np.ones((5, 5), dtype=np.uint8)
image_open = cv2.morphologyEx(np.array(image_close), cv2.MORPH_OPEN, open_kernel)
kernel = np.ones((20, 20), dtype=np.uint8)
image = cv2.dilate(image_open,kernel,iterations = 1)
return image
def get_bbox(cont_img, rgb_image=None):
temp_img = np.uint8(cont_img.copy())
_,contours, _ = cv2.findContours(temp_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rgb_contour = None
if rgb_image is not None:
rgb_contour = rgb_image.copy()
line_color = (0, 0, 255) # blue color code
cv2.drawContours(rgb_contour, contours, -1, line_color, 2)
bounding_boxes = [cv2.boundingRect(c) for c in contours]
for x, y, w, h in bounding_boxes:  # cv2.boundingRect returns (x, y, width, height)
rgb_contour = cv2.rectangle(rgb_contour,(x,y),(x+w,y+h),(0,255,0),2)
return bounding_boxes, rgb_contour
def get_all_bbox_masks(mask, stride_factor):
"""
Find the bbox and corresponding masks
"""
bbox_mask = np.zeros_like(mask)
bounding_boxes, _ = get_bbox(mask)
y_size, x_size = bbox_mask.shape
for x, y, w, h in bounding_boxes:  # cv2.boundingRect returns (x, y, width, height)
x_min = x - stride_factor
x_max = x + w + stride_factor
y_min = y - stride_factor
y_max = y + h + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
bbox_mask[y_min:y_max, x_min:x_max]=1
return bbox_mask
def get_all_bbox_masks_with_stride(mask, stride_factor):
"""
Find the bbox and corresponding masks
"""
bbox_mask = np.zeros_like(mask)
bounding_boxes, _ = get_bbox(mask)
y_size, x_size = bbox_mask.shape
for x, y, w, h in bounding_boxes:  # cv2.boundingRect returns (x, y, width, height)
x_min = x - stride_factor
x_max = x + w + stride_factor
y_min = y - stride_factor
y_max = y + h + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
bbox_mask[y_min:y_max:stride_factor, x_min:x_max:stride_factor]=1
return bbox_mask
def find_largest_bbox(mask, stride_factor):
"""
Find the largest bounding box encompassing all the blobs
"""
y_size, x_size = mask.shape
x, y = np.where(mask==1)
bbox_mask = np.zeros_like(mask)
x_min = np.min(x) - stride_factor
x_max = np.max(x) + stride_factor
y_min = np.min(y) - stride_factor
y_max = np.max(y) + stride_factor
if x_min < 0:
x_min = 0
if y_min < 0:
y_min = 0
if x_max > x_size:
x_max = x_size - 1
if y_max > y_size:
y_max = y_size - 1
bbox_mask[x_min:x_max, y_min:y_max]=1
return bbox_mask
def TissueMaskGeneration(slide_obj, level, RGB_min=50):
img_RGB = slide_obj.read_region((0, 0),level,slide_obj.level_dimensions[level])
img_RGB = np.transpose(np.array(img_RGB.convert('RGB')),axes=[1,0,2])
img_HSV = rgb2hsv(img_RGB)
background_R = img_RGB[:, :, 0] > threshold_otsu(img_RGB[:, :, 0])
background_G = img_RGB[:, :, 1] > threshold_otsu(img_RGB[:, :, 1])
background_B = img_RGB[:, :, 2] > threshold_otsu(img_RGB[:, :, 2])
tissue_RGB = np.logical_not(background_R & background_G & background_B)
tissue_S = img_HSV[:, :, 1] > threshold_otsu(img_HSV[:, :, 1])
min_R = img_RGB[:, :, 0] > RGB_min
min_G = img_RGB[:, :, 1] > RGB_min
min_B = img_RGB[:, :, 2] > RGB_min
tissue_mask = tissue_S & tissue_RGB & min_R & min_G & min_B
# r = img_RGB[:,:,0] < 235
# g = img_RGB[:,:,1] < 210
# b = img_RGB[:,:,2] < 235
# tissue_mask = np.logical_or(r,np.logical_or(g,b))
return tissue_mask
def TissueMaskGenerationPatch(patchRGB):
'''
Returns mask of tissue that obeys the threshold set by paip
'''
r = patchRGB[:,:,0] < 235
g = patchRGB[:,:,1] < 210
b = patchRGB[:,:,2] < 235
tissue_mask = np.logical_or(r,np.logical_or(g,b))
return tissue_mask
def TissueMaskGeneration_BIN(slide_obj, level):
img_RGB = np.transpose(np.array(slide_obj.read_region((0, 0),
level,
slide_obj.level_dimensions[level]).convert('RGB')),
axes=[1, 0, 2])
img_HSV = cv2.cvtColor(img_RGB, cv2.COLOR_BGR2HSV)
img_S = img_HSV[:, :, 1]
_,tissue_mask = cv2.threshold(img_S, 0, 255, cv2.THRESH_BINARY)
return np.array(tissue_mask)
def TissueMaskGeneration_BIN_OTSU(slide_obj, level):
img_RGB = np.transpose(np.array(slide_obj.read_region((0, 0),
level,
slide_obj.level_dimensions[level]).convert('RGB')),
axes=[1, 0, 2])
img_HSV = cv2.cvtColor(img_RGB, cv2.COLOR_BGR2HSV)
img_S = img_HSV[:, :, 1]
_,tissue_mask = cv2.threshold(img_S, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
return np.array(tissue_mask)
def labelthreshold(image, threshold=0.5):
np.place(image,image>=threshold, 1)
np.place(image,image<threshold, 0)
return np.uint8(image)
def calc_jacc_score(x,y,smoothing=1):
for var in [x,y]:
np.place(var,var==255,1)
numerator = np.sum(x*y)
denominator = np.sum(np.logical_or(x,y))
return (numerator+smoothing)/(denominator+smoothing)
# In[41]:
# DataLoader Implementation
class WSIStridedPatchDataset(Dataset):
"""
Data producer that generate all the square grids, e.g. 3x3, of patches,
from a WSI and its tissue mask, and their corresponding indices with
respect to the tissue mask
"""
def __init__(self, wsi_path, mask_path, label_path=None, image_size=256,
normalize=True, flip='NONE', rotate='NONE',
level=5, sampling_stride=16, roi_masking=True):
"""
Initialize the data producer.
Arguments:
wsi_path: string, path to WSI file
mask_path: string, path to mask file in numpy format OR None
label_mask_path: string, path to ground-truth label mask path in tif file or
None (incase of Normal WSI or test-time)
image_size: int, size of the image before splitting into grid, e.g. 768
patch_size: int, size of the patch, e.g. 256
crop_size: int, size of the final crop that is feed into a CNN,
e.g. 224 for ResNet
normalize: bool, if normalize the [0, 255] pixel values to [-1, 1],
mostly False for debuging purpose
flip: string, 'NONE' or 'FLIP_LEFT_RIGHT' indicating the flip type
rotate: string, 'NONE' or 'ROTATE_90' or 'ROTATE_180' or
'ROTATE_270', indicating the rotate type
level: Level to extract the WSI tissue mask
roi_masking: True: Multiplies the strided WSI with tissue mask to eliminate white spaces,
False: Ensures inference is done on the entire WSI
sampling_stride: Number of pixels to skip in the tissue mask, basically it's the overlap
fraction when patches are extracted from WSI during inference.
stride=1 -> consecutive pixels are utilized
stride= image_size/pow(2, level) -> non-overalaping patches
"""
self._wsi_path = wsi_path
self._mask_path = mask_path
self._label_path = label_path
self._image_size = image_size
self._normalize = normalize
self._flip = flip
self._rotate = rotate
self._level = level
self._sampling_stride = sampling_stride
self._roi_masking = roi_masking
self._preprocess()
def _preprocess(self):
self._slide = openslide.OpenSlide(self._wsi_path)
if self._label_path is not None:
self._label_slide = openslide.OpenSlide(self._label_path)
X_slide, Y_slide = self._slide.level_dimensions[0]
print("Image dimensions: (%d,%d)" %(X_slide,Y_slide))
factor = self._sampling_stride
if self._mask_path is not None:
mask_file_name = os.path.basename(self._mask_path)
if mask_file_name.endswith('.tiff'):
mask_obj = openslide.OpenSlide(self._mask_path)
self._mask = np.array(mask_obj.read_region((0, 0),
self._level,
mask_obj.level_dimensions[self._level]).convert('L')).T
np.place(self._mask,self._mask>0,255)
else:
# Generate tissue mask on the fly
self._mask = TissueMaskGeneration(self._slide, self._level)
# morphological operations ensure the holes are filled in tissue mask
# and minor points are aggregated to form a larger chunk
self._mask = BinMorphoProcessMask(np.uint8(self._mask))
# self._all_bbox_mask = get_all_bbox_masks(self._mask, factor)
# self._largest_bbox_mask = find_largest_bbox(self._mask, factor)
# self._all_strided_bbox_mask = get_all_bbox_masks_with_stride(self._mask, factor)
X_mask, Y_mask = self._mask.shape
# print (self._mask.shape, np.where(self._mask>0))
# imshow(self._mask.T)
# cm17 dataset had issues with images being power's of 2 precisely
# if X_slide != X_mask or Y_slide != Y_mask:
print('Mask (%d,%d) and Slide(%d,%d) '%(X_mask,Y_mask,X_slide,Y_slide))
if X_slide // X_mask != Y_slide // Y_mask:
raise Exception('Slide/Mask dimension does not match ,'
' X_slide / X_mask : {} / {},'
' Y_slide / Y_mask : {} / {}'
.format(X_slide, X_mask, Y_slide, Y_mask))
self._resolution = np.round(X_slide * 1.0 / X_mask)
if not np.log2(self._resolution).is_integer():
raise Exception('Resolution (X_slide / X_mask) is not power of 2 :'
' {}'.format(self._resolution))
# all the idces for tissue region from the tissue mask
self._strided_mask = np.ones_like(self._mask)
ones_mask = np.zeros_like(self._mask)
ones_mask[::factor, ::factor] = self._strided_mask[::factor, ::factor]
if self._roi_masking:
self._strided_mask = ones_mask*self._mask
# self._strided_mask = ones_mask*self._largest_bbox_mask
# self._strided_mask = ones_mask*self._all_bbox_mask
# self._strided_mask = self._all_strided_bbox_mask
else:
self._strided_mask = ones_mask
# print (np.count_nonzero(self._strided_mask), np.count_nonzero(self._mask[::factor, ::factor]))
# imshow(self._strided_mask.T, self._mask[::factor, ::factor].T)
# imshow(self._mask.T, self._strided_mask.T)
self._X_idcs, self._Y_idcs = np.where(self._strided_mask)
self._idcs_num = len(self._X_idcs)
def __len__(self):
return self._idcs_num
def save_scaled_imgs(self):
scld_dms = self._slide.level_dimensions[2]
self._slide_scaled = self._slide.read_region((0,0),2,scld_dms)
if self._label_path is not None:
self._label_scaled = np.array(self._label_slide.read_region((0,0),4,scld_dms).convert('L'))
np.place(self._label_scaled,self._label_scaled>0,255)
def save_get_mask(self, save_path):
np.save(save_path, self._mask)
def get_mask(self):
return self._mask
def get_strided_mask(self):
return self._strided_mask
def __getitem__(self, idx):
x_coord, y_coord = self._X_idcs[idx], self._Y_idcs[idx]
x_max_dim,y_max_dim = self._slide.level_dimensions[0]
# x = int(x_coord * self._resolution)
# y = int(y_coord * self._resolution)
x = int(x_coord * self._resolution - self._image_size//2)
y = int(y_coord * self._resolution - self._image_size//2)
# x = int(x_coord * self._resolution)
# y = int(y_coord * self._resolution)
#If Image goes out of bounds
if x>(x_max_dim - self._image_size):
x = x_max_dim - self._image_size
elif x<0:
x = 0
if y>(y_max_dim - self._image_size):
y = y_max_dim - self._image_size
elif y<0:
y = 0
#Converting pil image to np array transposes the w and h
img = np.transpose(self._slide.read_region(
(x, y), 0, (self._image_size, self._image_size)).convert('RGB'),[1,0,2])
if self._label_path is not None:
label_img = self._label_slide.read_region(
(x, y), 0, (self._image_size, self._image_size)).convert('L')
else:
#print('No label img')
label_img = Image.fromarray(np.zeros((self._image_size, self._image_size), dtype=np.uint8))
if self._flip == 'FLIP_LEFT_RIGHT':
img = img.transpose(Image.FLIP_LEFT_RIGHT)
label_img = label_img.transpose(Image.FLIP_LEFT_RIGHT)
if self._rotate == 'ROTATE_90':
img = img.transpose(Image.ROTATE_90)
label_img = label_img.transpose(Image.ROTATE_90)
if self._rotate == 'ROTATE_180':
img = img.transpose(Image.ROTATE_180)
label_img = label_img.transpose(Image.ROTATE_180)
if self._rotate == 'ROTATE_270':
img = img.transpose(Image.ROTATE_270)
label_img = label_img.transpose(Image.ROTATE_270)
# PIL image: H x W x C
img = np.array(img, dtype=np.float32)
label_img = np.array(label_img, dtype=np.uint8)
np.place(label_img, label_img>0, 255)
if self._normalize:
img = (img - 128.0)/128.0
return (img, x, y, label_img)
#CONFIG
# batch_size = 40
# image_size = 256
# sampling_stride = 128
image_size = 1024
sampling_stride = 512
kfold_k = 5
fold = 'all'
out_dir_root = '../../results/saved_imgs'
#Model loading
os.environ["CUDA_VISIBLE_DEVICES"] = '2'
core_config = tf.ConfigProto()
core_config.gpu_options.allow_growth = False
# core_config.gpu_options.per_process_gpu_memory_fraction=0.47
session =tf.Session(config=core_config)
K.set_session(session)
#Inception
def load_incep_resnet(model_path):
model = get_inception_resnet_v2_unet_softmax((None, None), weights=None)
model.load_weights(model_path)
print ("Loaded Model Weights %s" % model_path)
return model
def load_unet_densenet(model_path):
model = unet_densenet121((None, None), weights=None)
model.load_weights(model_path)
print ("Loaded Model Weights %s" % model_path)
return model
def load_deeplabv3(model_path, OS):
model = Deeplabv3(input_shape=(image_size, image_size, 3),weights=None,classes=2,activation='softmax',backbone='xception',OS=OS)
model.load_weights(model_path)
print ("Loaded Model Weights %s" % model_path)
return model
# model_path = glob.glob('../../results/saved_models/incep_viable_200k/5fold_0/sel-model.10*')[0]
model_dict = {
'train_14': load_incep_resnet('../../results/saved_models/incep_imagenet_200k/5fold_0/sel-model.11-0.16.h5'),
# 'train_12': load_incep_resnet('../../results/saved_models/incep_imagenet_200k/5fold_0/sel-model.12-0.17.h5'),
'train_15': load_unet_densenet('../../results/saved_models/dense_viable_200k/5fold_3/model.28-0.08.h5'),
'train_16': load_deeplabv3('../../results/saved_models/deeplab_viable_200k/5fold_4/sel-model.20-0.13.h5',OS=16),
}
model_keys = list(model_dict.keys())
ensemble_key = 'train_17'
model_dict[ensemble_key] = 'ensemble'
#Stitcher
sampling_stride = 1024
batch_size = 1
start_time = time.time()
mode = 'validation'
if fold == 'all':
sample_ids = os.listdir('../../data/raw-data/train')
sample_ids.sort()
else:
sample_ids = [ x.split('/')[-2] for x in list(pd.read_csv('../../data/raw-data/cross_val_splits_%d_whole/%s_fold_%d.csv'%(kfold_k,mode,fold))['Image_Path'])]
sample_ids.remove('Training_phase_1_016')
print(sample_ids)
print("Total %d" %len(sample_ids))
# sample_ids = sample_ids[1:]
# wsi_paths = glob.glob('../../data/raw-data/valid/*svs')
# print("Total %d" %len(wsi_paths))
# for i,wsi_path in enumerate(wsi_paths[1:2]):
total_jacc_score_dict = {}
for key in model_dict.keys():
total_jacc_score_dict[key] = 0
for i,sample_id in enumerate(sample_ids):
sample_dir = os.path.join('..','..','data','raw-data','train',sample_id)
wsi_path = glob.glob(os.path.join(sample_dir,'*.svs'))[0]
label_path = glob.glob(os.path.join(sample_dir,'*viable*.tiff'))[0]
label_img = io.imread(glob.glob(os.path.join(sample_dir,'*viable*.tif'))[0])
print(i+1,'/', len(sample_ids),sample_id)
wsi_obj = openslide.OpenSlide(wsi_path)
x_max_dim,y_max_dim = wsi_obj.level_dimensions[0]
scld_dms = wsi_obj.level_dimensions[2]
scale = lambda x: cv2.resize(x,tuple(reversed(scld_dms))).T
mask_path = label_path
start_time = time.time()
dataset_obj = WSIStridedPatchDataset(wsi_path,
mask_path,
label_path,
image_size=image_size,
normalize=True,
flip=None, rotate=None,
level=2, sampling_stride=sampling_stride//16, roi_masking=True)
dataloader = DataLoader(dataset_obj, batch_size=batch_size, num_workers=8, drop_last=True)
dataset_obj.save_scaled_imgs()
score_a = 0
score_b = 0
print("Total iterations: %d %d" % (dataloader.__len__(), dataloader.dataset.__len__()))
for i,(data, xes, ys, label) in enumerate(dataloader):
tmp_pls= lambda x: x + image_size
tmp_mns= lambda x: x
image_patches = data.cpu().data.numpy()
pred_map_dict = {}
pred_map_dict[ensemble_key] = 0
for key in model_keys:
pred_map_dict[key] = model_dict[key].predict(image_patches,verbose=0,batch_size=batch_size)
pred_map_dict[ensemble_key]+=pred_map_dict[key]
pred_map_dict[ensemble_key]/=len(model_keys)
actual_batch_size = image_patches.shape[0]
for j in range(actual_batch_size):
x = int(xes[j])
y = int(ys[j])
wsi_img = np.uint8(image_patches[j]*128+128)
patch_mask = TissueMaskGenerationPatch(wsi_img)
# prediction = red_map[j,:,:,:]
# prediction = post_process_crf(wsi_img,prediction,2)
keys = ['train_16','train_17']
for key in keys:
prediction = pred_map_dict[key][j,:,:,1]
prediction_pm=prediction*patch_mask
close_kernel = np.ones((20, 20), dtype=np.uint8)
prediction_fill = cv2.morphologyEx(np.array(prediction), cv2.MORPH_CLOSE, close_kernel)
prediction_fill_pm = prediction_fill*patch_mask
print(key)
prediction = labelthreshold(prediction,0.5)
prediction_pm = labelthreshold(prediction_pm,0.5)
prediction_fill = labelthreshold(prediction_fill,0.5)
prediction_fill_pm = labelthreshold(prediction_fill_pm,0.5)
label_patch = label_img.T[x:x+1024,y:y+1024]
print('normal',calc_jacc_score(prediction,label_patch))
print('pm',calc_jacc_score(prediction_pm,label_patch))
print('prediction_fill',calc_jacc_score(prediction_fill,label_patch))
print('prediction_fill_pm',calc_jacc_score(prediction_fill_pm,label_patch))
if calc_jacc_score(prediction_pm,label_patch) > calc_jacc_score(prediction_fill_pm,label_patch):
score_a+=1
else:
score_b+=1
print(score_a,score_b)
# imshow(wsi_img,prediction,prediction_pm,prediction_fill_pm,prediction_fill_pm,label_patch,axis_off=True)
imshow(wsi_img,prediction,patch_mask,prediction_pm,label_patch,axis_off=True)
# images = [wsi_img,prediction,patch_mask,prediction_pm,label_patch]
# for disp_img in images:
# # import ipdb; ipdb.set_trace()
# imshow(disp_img,axis_off=True)
if i > 5:
break
#Stitcher
sampling_stride = 1024
batch_size = 1
start_time = time.time()
mode = 'validation'
if fold == 'all':
sample_ids = os.listdir('../../data/raw-data/train')
sample_ids.sort()
else:
sample_ids = [ x.split('/')[-2] for x in list(pd.read_csv('../../data/raw-data/cross_val_splits_%d_whole/%s_fold_%d.csv'%(kfold_k,mode,fold))['Image_Path'])]
sample_ids.remove('Training_phase_1_016')
print(sample_ids)
print("Total %d" %len(sample_ids))
# sample_ids = sample_ids[1:]
# wsi_paths = glob.glob('../../data/raw-data/valid/*svs')
# print("Total %d" %len(wsi_paths))
# for i,wsi_path in enumerate(wsi_paths[1:2]):
total_jacc_score_dict = {}
for key in model_dict.keys():
total_jacc_score_dict[key] = 0
for i,sample_id in enumerate(sample_ids):
sample_dir = os.path.join('..','..','data','raw-data','train',sample_id)
wsi_path = glob.glob(os.path.join(sample_dir,'*.svs'))[0]
label_path = glob.glob(os.path.join(sample_dir,'*viable*.tiff'))[0]
label_img = io.imread(glob.glob(os.path.join(sample_dir,'*viable*.tif'))[0])
print(i+1,'/', len(sample_ids),sample_id)
wsi_obj = openslide.OpenSlide(wsi_path)
x_max_dim,y_max_dim = wsi_obj.level_dimensions[0]
scld_dms = wsi_obj.level_dimensions[2]
scale = lambda x: cv2.resize(x,tuple(reversed(scld_dms))).T
mask_path = label_path
start_time = time.time()
dataset_obj = WSIStridedPatchDataset(wsi_path,
mask_path,
label_path,
image_size=image_size,
normalize=True,
flip=None, rotate=None,
level=2, sampling_stride=sampling_stride//16, roi_masking=True)
dataloader = DataLoader(dataset_obj, batch_size=batch_size, num_workers=8, drop_last=True)
dataset_obj.save_scaled_imgs()
score_a = 0
score_b = 0
print("Total iterations: %d %d" % (dataloader.__len__(), dataloader.dataset.__len__()))
for i,(data, xes, ys, label) in enumerate(dataloader):
tmp_pls= lambda x: x + image_size
tmp_mns= lambda x: x
image_patches = data.cpu().data.numpy()
pred_map_dict = {}
pred_map_dict[ensemble_key] = 0
for key in model_keys:
pred_map_dict[key] = model_dict[key].predict(image_patches,verbose=0,batch_size=batch_size)
pred_map_dict[ensemble_key]+=pred_map_dict[key]
pred_map_dict[ensemble_key]/=len(model_keys)
actual_batch_size = image_patches.shape[0]
for j in range(actual_batch_size):
x = int(xes[j])
y = int(ys[j])
wsi_img = np.uint8(image_patches[j]*128+128)
patch_mask = TissueMaskGenerationPatch(wsi_img)
# prediction = red_map[j,:,:,:]
# prediction = post_process_crf(wsi_img,prediction,2)
keys = ['train_16','train_17']
for key in keys:
prediction = pred_map_dict[key][j,:,:,1]
prediction = labelthreshold(prediction,0.5)
prediction_pm=prediction*patch_mask
close_kernel = np.ones((20, 20), dtype=np.uint8)
prediction_fill = cv2.morphologyEx(np.array(prediction), cv2.MORPH_CLOSE, close_kernel)
prediction_fill_pm = prediction_fill*patch_mask
print(key)
label_patch = label_img.T[x:x+1024,y:y+1024]
print('normal',calc_jacc_score(prediction,label_patch))
print('pm',calc_jacc_score(prediction_pm,label_patch))
print('prediction_fill',calc_jacc_score(prediction_fill,label_patch))
print('prediction_fill_pm',calc_jacc_score(prediction_fill_pm,label_patch))
if calc_jacc_score(prediction_pm,label_patch) > calc_jacc_score(prediction_fill_pm,label_patch):
score_a+=1
else:
score_b+=1
print(score_a,score_b)
imshow(prediction,prediction_pm,prediction_fill,prediction_fill_pm,label_patch)
if i > 20:
break
wsi_obj.read_region((0,0),2,wsi_obj.level_dimensions[2])
Image.fromarray(dataset_obj._label_scaled)
nb_components, output, stats, centroid = cv2.connectedComponentsWithStats(dataset_obj._label_scaled, connectivity=8)
sizes = stats[1:, -1]; nb_components = nb_components - 1
# minimum size of particles we want to keep (number of pixels)
# here it is derived from the component sizes, but you can set it as you want, e.g. the mean of the sizes
min_size = np.mean(sizes)/2
#your answer image
tissue_mask_S_cc = np.zeros((output.shape))
#for every component in the image, you keep it only if it's above min_size
for i in range(0, nb_components):
if sizes[i] >= min_size:
tissue_mask_S_cc[output == i + 1] = 1
# tissue_mask_S_ccd = cv2.dilate(np.uint8(tissue_mask_S_cc), morpho_kernel, iterations = 5)
stats.shape
Image.fromarray(tissue_mask_S_cc.astype('uint8')*255)
np.unique(output*256)
Image.fromarray(output.astype('uint8')*256)
```
| github_jupyter |
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
```
## Internal stability example 2
### How to use this notebook?
Try to change the dynamic matrix $A$ of the stable linear system below in order to obtain an unstable system and change the initial conditions in order to show the divergent mode(s).
$$
\dot{x} = \underbrace{\begin{bmatrix}0&1\\-0.1&-2\end{bmatrix}}_{A}x
$$
Try to answer:
- How can a suitable initial state be built?
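Independently of the interactive widgets below, stability can be checked directly: the system $\dot{x}=Ax$ is asymptotically stable if and only if all eigenvalues of $A$ have negative real part. A minimal NumPy sketch for the default matrix above:

```
import numpy as np

# Dynamic matrix of the default (stable) system above
A = np.array([[0.0, 1.0],
              [-0.1, -2.0]])

# Asymptotic stability <=> all eigenvalues have negative real part
eigvals = np.linalg.eigvals(A)
is_stable = bool(np.all(eigvals.real < 0))
print(eigvals, is_stable)
```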
```
%matplotlib inline
import control as control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#matrixWidget is a matrix-looking widget built with a VBox of HBox(es) that returns a NumPy array as value
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overload class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
# Preparatory cell
A = numpy.matrix([[0.,1.],[-1.0/10.0,-2.0]])
X0 = numpy.matrix([[0.0],[0.0]])
Aw = matrixWidget(2,2)
Aw.setM(A)
X0w = matrixWidget(2,1)
X0w.setM(X0)
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have interactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Main cell
def main_callback(A, X0, DW):
sols = numpy.linalg.eig(A)
sys = sss(A,[[0],[1]],[1,0],0)
pole = control.pole(sys)
if numpy.real(pole[0]) != 0:
p1r = abs(numpy.real(pole[0]))
else:
p1r = 1
if numpy.real(pole[1]) != 0:
p2r = abs(numpy.real(pole[1]))
else:
p2r = 1
if numpy.imag(pole[0]) != 0:
p1i = abs(numpy.imag(pole[0]))
else:
p1i = 1
if numpy.imag(pole[1]) != 0:
p2i = abs(numpy.imag(pole[1]))
else:
p2i = 1
print('A\'s eigenvalues are:',round(sols[0][0],4),'and',round(sols[0][1],4))
#T = numpy.linspace(0, 60, 1000)
T, yout, xout = control.initial_response(sys,X0=X0,return_x=True)
fig = plt.figure("Free response", figsize=(16,5))
ax = fig.add_subplot(121)
plt.plot(T,xout[0])
plt.grid()
ax.set_xlabel('time [s]')
ax.set_ylabel(r'$x_1$')
ax1 = fig.add_subplot(122)
plt.plot(T,xout[1])
plt.grid()
ax1.set_xlabel('time [s]')
ax1.set_ylabel(r'$x_2$')
alltogether = widgets.VBox([widgets.HBox([widgets.Label('$A$:',border=3),
Aw,
widgets.Label(' ',border=3),
widgets.Label('$X_0$:',border=3),
X0w,
START])])
out = widgets.interactive_output(main_callback, {'A':Aw, 'X0':X0w, 'DW':DW})
out.layout.height = '400px'
display(out, alltogether)
#create dummy widget 2
DW2 = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
DW2.value = -1
#create button widget
START2 = widgets.Button(
description='Show answers',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click to view the answers',
icon='check'
)
def on_start_button_clicked2(b):
#This is a workaround to have interactive_output call the callback:
# force the value of the dummy widget to change
if DW2.value> 0 :
DW2.value = -1
else:
DW2.value = 1
pass
START2.on_click(on_start_button_clicked2)
def main_callback2(DW2):
if DW2 > 0:
display(Markdown(r'''>Answer: The initial state can be built as a linear combination of the eigenvectors associated to the unstable poles.
$$ $$
Example:
$$
A = \begin{bmatrix} -1 & 1 \\ 0 & 3 \end{bmatrix}, \quad x_0 = \begin{bmatrix} \frac{1}{4} \\ 1 \end{bmatrix} \text{where $x_0$ is the eigenvector associated to the unstable pole $3$ .}
$$'''))
else:
display(Markdown(''))
#create a graphic structure to hold all widgets
alltogether2 = widgets.VBox([START2])
out2 = widgets.interactive_output(main_callback2,{'DW2':DW2})
#out.layout.height = '300px'
display(out2,alltogether2)
```
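As a quick numerical check of the example shown in the answer: for $A = \begin{bmatrix}-1 & 1\\0 & 3\end{bmatrix}$ the vector $x_0 = (1/4,\, 1)^T$ satisfies $Ax_0 = 3x_0$, so the free response starting from $x_0$ is $x(t) = e^{3t}x_0$, which diverges.

```
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, 3.0]])
x0 = np.array([0.25, 1.0])

# x0 is an eigenvector for the unstable eigenvalue 3
print(A @ x0, 3 * x0)  # both are [0.75, 3.0]
```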
| github_jupyter |
# Collaborative Filtering Algorithm
## Movie Recommender System using Collaborative Filtering Algorithm
### This is an implementation of the Collaborative Filtering Algorithm from scratch, based on Andrew Ng's lecture on the corresponding topic on Coursera.
### Dataset source: https://www.kaggle.com/grouplens/movielens-20m-dataset
```
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import os
from textwrap import wrap
# Set default fontsize and colors for graphs
SMALL_SIZE, MEDIUM_SIZE, BIG_SIZE = 8, 12, 16
plt.rc('font', size=MEDIUM_SIZE)
plt.rc('axes', titlesize=BIG_SIZE)
plt.rc('axes', labelsize=MEDIUM_SIZE)
plt.rc('xtick', labelsize=MEDIUM_SIZE)
plt.rc('ytick', labelsize=MEDIUM_SIZE)
plt.rc('legend', fontsize=SMALL_SIZE)
plt.rc('figure', titlesize=BIG_SIZE)
my_colors = 'rgbkymc'
# Disable scrolling for long output
from IPython.display import display, Javascript
disable_js = """
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
"""
display(Javascript(disable_js))
```
## (1) Read and Prepare Data
### Read "movie" and "rating" dataset
```
# Read the input training data
input_data_file_movie = r"C:\Study\DataSets\movielens-20m-dataset\movie.csv"
input_data_file_rating = r"C:\Study\DataSets\movielens-20m-dataset\rating.csv"
movie_data_all = pd.read_csv(input_data_file_movie)
rating_data_all = pd.read_csv(input_data_file_rating)
movie_data_all.head(5)
rating_data_all.head(5)
print("Total number of movies =", movie_data_all.shape[0])
print("Total number of unique movies =", len(movie_data_all.movieId.unique()))
print("")
print("Total number of user ratings =", rating_data_all.shape[0])
print("Total number of unique users =", len(rating_data_all.userId.unique()))
# Keep only required columns
movie_data_all = movie_data_all.drop(['genres'], axis=1)
rating_data_all = rating_data_all.drop(['timestamp'], axis=1)
# Test with a subset of data
# Fetch num_movies number of movies
#num_movies = 10
#movie_data = movie_data_all.iloc[:num_movies, :]
# Fetch only ratings corresponding to above movies
#rating_data = rating_data_all[rating_data_all.movieId.isin(movie_data.movieId)]
#print("Total number of movies to be used to training =", movie_data.shape[0])
#print("Total number of user ratings to be used for training =", rating_data.shape[0])
# Test with a subset of data
#num_movies = 500
#num_ratings = 10000
#movie_data = movie_data.iloc[:num_movies, :]
#rating_data = rating_data.iloc[:num_ratings, :]
```
### Select a few of the most popular movies, from two distinct genres. In this particular example, we considered movies of the genres "Action" and "Romance".
### The objective is to find out whether the collaborative filtering algorithm can successfully learn the features of these movies based on user ratings, such that we can clearly distinguish their genres and recommend accordingly.
```
# Pick top movies
top_action_movies = ['Dark Knight, The', 'Lord of the Rings: The Return of the King',
'Inception', 'Star Wars: Episode V - The Empire Strikes Back',
'Matrix, The']
top_romantic_movies = ['Notting Hill', r'Love Story \(1970\)', 'When Harry Met Sally',
r'Titanic \(1997\)', 'Pretty Woman']
top_movies = top_action_movies + top_romantic_movies
movie_data = movie_data_all[movie_data_all.title.str.contains('|'.join(top_movies))]
movie_data
# Pick all ratings
#num_ratings = 2000000
rating_data = rating_data_all.iloc[:, :]
```
### Merge movie and rating dataset based on movieId column
```
movie_rating_merged_data = movie_data.merge(rating_data, on='movieId', how='inner')
movie_rating_merged_data.head()
# Mean rating of a movie
movie_rating_merged_data[movie_rating_merged_data.title == 'Pretty Woman (1990)']['rating'].mean()
# Top 10 movies by mean rating
movie_rating_merged_data.groupby(['title'], sort=False)['rating'].mean().sort_values(ascending=False).head(10)
```
## (2) Build Collaborative Filtering Model
### Create a pivot table of movies (on rows) and corresponding user ratings (on columns). The pivot table will contain the ratings of only the selected movies.
### Thus, rows = movies and columns = users
```
movie_rating_merged_pivot = pd.pivot_table(movie_rating_merged_data,
index=['title'],
columns=['userId'],
values=['rating'],
dropna=False,
fill_value=0
)
movie_rating_merged_pivot.shape
Y = movie_rating_merged_pivot
```
### Create a matrix R, such that, R(i,j) = 1 iff User j has selected a rating for Movie i. R(i,j) = 0 otherwise.
```
R = np.ones(Y.shape)
no_rating_idx = np.where(Y == 0.0)
R[no_rating_idx] = 0
R
```
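Since missing ratings were filled with 0 in the pivot table, R can equivalently be built with a single boolean comparison. A sketch on a toy array (for the pivot table above, the same idea applies to `Y.values`):

```
import numpy as np

# Toy ratings: rows = movies, columns = users; 0 means "not rated"
Y_toy = np.array([[5.0, 0.0, 3.0],
                  [0.0, 4.0, 0.0]])

# R(i,j) = 1 where a rating exists, 0 otherwise
R_toy = (Y_toy != 0).astype(float)
print(R_toy)
```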
### Assign n_m (number of movies), n_u (number of users) and n_f (number of features)
```
n_u = Y.shape[1]
n_m = Y.shape[0]
n_f = 2
```
### Assign random initial values to movie and user parameters.
### X = parameters of movies (each row represents a movie)
### Theta = parameters of users (each row represents a user)
```
Initial_X = np.random.rand(n_m, n_f)
Initial_Theta = np.random.rand(n_u, n_f)
#print("Initial_X =", Initial_X)
#print("Initial_Theta =", Initial_Theta)
```
### Cost function or objective function of the collaborative filtering algorithm
```
# Cost Function
def collabFilterCostFunction(X, Theta, Y, R, reg_lambda):
cost = 0
error = (np.dot(X, Theta.T) - Y) * R
error_sq = np.power(error, 2)
cost = np.sum(np.sum(error_sq)) / 2
cost = cost + (reg_lambda/2) * ( np.sum(np.sum((np.power(X, 2))))
+ np.sum(np.sum((np.power(Theta, 2)))) )
return cost
```
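As a sanity check, the cost can be computed by hand for a tiny case (using a standalone copy of the same formula): one movie with features X = [1, 2], one user with parameters Theta = [0.5, 0.5], an observed rating of 3 and no regularisation. The prediction is 1*0.5 + 2*0.5 = 1.5, the error is -1.5, so the cost is (-1.5)^2 / 2 = 1.125.

```
import numpy as np

def collab_filter_cost(X, Theta, Y, R, reg_lambda):
    # Standalone copy of the cost function above
    error = (np.dot(X, Theta.T) - Y) * R
    cost = np.sum(np.power(error, 2)) / 2
    cost += (reg_lambda / 2) * (np.sum(np.power(X, 2)) + np.sum(np.power(Theta, 2)))
    return cost

X = np.array([[1.0, 2.0]])      # one movie, two features
Theta = np.array([[0.5, 0.5]])  # one user
Y = np.array([[3.0]])           # observed rating
R = np.array([[1.0]])           # the rating exists

print(collab_filter_cost(X, Theta, Y, R, reg_lambda=0))  # 1.125
```

With regularisation, e.g. reg_lambda = 2, the extra term is (2/2) * (1 + 4 + 0.25 + 0.25) = 5.5, giving 6.625 in total.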
### Computation of gradient descent for the collaborative filtering algorithm
```
# Gradient Descent
def collabFilterGradientDescent(X, Theta, Y, R, alpha, reg_lambda, num_iters):
cost_history = np.zeros([num_iters, 1])
for i in range(num_iters):
error = (np.dot(X, Theta.T) - Y) * R
X_grad = np.dot(error, Theta) + reg_lambda * X
Theta_grad = np.dot(error.T, X) + reg_lambda * Theta
X = X - alpha * X_grad
Theta = Theta - alpha * Theta_grad
cost_history[i] = collabFilterCostFunction(X, Theta, Y, R, reg_lambda)
return X, Theta, cost_history
```
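A good way to verify gradient expressions like these is a central finite-difference check. A standalone sketch, copying the same formulas (because R only contains 0s and 1s, the analytic gradient of the masked cost is exactly `E @ Theta + lam * X`):

```
import numpy as np

def cost(X, Theta, Y, R, lam):
    E = (X @ Theta.T - Y) * R
    return 0.5 * np.sum(E**2) + (lam / 2) * (np.sum(X**2) + np.sum(Theta**2))

def grad_X(X, Theta, Y, R, lam):
    E = (X @ Theta.T - Y) * R
    return E @ Theta + lam * X

rng = np.random.default_rng(0)
X = rng.random((3, 2)); Theta = rng.random((4, 2))
Y = rng.random((3, 4)); R = (rng.random((3, 4)) > 0.3).astype(float)
lam = 0.5

gX = grad_X(X, Theta, Y, R, lam)

# Central finite differences on each entry of X
eps = 1e-6
num_gX = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[i, j] += eps
        Xm[i, j] -= eps
        num_gX[i, j] = (cost(Xp, Theta, Y, R, lam) - cost(Xm, Theta, Y, R, lam)) / (2 * eps)

print(np.max(np.abs(gX - num_gX)))  # should be tiny
```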
## (3) Train the collaborative filtering model
```
# Tune hyperparameters
alpha = 0.0001
num_iters = 25000
reg_lambda = 0
# Perform gradient descent to find optimal parameters
X, Theta = Initial_X, Initial_Theta
X, Theta, cost_history = collabFilterGradientDescent(X, Theta, Y, R, alpha, reg_lambda, num_iters)
cost = collabFilterCostFunction(X, Theta, Y, R, reg_lambda)
print("Final cost =", cost)
```
### Plot cost vs number of iterations
```
fig, axes = plt.subplots(figsize=(15,6))
axes.plot(cost_history, 'k--')
axes.set_xlabel('# of iterations')
axes.set_ylabel('Cost')
plt.show()
```
### Since we have considered only 2 genres (and hence 2 features), we plot the learned feature parameters of movies to visualize the pattern.
### We find below that the algorithm has learnt the features pretty well, and hence the movies of the same genre are clustered together.
### In this particular example, we considered movies of genres "Action" and "Romance". From the visualization, it can be concluded that X-axis represents "Degree of Action" and Y-axis represents "Degree of Romance".
### As a next step, we can run K-Means clustering to further verify our understanding.
```
fig, axes = plt.subplots(figsize=(10,10))
axes.scatter(X[:,0], X[:,1], color='red', marker='D')
for val, movie in zip(X, Y.index):
axes.text(val[0], val[1], movie)
axes.set_xlabel('Degree of Action')
axes.set_ylabel('Degree of Romance')
axes.set_title('Movie Matrix')
plt.show()
```
### For a random user, what are her preferred movies, and what is our recommendation for her based on the result of the collaborative filtering algorithm?
```
user_idx = np.random.randint(n_u)
pred_rating = []
print("Original ratings of a user:\n", Y.iloc[:,user_idx].sort_values(ascending=False))
predicted_ratings = np.dot(X, Theta.T)
predicted_ratings = sorted(zip(predicted_ratings[:,user_idx], Y.index), reverse=True)
print("\nPredicted ratings of the same user:")
_ = [print(rating, movie) for rating, movie in predicted_ratings]
```
| github_jupyter |
## Topic Modelling
The goal of this notebook is to find the topics on which people are talking within our dataset with tweets about vaccines. There are many models available for topic modelling, but in this Notebook we've focused only on **LDA (Latent Dirichlet Allocation)**.
For data protection purposes, the dataset used in this notebook is not provided here. If you want to replicate the notebook using this dataset, please contact the authors.
#### Input
- A dataset with tweets ready to be used by our LDA algorithm: `vacc_proc_for_topicMdl.csv`
#### Output
- An html where we can visualise the discovered topics: `Vaccs_Notts_topic_7.html`
- A dataset with tweets mapped to their main topic: `topics_mapped_Vaccs_Notts.csv`
```
# ----------------------------------------
# Libraries need to be installed
# ----------------------------------------
!pip install pyLDAvis
!pip install gensim
!pip install spacy
!python -m spacy download en_core_web_sm
# ----------------------------------------
# For File operations
# ----------------------------------------
import zipfile
import os
# ----------------------------------------
# Data read, write and other operations on Texts
# ----------------------------------------
import pandas as pd
import numpy as np
import string
import re
import unicodedata
from pprint import pprint
# ----------------------------------------
# For Libaries for NLP applications
# ----------------------------------------
import nltk
from nltk.corpus import stopwords
from nltk.util import ngrams
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
import gensim
import spacy
spcy = spacy.load('/opt/conda/envs/Python-3.6-WMLCE/lib/python3.6/site-packages/en_core_web_sm/en_core_web_sm-2.3.1')
from gensim import corpora
from gensim.models import CoherenceModel
# ----------------------------------------
# For ignoring some warnings
# ----------------------------------------
import warnings
warnings.filterwarnings('ignore')
def wrng():
warnings.warn("deprecated", DeprecationWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
wrng()
# ----------------------------------------
# For Visualizations
# ----------------------------------------
import matplotlib
import matplotlib.pyplot as plt
import pyLDAvis
import pyLDAvis.gensim as pygen
pyLDAvis.enable_notebook()
# ----------------------------------------
# Need to download some extras
# ----------------------------------------
nltk.download('punkt')
nltk.download('stopwords')
```
### Load Dataset
Here we use the dataset produced by the `TopicModelling_Vaccine_Preprocessing` notebook.
```
processed_tweets_Vaccs_ = pd.read_csv("/project_data/data_asset/vacc_proc_for_topicMdl.csv")
pd.set_option('display.max_columns', None) # Showing all columns for that dataframe
```
### Filtering data related to 'Nottingham'
```
notts_tweets_Vaccs_ = processed_tweets_Vaccs_[processed_tweets_Vaccs_["City"] == "Nottingham"]
```
### Part-of-Speech tagging
Filtering words based on particular parts of speech, as other parts of speech could generate noise in the topics.
```
sentences = []
for line in notts_tweets_Vaccs_["Clean_sentence_Comment"]:
pos_ = spcy(line)
sentence2 = " ".join([token.text for token in pos_ if (token.pos_ == "ADJ" or token.pos_ == "NOUN" or token.pos_ == "PROPN" or token.pos_ == "VERB")])
sentences.append(sentence2)
notts_tweets_Vaccs_["Clean_sentence_Comment"] = sentences
```
### Filtering words
Filtering out the least and most frequent words: a word is removed if it appears in fewer than `no_below` documents, or in more than `no_above` (a fraction) of all documents.
```
words = [text.split() for text in notts_tweets_Vaccs_["Clean_sentence_Comment"]]
dict_words = corpora.Dictionary(words)
dict_words.filter_extremes(no_below=5, no_above=0.2)
dict_words.compactify()
myCorpus_notts = [dict_words.doc2bow(word) for word in words]
```
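The filtering rule is easy to state without gensim: a token is kept only if its document frequency is at least `no_below` (an absolute count) and at most `no_above` (a fraction of the corpus). A pure-Python sketch of the same rule on a toy corpus:

```
from collections import Counter

docs = [["covid", "vaccine", "good"],
        ["vaccine", "safe"],
        ["vaccine", "news", "good"],
        ["weather", "news"]]

# Document frequency: number of documents each token appears in
df = Counter(tok for doc in docs for tok in set(doc))

no_below, no_above = 2, 0.6
kept = {tok for tok, n in df.items()
        if n >= no_below and n / len(docs) <= no_above}
print(sorted(kept))  # ['good', 'news'] -- 'vaccine' is too frequent, the rest too rare
```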
### Training LDA Model
Here we train the LDA model and compute the coherence metric and log-perplexity for a range of topic numbers and other hyperparameters. We've focused on the coherence metric to choose the best model.
```
MulLda_coherent_scores = []
MulLda_topics_val = []
MulLda_perplexity_val = []
alpha_val = [0.05, 0.1, 0.3, 0.5, 0.8, 1]
MulLda_alphas = []
for topics in range(3, 15, 2):
for alph in alpha_val:
lda_model_multi_notts = gensim.models.LdaMulticore(corpus = myCorpus_notts,
id2word = dict_words,
random_state = 42,
num_topics = topics,
passes=10,
chunksize=512,
alpha=alph,
offset=64,
eta=None,
iterations=100,
per_word_topics=True,
workers=6)
coherence_model_MulLda_notts = CoherenceModel(model = lda_model_multi_notts,
texts = words,
dictionary = dict_words,
coherence = 'c_v')
coherence_MulLda = coherence_model_MulLda_notts.get_coherence()
perplexity_MulLda = lda_model_multi_notts.log_perplexity(myCorpus_notts)
MulLda_topics_val.append(topics)
MulLda_alphas.append(alph)
MulLda_coherent_scores.append(coherence_MulLda)
MulLda_perplexity_val.append(perplexity_MulLda)
df_mulLDA_notts = pd.DataFrame(list(zip(MulLda_topics_val, MulLda_alphas, MulLda_coherent_scores, MulLda_perplexity_val)),
columns = ["MulLda_Topic_Num", "MulLda_alpha_val", "MulLda_Coherent_score", "MulLda_Perplexity_val"])
df_mulLDA_notts.sort_values("MulLda_Coherent_score", axis = 0, ascending = False,
inplace = True)
df_mulLDA_notts.head()
```
### Final Model
After choosing the best hyperparameters from the dataframe above based on the coherence metric, we can train our final model. Note that we haven't just relied on the highest value of this metric, but have rather chosen, from the top models, the one that makes the most sense based on our experience.
The cell below will output the words related to some topics and clusters of topics (visualization).
```
multi_lda_final_notts = gensim.models.LdaMulticore(corpus = myCorpus_notts,
id2word = dict_words,
random_state = 42,
num_topics = 7,
passes=10,
chunksize=512,
alpha=0.05,
offset=64,
eta=None,
iterations=100,
per_word_topics=True,
workers=6)
pprint(multi_lda_final_notts.print_topics(num_topics = 7, num_words=20))
print("\n\033[91m" + "\033[1m" +"------- Visualization -----------\n")
lda_Mul_vis_notts = pygen.prepare(multi_lda_final_notts, myCorpus_notts, dict_words)
pyLDAvis.display(lda_Mul_vis_notts)
```
### Saving Topics as html
```
pyLDAvis.save_html(lda_Mul_vis_notts, "/project_data/data_asset/Vaccs_Notts_topic_7.html")
```
### Mapping Tweets with Topics
```
topicss = []
probss = []
for i, row in enumerate(multi_lda_final_notts[myCorpus_notts]): # gives topic probabilities
row = sorted(row[0], key=lambda x :(x[1]), reverse=True) # sorting by descending probability
for j, (topic_num, probablity) in enumerate(row): # j=0 --> highest probability, topic_num --> which topic it falls under
if j == 0:
topicss.append(topic_num)
probss.append(probablity)
notts_tweets_Vaccs_["Topic_Num"] = topicss
notts_tweets_Vaccs_["Topic_prob"] = probss
notts_tweets_Vaccs_.head()
```
### Final Dataset
We've given the topics descriptive names and mapped them to the tweets.
```
"""
list_ - list whose values need to be concatenated into a string
"""
def ListToStr(list_):
str_val = ""
for item in list_:
str_val += item
return str_val
dts = []
for dttt in notts_tweets_Vaccs_["Date"]:
yrs_ = re.findall(r"\d{4}", dttt)
dts.append(ListToStr(yrs_))
notts_tweets_Vaccs_["year"] = dts
notts_tweets_Vaccs_["Date"] = pd.to_datetime(notts_tweets_Vaccs_["Date"]).dt.date
tpc_nms = []
for tpc_ in notts_tweets_Vaccs_["Topic_Num"].values.tolist():
if tpc_ == 0:
tpc_nms.append("Effects of virus and vaccine")
if tpc_ == 1:
tpc_nms.append("Politics in US around vaccine")
if tpc_ == 2:
tpc_nms.append("Enforcement of vaccines")
if tpc_ == 3:
tpc_nms.append("Politics in UK around vaccine")
if tpc_ == 4:
tpc_nms.append("Science around Vaccine")
if tpc_ == 5:
tpc_nms.append("Public affairs")
if tpc_ == 6:
tpc_nms.append("Distribution of vaccine and logistics")
notts_tweets_Vaccs_["Topic_Names"] = tpc_nms
tyms = []
for tym in notts_tweets_Vaccs_["Date"].values.tolist():
tym_ = tym.strftime('%d-%b')
tyms.append(tym_)
notts_tweets_Vaccs_["Date_month"] = tyms
```
### Saving Final dataset
```
notts_tweets_Vaccs_.to_csv('/project_data/data_asset/topics_mapped_Vaccs_Notts.csv', index = False)
```
### Author:
- **Ananda Pal** is a Data Scientist and Performance Test Analyst at IBM, where he specialises in Data Science and Machine Learning Solutions
Copyright © IBM Corp. 2020. Licensed under the Apache License, Version 2.0. Released as licensed Sample Materials.
| github_jupyter |
# Running Code
First and foremost, the IPython Notebook is an interactive environment for writing and running code. IPython is capable of running code in a wide range of languages. However, this notebook, and the default kernel in IPython 2.0, runs Python code.
## Code cells allow you to enter and run Python code
Run a code cell using `Shift-Enter` or pressing the <button class='btn btn-default btn-xs'><i class="icon-play fa fa-play"></i></button> button in the toolbar above:
```
a = 10
print(a)
```
There are two other keyboard shortcuts for running code:
* `Alt-Enter` runs the current cell and inserts a new one below.
* `Ctrl-Enter` runs the current cell and enters command mode.
## Managing the IPython Kernel
Code is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above.
```
import time
time.sleep(10)
```
If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter:
```
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system
# equivalent calls can be made on Windows
dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
libc = CDLL("libc.%s" % dll)
libc.time(-1) # BOOM!!
```
## Cell menu
The "Cell" menu has a number of menu items for running code in different ways. These include:
* Run and Select Below
* Run and Insert Below
* Run All
* Run All Above
* Run All Below
## Restarting the kernels
The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above.
## sys.stdout and sys.stderr
The stdout and stderr streams are displayed as text in the output area.
```
print("hi, stdout")
from __future__ import print_function
print('hi, stderr', file=sys.stderr)
```
## Output is asynchronous
All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.
```
import time, sys
for i in range(8):
print(i)
time.sleep(0.5)
```
## Large outputs
To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output:
```
for i in range(50):
print(i)
```
Beyond a certain point, output will scroll automatically:
```
for i in range(500):
print(2**i - 1)
```
| github_jupyter |
# Writing OSEM (or another reconstruction algorithm) yourself with SIRF
This notebook invites you to write MLEM and OSEM yourself using SIRF functionality, i.e. Do It Yourself OSEM!
You should have completed the [OSEM_reconstruction notebook](OSEM_reconstruction.ipynb) first. The [ML_reconstruction notebook](ML_reconstruction.ipynb) might help as well.
The notebook is currently set-up to use prepared data with a single slice of an XCAT phantom, with a low resolution scanner, such that all results can be obtained easily on a laptop. Of course, your code will have to work for any data.
Authors: Kris Thielemans
First version: June 2021
CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF).
Copyright 2021 University College London.
This is software developed for the Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging (http://www.ccpsynerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```
#%% make sure figures appears inline and animations works
%matplotlib notebook
# Setup the working directory for the notebook
import notebook_setup
from sirf_exercises import cd_to_working_dir
cd_to_working_dir('PET', 'OSEM_reconstruction')
#%% Initial imports etc
import numpy
import matplotlib.pyplot as plt
import os
import sirf.STIR as pet
from sirf.Utilities import examples_data_path
from sirf_exercises import exercises_data_path
# define the directory with input files for this notebook
data_path = os.path.join(examples_data_path('PET'), 'thorax_single_slice')
# set-up redirection of STIR messages to files
msg_red = pet.MessageRedirector('info.txt', 'warnings.txt', 'errors.txt')
```
## We will first create some simulated data from ground-truth images
see previous notebooks for more information.
```
#%% Read in images
image = pet.ImageData(os.path.join(data_path, 'emission.hv'))
attn_image = pet.ImageData(os.path.join(data_path, 'attenuation.hv'))
#%% save max for future displays
im_slice = image.dimensions()[0]//2
cmax = image.max()*.6
#%% create acquisition model
acq_model = pet.AcquisitionModelUsingRayTracingMatrix()
template = pet.AcquisitionData(os.path.join(data_path, 'template_sinogram.hs'))
acq_model.set_up(template, image)
#%% simulate data using forward projection
acquired_data = acq_model.forward(image)
```
# Maximum Likelihood Expectation Maximisation (MLEM)
Also called EMML. This is a standard algorithm, derived by using EM for the PET (or SPECT) problem. See the paper:
Shepp, L. A., and Y. Vardi. ‘Maximum Likelihood Reconstruction for Emission Tomography’. IEEE Transactions on Medical Imaging 1, no. 2 (1982): 113-122+.
Our notation here: $x$ is the image, $y$ the measured data with $A$ the system matrix. This is different from the Shepp and Vardi paper, which uses $\lambda$ for the image, $n^*$ for the measured data, $p(b,d)$ for the elements of the system matrix, and it has no background.
In our notation, the model for the mean of the data (i.e., modelling the expected measurement, given an image $x$) is
$$ \bar{y}=A x + b$$
The MLEM update is
$$ x^{\mathrm{new}} = \frac{x}{A^t 1} A^t \left(\frac{y}{A x + b}\right)$$
You hopefully recognise that the denominator of the factor on the right corresponds to the `forward` model applied to the image $x$. Multiplication with $A^t$ is the `backward` operation. So, we have used all the main operations already. We just need element-wise multiplication and division operations, but that's easy!
Let's first compute $A^t 1$, as this is an image that won't change over iterations. Note that the $1$ here represents a one-vector, i.e., an image filled with ones. It is often called the "sensitivity image", as it is (proportional to) the probability that an event emitted in a voxel is detected by the scanner (without scattering).
```
sensitivity = acq_model.backward(acquired_data.get_uniform_copy(1))
```
Now we initialise the algorithm with a uniform image:
```
estimated_image = image.get_uniform_copy(1)
```
and we can do one MLEM iteration
```
quotient = acquired_data/acq_model.forward(estimated_image) # y / (Ax + b)
mult_update = acq_model.backward(quotient)/sensitivity # A^t * quotient / A^t1
estimated_image *= mult_update # update (in place)
```
And we can do some plots
```
plt.figure()
plt.subplot(1,2,1)
plt.imshow(estimated_image.as_array()[im_slice,:,:])
plt.subplot(1,2,2)
plt.imshow(mult_update.as_array()[im_slice,:,:])
```
Now you can of course duplicate some of these lines, or re-execute the above cells. However, it makes more sense to write a function to do this. Something like this:
```
def MLEM(acquired_data, acq_model, initial_image, num_iterations):
estimated_image = initial_image.clone()
# some stuff here
return estimated_image
```
And now you can run it!
```
estimated_image = MLEM(acquired_data, acq_model,image.get_uniform_copy(1), 40)
```
This was hopefully not too hard. There are a few problems, though, that you might encounter.
- your image might display nicely, but on closer investigation will probably contain NaNs (Not a Number). These come from divisions: 0/0 is not defined. They can occur in 2 places:
- division in the acquisition data term. This should in theory not occur if you start with a strictly positive image that is large enough to "cover" all of the projection data. Of course, in practice it could happen that your image is smaller than the Field of View (FOV), and you might need to handle this anyway.
- division of the images. This will occur wherever the sensitivity image is zero. This difficulty is not a surprise: if a voxel cannot be measured, its ML estimate is undefined.
We have the second problem here, of course, as by default the projector uses a circular FOV. You might want to add a post-processing step that sets those values to zero.
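A minimal sketch of such a post-processing step, at the numpy level (the function name and the masking-on-sensitivity idea are my own; in SIRF you would move between image objects and arrays with `as_array()` and `fill()`):

```python
import numpy as np

def zero_outside_fov(img, sensitivity_img):
    """Set voxels with zero sensitivity (undefined ML estimate) to zero."""
    out = np.array(img, dtype=float)
    out[np.asarray(sensitivity_img) == 0] = 0.0
    out[np.isnan(out)] = 0.0   # also catch any remaining 0/0 artefacts
    return out

print(zero_outside_fov([1.0, float('nan'), 3.0], [1.0, 0.0, 1.0]))  # -> [1. 0. 3.]
```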
The STIR implementation of MLEM (`OSMAPOSL`) takes care of these corner cases, as well as any negatives in the data (arising when pre-correcting the data, as although this should not be done in theory, some people are interested in this anyway).
# Ordered Subsets Expectation Maximisation (OSEM)
As discussed in previous notebooks, MLEM is great but slow. OSEM was introduced in
Hudson, H.M., and R.S. Larkin. ‘Accelerated Image Reconstruction Using Ordered Subsets of Projection Data’. IEEE Transactions on Medical Imaging 13, no. 4 (December 1994): 601–9. https://doi.org/10.1109/42.363108.
The update formula is exactly the same, except that at every update, only a subset of the data is used, i.e. the data $y$, background $b$ and matrix $A$ are restricted to a subset of all the rows. Clearly, for $N$ subsets, this reduces the number of computations for one image update by a factor of $N$. While each update might be somewhat less accurate, it certainly works well in initial iterations.
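The restricted update can be written explicitly. For a subset $S$ of the rows, with $A_S$, $y_S$ and $b_S$ the corresponding restrictions, one OSEM sub-iteration is
$$ x^{\mathrm{new}} = \frac{x}{A_S^t 1} A_S^t \left(\frac{y_S}{A_S x + b_S}\right)$$
i.e. the MLEM update with $A$, $y$ and $b$ replaced by their subset versions. Note that the sensitivity term $A_S^t 1$ is now a subset sensitivity.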
So how do we implement this in SIRF? Luckily, a `sirf.STIR` acquisition model can be told to use only a subset of the data. The class documentation will show you that you can set `num_subsets` and `subset_num`.
(There is currently no way to change how the subsets are chosen, but only the number of subsets).
Note: the call to `forward` and `backward` also supports extra arguments for specifying the subsets. However, these are deprecated and will be removed in a future release.
Some interesting things to try:
- how does varying the `subset_num` affect the projection?
- how does varying the `num_subsets` affect the projection sparsity?
- what happens if you sum over all the subsets?
```
acq_model.num_subsets = 4
acq_model.subset_num = 0 # for 4 subsets, use a number between 0 and 3
data = acq_model.forward(image)
data.show(im_slice)
```
Unfortunately, SIRF currently has no way to restrict the acquisition data itself to a particular subset. There are 2 ways around that:
- ignore it and do the divisions over all of the data anyway. This will lead to 1/0 etc, but as those elements of the data are not backprojected, it won't change the final image.
- construct several "masks" as `AcquisitionData` by forward-projecting an image full of ones for each subset and thresholding the result.
Clearly, the first option is easiest (although it does mean there is some overhead in computing extra additions/divisions). Let's see if it works ok!
```
check = acq_model.backward(acquired_data/data)
check.show(im_slice)
```
You should now be in a position to write your own OSEM algorithm. Don't forget that for a strict implementation of OSEM, you need to compute "subset sensitivities" for the division.
```
def OSEM(acquired_data, acq_model, initial_image, num_iterations):
estimated_image=initial_image.clone()
# some stuff here - hint, this will be similar to your solution for MLEM
# but you will have to additionally iterate over your subsets
return estimated_image
```
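For reference, here is a hedged numpy sketch of what the loop structure could look like, again with a plain matrix standing in for the projector and interleaved rows as the subsets (SIRF's actual subset choice may differ):

```python
import numpy as np

def osem_numpy(y, A, b, x0, num_epochs, num_subsets):
    """OSEM: the MLEM update applied to interleaved row subsets of the data."""
    subsets = [np.arange(s, len(y), num_subsets) for s in range(num_subsets)]
    x = x0.astype(float).copy()
    for _ in range(num_epochs):
        for rows in subsets:
            As, ys, bs = A[rows], y[rows], b[rows]
            subset_sens = As.T @ np.ones(len(rows))   # subset sensitivity A_S^t 1
            x *= (As.T @ (ys / (As @ x + bs))) / subset_sens
    return x

A = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [0.3, 0.2],
              [0.2, 0.4]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                                        # noise-free data, b = 0
x_hat = osem_numpy(y, A, np.zeros(4), np.ones(2), 300, 2)
```

As with MLEM, on noise-free data `x_true` is an exact fixed point of every subset update.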
# Final remarks
Hopefully you have learned that taking an algorithm from a paper and implementing it yourself should be easy enough in SIRF. However, you probably also learned that you might encounter some subtleties that are often not so easy to spot when you read a paper.
The STIR `OSMAPOSL` implementation attempts to take care of these subtleties. It of course also avoids overheads such as the divisions with the subsets. Finally, it uses a multi-threaded implementation of the computation of the update that might be a bit faster than calling the `forward` and `backward` operations directly (although these are multi-threaded as well).
When implementing an algorithm from a paper, there is often a choice of what "level" to write your code at. In the above, we went to the projector level. In the [ML_reconstruction notebook](ML_reconstruction.ipynb) we constructed an objective function and used the `update` and `objective_function_value` members to do a lot of the hard work. Similarly, the [MAP_EM notebook](MAP_EM.ipynb), which you could tackle now, writes a MAP algorithm in terms of (OS)EM functionality. All choices will probably work fine, but there are various trade-offs between verbosity, flexibility and extensibility to consider, which we won't go into here.
# Multi-layer FNN on MNIST
This is an MLP (784-X^W-10) on MNIST, trained with the SGD algorithm (lr=0.1) for 100 epochs.
```
import os, sys
import numpy as np
from matplotlib.pyplot import *
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import seaborn as sns
import itertools
%matplotlib inline
""" Extract final stats from resman's diary file"""
def extract_num(lines0):
valid_loss_str = lines0[-5]
valid_accuracy_str = lines0[-6]
train_loss_str = lines0[-8]
train_accuracy_str = lines0[-9]
run_time_str = lines0[-10]
valid_loss = float(valid_loss_str.split( )[-1])
valid_accuracy = float(valid_accuracy_str.split( )[-1])
train_loss = float(train_loss_str.split( )[-1])
train_accuracy = float(train_accuracy_str.split( )[-1])
run_time = float(run_time_str.split( )[-1])
return valid_loss, valid_accuracy, train_loss, train_accuracy, run_time
""" Extract number of total parameters for each net config from resman's diary file"""
def parse_num_params(lines0):
line_str = ''.join(lines0)
idx = line_str.find("Total params")
param_str = line_str[idx+14:idx+14+20] # 14 is the length of string "Total params: "
param_num = param_str.split("\n")[0]
return int(locale.atof(param_num))
```
#### Extract results from diary file
1. Number of params
2. Loss/Accuracy for training/testing
3. Running time
```
results_dir = '../results/fnn_mnist_l2_ni'
depth = [1,2,3,4,5]
width = [50,100,200,400]
dim = [0,10,50,100,200,300,350,375,400,425,450,475,500,525,550,575,600,625,650,675,700,725,750,775,800,850,900,1000,1250,1500]
########## 1. filename list of diary ########################
diary_names = []
for subdir, dirs, files in os.walk(results_dir):
for file in files:
if file == 'diary':
fname = os.path.join(subdir, file)
diary_names.append(fname)
########## 2. Construct stats (width, depth, dim) ##########
# acc_test_all : Tensor (width, depth, dim)
# num_param_all: Tensor (width, depth)
# acc_solved_all: Tensor (width, depth)
# dim_solved_all: Tensor (width, depth)
############################################################
nw, nd, nn= len(width), len(depth), len(dim)
acc_test_all = np.zeros((len(width), len(depth), len(dim)))
num_param_all = np.zeros((len(width), len(depth)))
acc_solved_all = np.zeros((len(width), len(depth)))
dim_solved_all = np.zeros((len(width), len(depth)))
mode = 1 # {0: test loss, 1: test acc}
error_files = [] # record the error file
# 2.1 construct acc_test_all and num_param_all
for id_w in range(len(width)):
w = width[id_w]
for id_ll in range(len(depth)):
ll = depth[id_ll]
for id_d in range(len(dim)):
d = dim[id_d]
# 2.1.1 Read the results,
for f in diary_names:
if '_'+str(d)+'_'+str(ll)+'_'+str(w)+'/' in f:
# print "%d is in" % d + f
with open(f,'r') as ff:
lines0 = ff.readlines()
try:
R = extract_num(lines0)
R = np.array(R)
except ValueError:
error_files.append((w,ll,d))
R = np.zeros(5)  # extract_num returns 5 fields
print("Error. Can not read config: depth %d, width %d and dim %d." % (ll, w, d))
# break
# 2.1.2 Assign the results
r = R[mode]
acc_test_all[id_w,id_ll,id_d]=r
if d==0:
num_param_all[id_w,id_ll]=parse_num_params(lines0)
# 2.2 construct acc_solved_all and dim_solved_all
for id_w in range(len(width)):
w = width[id_w]
for id_ll in range(len(depth)):
ll = depth[id_ll]
for id_d in range(len(dim)):
d = dim[id_d]
r = acc_test_all[id_w,id_ll,id_d]
if d==0:
test_acc_bl = 1.0 # r
# print "Acc goal is: " + str(test_acc_sl) + " for network with depth " + str(ll) + " width "+ str(w)
else:
test_acc = r
if test_acc>test_acc_bl*0.9:
acc_solved_all[id_w,id_ll]=test_acc
dim_solved_all[id_w,id_ll]=d
# print "Intrinsic dim is: " + str(d) + " for network with depth " + str(ll) + " width "+ str(w)
# print "\n"
break
########## 3. Construct Tensors for Analysis (width, depth, dim) ##########
acc_base = acc_test_all[:,:,0]
acc_solve = acc_base*0.9
print("Baseline results")
print(acc_test_all[:,:,0])
print("# Params")
print(num_param_all)
print("Cross-line results")
print(acc_solved_all)
print("Cross-line Dim")
print(dim_solved_all)
print("Dim %d results" % dim[-2])
print(acc_test_all[:,:,-2])
```
#### List the configs of depth and width that yielded errors in training
```
E_width, E_depth, E_dim = [],[],[]
for item in error_files:
E_width.append(item[0])
E_depth.append(item[1])
E_dim.append(item[2])
str_E_width = "".join(str(E_width)).replace(',', '')
str_E_depth = "".join(str(E_depth)).replace(',', '')
str_E_dim = "".join(str(E_dim)).replace(',', '')
print("Error in the following configs: width, depth and dim")
print(str_E_width)
print(str_E_depth)
print(str_E_dim)
print("Shape of accuracy tensor: " + str(acc_test_all.shape))
print(acc_test_all[0,0,:])
```
-------------------------
#### Check the accuracy of specific depth and width, along different dim
```
def check_cfg_results(depth, width, lines0):
diary_names_ordered = []  # collect the matching diary files in dim order
for d in dim:
# 1. read the results
for f in diary_names:
if '_'+str(d)+'_'+str(depth)+'_'+str(width)+'/' in f:
# print "%d is in" % d + f
diary_names_ordered.append(f)
with open(f,'r') as ff:
lines0 = ff.readlines()
try:
# print lines0
param_num = parse_num_params(lines0)
R = extract_num(lines0)
print(R[1])
except ValueError:
print("Error: Can not read")
break
```
Reshape the tensor to 1D for plots
```
fig_width = width*len(depth)
fig_depth = list(itertools.chain.from_iterable(itertools.repeat(x, len(width)) for x in depth))
print(fig_width)
print(fig_depth)
print(num_param_all)
print(dim_solved_all)
fig_params_1d = num_param_all.reshape(len(depth)*len(width),order='F')
dim_solved_all_1d = dim_solved_all.reshape(len(depth)*len(width),order='F')
acc_solved_all_1d = acc_solved_all.reshape(len(depth)*len(width),order='F')
print(fig_params_1d)
print(dim_solved_all_1d)
```
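The `order='F'` reshapes above flatten column by column, so the width index varies fastest, which is what keeps `fig_params_1d` aligned with the repeated `fig_width` and `fig_depth` lists. A quick illustration on a toy array:

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])          # shape (n_width=2, n_depth=3)
flat = m.reshape(6, order='F')     # column-major flattening
print(flat)                        # -> [1 4 2 5 3 6]
```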
### Testing Accuracy wrt. Width, Depth and Dim
```
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import seaborn as sns
plt.figure(figsize=(20,5.0))
for i in range(acc_test_all.shape[2]):
acc = acc_test_all[:,:,i].reshape(len(depth)*len(width),order='F')
if i==0:
plt.scatter(fig_params_1d, acc, s=(np.array(fig_width)**1.8)/100, c=fig_depth, edgecolors='k')
plt.scatter(fig_params_1d, 0.9*acc, marker="_", s=300, c='k', edgecolors='r')
else:
plt.scatter(fig_params_1d, acc, s=(np.array(fig_width)**1.8)/100, c=fig_depth, facecolors='None', linewidth=np.array(dim[i])/300.0)
ax = plt.gca()
plt.colorbar(label="Depth")
ax.set_xscale('log')
ax.grid(True)
ax.set_ylim(0.8, 1.0)
ax.set_xlim(0.3E4, 1.5E6)
plt.xlabel('# parameters')
plt.ylabel('# accuracy')
#make a legend:
pws = width
for pw in pws:
plt.scatter([], [], s=(pw**1.8)/1000, c="k",label=str(pw))
h, l = plt.gca().get_legend_handles_labels()
plt.legend(h[0:], l[0:], labelspacing=1.2, title="layer width", borderpad=1, loc='best', bbox_to_anchor=(1.25, 1),
frameon=True, framealpha=0.6, edgecolor="k", facecolor="w")
```
#### Testing Accuracy of Intrinsic dim for #parameters
```
fig = plt.figure(figsize=(16,5))
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
plt.scatter(fig_params_1d, acc_solved_all_1d, s=(np.array(fig_width)**2.0)/100, c=fig_depth, edgecolors='k')
ax = plt.gca()
plt.colorbar(label="Depth")
ax.set_xscale('log')
ax.grid(True)
ax.set_ylim(0.0, 1.0)
ax.set_xlim(0.3E5, 1.5E6)
plt.xlabel('# parameters')
plt.ylabel('# accuracy')
#make a legend:
pws = width
for pw in pws:
plt.scatter([], [], s=(pw**1.8)/100, c="k",label=str(pw))
h, l = plt.gca().get_legend_handles_labels()
plt.legend(h[0:], l[0:], labelspacing=1.2, title="layer width", borderpad=1,
frameon=True, framealpha=0.6, edgecolor="k", facecolor="w")
```
#### Intrinsic dim for #parameters
```
fig = plt.figure(figsize=(16,5))
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
plt.scatter(fig_params_1d, dim_solved_all_1d, s=(np.array(fig_width)**2.0)/100, c=fig_depth, edgecolors='k')
ax = plt.gca()
plt.colorbar(label="Depth")
ax.set_xscale('log')
ax.grid(True)
ax.set_ylim(0, 900)
plt.xlabel('# model parameters')
plt.ylabel('# intrinsic dimension')
ax.set_xlim(0.3E5, 1.5E6)
#make a legend:
pws = width
for pw in pws:
plt.scatter([], [], s=(pw**1.8)/100, c="k",label=str(pw))
h, l = plt.gca().get_legend_handles_labels()
plt.legend(h[0:], l[0:], labelspacing=1.2, title="layer width", borderpad=1,
frameon=True, framealpha=0.6, edgecolor="k", facecolor="w",loc='best')
fig.savefig("fnn_dim_global_ni.pdf", bbox_inches='tight')
```
## Performance comparison with Baseline
```
fig = plt.figure(figsize=(35,25))
fig.subplots_adjust(hspace=0.3)
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
for i in range(nw):
for j in range(nd):
if j>0:
id = i*nd+j+1
ax = plt.subplot(nw, nd, id)
plt.scatter(dim, acc_test_all[i,j,:], edgecolor="k", facecolor="w", s=60 )
ax.plot(dim, acc_test_all[i,j,0]*np.ones(nn)*0.9,'r-.', label="Testing: baseline")
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Accuracy')
ax.set_title('width %d, depth %d' %(width[i], depth[j]))
plt.grid()
ax.set_ylim([-0.1,1.1])
fig.savefig("fnn_all_configs_ni.pdf", bbox_inches='tight')
fig = plt.figure(figsize=(35,18))
fig, ax = subplots(figsize=(5,4))
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
sr = acc_test_all[0,0,0]*0.9
plt.scatter(dim[1:], acc_test_all[0,0,1:], edgecolor="k", facecolor="w" )
ax.plot(dim, sr*np.ones(nn),'r-.', label="Testing: baseline")
ax.set_xlabel('Intrinsic Dimension')
ax.set_ylabel('Testing Accuracy')
# ax.set_title('width %d, depth %d' %(width[i], depth[j]))
plt.grid()
ax.set_ylim([-0.1,1.1])
# fig.set_size_inches(5, 4)
fig.savefig("fnn_mnist_ni.pdf", bbox_inches='tight')
acc_test_all.shape
```
#**Part 1 - Data gathering and feature engineering**
**Libraries**
```
import numpy as np #Linear_Algebra
import matplotlib.pyplot as plt
import pandas as pd #Data_Processing
import pandas_datareader as pdr
from scipy import stats
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
!pip install -q yfinance --upgrade
#Import Yahoo Finance
import yfinance as yf
yf.pdr_override()
#CISCO data
SELECTED_STOCK = 'CSCO'
start = '2010-12-17'
end = '2018-12-17'
#Download CISCO stock price data for 2010-2018
stock_data = pdr.get_data_yahoo(SELECTED_STOCK, start, end)
stock_data.head(10)
```
**Feature Engineering**
```
#Getting the Open price
stock_data_open = stock_data.Open.values
reshaped_stock_data_open = np.reshape(stock_data_open, (-1, 1))
reshaped_stock_data_open
#validity check
np.mean(reshaped_stock_data_open)==np.mean(stock_data_open)
```
#**Indicators**
##**Moving Average**
```
# Moving Averages Code
# Load the necessary packages and modules
from pandas_datareader import data as pdr
import matplotlib.pyplot as plt
import fix_yahoo_finance
import pandas as pd
# Simple Moving Average
def SMA(data, ndays):
SMA = pd.Series(data['Close'], name = 'SMA').rolling(window=ndays).mean()
data = data.join(SMA)
return data
# Exponentially-weighted Moving Average
def EWMA(data, ndays):
EMA = pd.Series((data['Close'].ewm(span=ndays).mean()),
name = 'EWMA_' + str(ndays))
data = data.join(EMA)
return data
# Retrieve the CISCO data from Yahoo finance:
data = pdr.get_data_yahoo("CSCO", start="2010-01-01", end="2019-12-16")
data = pd.DataFrame(data)
close = data['Close']
# Compute the 50-day SMA for CISCO
n = 50
SMA_CISCO = SMA(data,n)
SMA_CISCO = SMA_CISCO.dropna()
SMA = SMA_CISCO['SMA']
# Compute the 200-day EWMA for CISCO
ew = 200
EWMA_CISCO = EWMA(data,ew)
EWMA_CISCO = EWMA_CISCO.dropna()
EWMA = EWMA_CISCO['EWMA_200']
# Plotting the CISCO Price Series chart and Moving Averages below
plt.figure(figsize=(9,5))
plt.plot(data['Close'],lw=1, label='CSCO Close Price')
plt.plot(SMA,'g',lw=1, label='50-day SMA (green)')
plt.plot(EWMA,'r', lw=1, label='200-day EWMA (red)')
plt.legend(loc=2,prop={'size':11})
plt.grid(True)
plt.setp(plt.gca().get_xticklabels(), rotation=30)
```
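As a sanity check, the n-day SMA is just the mean over a sliding window, which can be reproduced with a plain numpy convolution (toy numbers here, not the CSCO data):

```python
import numpy as np

def sma_numpy(values, n):
    # mean over each n-day window; output aligns with the last day of the window
    return np.convolve(values, np.ones(n) / n, mode='valid')

print(sma_numpy([1, 2, 3, 4, 5], 3))   # -> [2. 3. 4.]
```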
##**Commodity Channel Index (CCI)**
```
from pandas_datareader import data as pdr
import matplotlib.pyplot as plt
import fix_yahoo_finance
import pandas as pd
# Commodity Channel Index
def CCI(data, ndays):
TP = (data['High'] + data['Low'] + data['Close']) / 3
CCI = pd.Series((TP - pd.Series(TP).rolling(window=ndays).mean()) / (0.015 * pd.Series(TP).rolling(window=ndays).std()),
name = 'CCI')
data = data.join(CCI)
return data
# Retrieve the CISCO data from Yahoo finance:
data = pdr.get_data_yahoo("CSCO", start="2010-01-01", end="2019-12-16")
data = pd.DataFrame(data)
# Compute the Commodity Channel Index(CCI) for CISCO based on the 20-day Moving average
n = 20
CISCO_CCI = CCI(data, n)
CCI = CISCO_CCI['CCI']
# Plotting the Price Series chart and the Commodity Channel index below
fig = plt.figure(figsize=(7,5))
ax = fig.add_subplot(2, 1, 1)
ax.set_xticklabels([])
plt.plot(data['Close'],lw=1)
plt.title('CSCO Price Chart')
plt.ylabel('Close Price')
plt.grid(True)
bx = fig.add_subplot(2, 1, 2)
plt.plot(CCI,'k',lw=0.75,linestyle='-',label='CCI')
plt.legend(loc=2,prop={'size':9.5})
plt.ylabel('CCI values')
plt.grid(True)
plt.setp(plt.gca().get_xticklabels(), rotation=30)
```
**Feature Scaling**
```
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0,1))
scaled_data = sc.fit_transform(reshaped_stock_data_open)
def timestamp(n_period, scaled_data):
x_train = []
y_train = [] #1 output to predict
for i in range(n_period,len(scaled_data)):
x_train.append(scaled_data[i-n_period:i,0])
y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)
#reshaping
x_train_ = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
return x_train_, x_train, y_train
x_train_, x_train, y_train = timestamp(60, scaled_data)
```
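To see what `timestamp` produces, here is the same sliding-window construction on a tiny toy series (a hypothetical helper mirroring the function above):

```python
import numpy as np

def make_windows(series, n_period):
    # each sample holds the previous n_period values; the target is the next value
    x = np.array([series[i - n_period:i] for i in range(n_period, len(series))])
    y = np.array(series[n_period:])
    return x, y

x, y = make_windows([10, 11, 12, 13, 14], n_period=3)
print(x)   # -> [[10 11 12]
           #     [11 12 13]]
print(y)   # -> [13 14]
```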
#**Part 2 - Model Identification**
##**Decision Tree (Regression)**
```
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
dt = DecisionTreeRegressor()
decision_tree_regr = BaggingRegressor(dt, n_estimators=10, random_state=0)
decision_tree_regr.fit(x_train, y_train)
```
##**Recurrent Neural Network (RNN)**
```
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#Importing the keras libraries and packages
from tensorflow.python.keras.layers import Dense, LSTM, Dropout
from tensorflow.python.keras import Sequential
regressor = Sequential()
#Adding the first LSTM Layer and some Dropout regularisation
regressor.add(LSTM(units=50, return_sequences=True, input_shape = (x_train_.shape[1], 1)))
regressor.add(Dropout(rate = 0.2))
x_train.shape[1]
#Adding the second LSTM Layer and some Dropout regularisation
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(rate = 0.2))
#Adding the third LSTM Layer and some Dropout regularisation
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(rate = 0.2))
#Adding the fourth LSTM Layer and some Dropout regularisation
regressor.add(LSTM(units=50))
regressor.add(Dropout(rate = 0.2))
#Adding the output layer
regressor.add(Dense(units=1))
#compiling the RNN
regressor.compile(optimizer='adam', loss='mean_squared_error')
#fitting the RNN to the training set
regressor.fit(x_train_, y_train, epochs=50, batch_size = 32)
```
**Save the model**
```
regressor.save("regressor.h5")
```
**Load the model**
```
from tensorflow.python.keras.models import load_model
regressor = load_model("regressor.h5")
```
##**Making the predictions and visualising the results**
```
# Getting the real/test stock price of 2019
test_stock_data = pdr.get_data_yahoo(SELECTED_STOCK, start = '2018-12-18', end = '2019-12-17')
real_stock_price = test_stock_data.iloc[:, 1:2].values
dataset_total = pd.concat((stock_data['Open'], test_stock_data['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(test_stock_data) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 310): #250 test samples, each built from the previous 60 values
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price) #re-transform the output because our input data was scaled between 0 and 1
# Visualising the results
plt.plot(real_stock_price, color = 'red', label = 'Real CISCO Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted CISCO Stock Price')
plt.title('CISCO Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('CISCO Stock Price')
plt.legend()
plt.show()
```
<a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/numpyro_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
[NumPyro](https://github.com/pyro-ppl/numpyro) is a probabilistic programming language built on top of JAX. It is very similar to [Pyro](https://pyro.ai/), which is built on top of PyTorch.
However, the HMC algorithm in NumPyro
[is much faster](https://stackoverflow.com/questions/61846620/numpyro-vs-pyro-why-is-former-100x-faster-and-when-should-i-use-the-latter).
Both Pyro flavors are usually also [faster than PyMc3](https://www.kaggle.com/s903124/numpyro-speed-benchmark), and allow for more complex models, since Pyro is integrated into Python.
# Installation
```
import numpy as np
np.set_printoptions(precision=3)
import matplotlib.pyplot as plt
import math
# When running in colab pro (high RAM mode), you get 4 CPUs.
# But we need to force XLA to use all 4 CPUs
# This is generally faster than running in GPU mode
import os
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=4"
# http://num.pyro.ai/en/stable/getting_started.html#installation
#CPU mode: often faster in colab!
!pip install numpyro
# GPU mode: as of July 2021, this does not seem to work
#!pip install numpyro[cuda111] -f https://storage.googleapis.com/jax-releases/jax_releases.html
import jax
print("jax version {}".format(jax.__version__))
print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform))
print(jax.lib.xla_bridge.device_count())
print(jax.local_device_count())
import jax.numpy as jnp
from jax import random
import numpyro
#numpyro.set_platform('gpu')
import numpyro.distributions as dist
from numpyro.distributions import constraints
from numpyro.distributions.transforms import AffineTransform
from numpyro.infer import MCMC, NUTS, Predictive
from numpyro.infer import SVI, Trace_ELBO, init_to_value
from numpyro.diagnostics import hpdi, print_summary
from numpyro.infer.autoguide import AutoLaplaceApproximation
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
```
# Example: 1d Gaussian with unknown mean.
We use the simple example from the [Pyro intro](https://pyro.ai/examples/intro_part_ii.html#A-Simple-Example). The goal is to infer the weight $\theta$ of an object, given noisy measurements $y$. We assume the following model:
$$
\begin{align}
\theta &\sim N(\mu=8.5, \tau^2=1.0)\\
y &\sim N(\theta, \sigma^2=0.75^2)
\end{align}
$$
where $\mu=8.5$ is the initial guess.
## Exact inference
By Bayes rule for Gaussians, we know that the exact posterior,
given a single observation $y=9.5$, is given by
$$
\begin{align}
\theta|y &\sim N(m, s^2) \\
m &=\frac{\sigma^2 \mu + \tau^2 y}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 8.5 + 1 \times 9.5}{0.75^2 + 1^2}
= 9.14 \\
s^2 &= \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 1^2}{0.75^2 + 1^2}= 0.6^2
\end{align}
$$
```
mu = 8.5; tau = 1.0; sigma = 0.75;
hparams = (mu, tau, sigma)
y = 9.5
m = (sigma**2 * mu + tau**2 * y)/(sigma**2 + tau**2)
s2 = (sigma**2 * tau**2)/(sigma**2 + tau**2)
s = np.sqrt(s2)
print(m)
print(s)
def model(hparams, y=None):
prior_mean, prior_sd, obs_sd = hparams
theta = numpyro.sample("theta", dist.Normal(prior_mean, prior_sd))
y = numpyro.sample("y", dist.Normal(theta, obs_sd), obs=y)
return y
```
## Ancestral sampling
```
def model2(hparams):
prior_mean, prior_sd, obs_sd = hparams
theta = numpyro.sample("theta", dist.Normal(prior_mean, prior_sd))
yy = numpyro.sample("y", dist.Normal(theta, obs_sd))
return theta, yy
with numpyro.handlers.seed(rng_seed=0):
for i in range(5):
theta, yy = model2(hparams)
print([theta, yy])
```
## MCMC
See [the documentation](https://num.pyro.ai/en/stable/mcmc.html)
```
conditioned_model = numpyro.handlers.condition(model, {'y': y})
nuts_kernel = NUTS(conditioned_model)
mcmc = MCMC(nuts_kernel, num_warmup=200, num_samples=200, num_chains=4)
mcmc.run(rng_key_, hparams)
mcmc.print_summary()
samples = mcmc.get_samples()
print(type(samples))
print(type(samples['theta']))
print(samples['theta'].shape)
nuts_kernel = NUTS(model) # this is the unconditioned model
mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=1000)
mcmc.run(rng_key_, hparams, y) # we need to specify the observations here
mcmc.print_summary()
samples = mcmc.get_samples()
```
## Stochastic variational inference
See [the documentation](https://num.pyro.ai/en/stable/svi.html)
```
# the guide must have the same signature as the model
def guide(hparams, y):
prior_mean, prior_sd, obs_sd = hparams
m = numpyro.param("m", y) # location
s = numpyro.param("s", prior_sd, constraint=constraints.positive) # scale
return numpyro.sample("theta", dist.Normal(m, s))
# The numpyro optimizers wrap the JAX ones, so they have unusual keywords
#https://jax.readthedocs.io/en/latest/jax.experimental.optimizers.html
#optimizer = numpyro.optim.Adam(step_size=0.001)
optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)
#svi = SVI(model, guide, optimizer, Trace_ELBO(), hparams=hparams, y=y) # specify static args to model/guide
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
nsteps = 2000
svi_result = svi.run(rng_key_, nsteps, hparams, y) # or specify arguments here
print(svi_result.params)
print(svi_result.losses.shape)
plt.plot(svi_result.losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
print([svi_result.params['m'], svi_result.params['s']])
```
## Laplace (quadratic) approximation
See [the documentation](https://num.pyro.ai/en/stable/autoguide.html#autolaplaceapproximation)
```
guide_laplace = AutoLaplaceApproximation(model)
svi = SVI(model, guide_laplace, optimizer, Trace_ELBO(), hparams=hparams, y=y)
svi_run = svi.run(rng_key_, 2000)
params = svi_run.params
losses = svi_result.losses
plt.figure()
plt.plot(losses)
# Posterior is an MVN
# https://num.pyro.ai/en/stable/distributions.html#multivariatenormal
post = guide_laplace.get_posterior(params)
print(post)
m = post.mean
s = jnp.sqrt(post.covariance_matrix)
print([m, s])
samples = guide_laplace.sample_posterior(rng_key_, params, (1000,))
print_summary(samples, 0.89, False)
```
# Example: Beta-Bernoulli model
Example is from [SVI tutorial](https://pyro.ai/examples/svi_part_i.html)
The model is
$$
\begin{align}
\theta &\sim \text{Beta}(\alpha, \beta) \\
x_i &\sim \text{Ber}(\theta)
\end{align}
$$
where $\alpha=\beta=10$. In the code, $\theta$ is called
`latent_fairness`.
```
alpha0 = 10.0
beta0 = 10.0
def model(data):
f = numpyro.sample("latent_fairness", dist.Beta(alpha0, beta0))
# loop over the observed data
for i in range(len(data)):
numpyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])
# create some data with 6 observed heads and 4 observed tails
data = jnp.hstack((jnp.ones(6), jnp.zeros(4)))
print(data)
N1 = jnp.sum(data==1)
N0 = jnp.sum(data==0)
print([N1, N0])
```
## Exact inference
The posterior is given by
$$
\begin{align}
\theta \mid x_{1:N} &\sim \text{Beta}(\alpha + N_1, \beta + N_0) \\
N_1 &= \sum_{i=1}^N [x_i=1] \\
N_0 &= \sum_{i=1}^N [x_i=0]
\end{align}
$$
```
alpha_q = alpha0 + N1
beta_q = beta0 + N0
print('exact posterior: alpha={:0.3f}, beta={:0.3f}'.format(alpha_q, beta_q))
post_mean = alpha_q / (alpha_q + beta_q)
post_var = (post_mean * beta_q)/((alpha_q + beta_q) * (alpha_q + beta_q + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
inferred_mean = alpha_q / (alpha_q + beta_q)
# compute inferred standard deviation
factor = beta_q / (alpha_q * (1.0 + alpha_q + beta_q))
inferred_std = inferred_mean * math.sqrt(factor)
print([inferred_mean, inferred_std])
```
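The two standard-deviation computations above are algebraically the same; a quick pure-Python check with the posterior parameters from this data ($\alpha_q = 16$, $\beta_q = 14$):

```python
import math

alpha_q, beta_q = 16.0, 14.0                     # alpha0 + N1, beta0 + N0
mean = alpha_q / (alpha_q + beta_q)
var = mean * beta_q / ((alpha_q + beta_q) * (alpha_q + beta_q + 1))
std_from_var = math.sqrt(var)
factor = beta_q / (alpha_q * (1.0 + alpha_q + beta_q))
std_from_factor = mean * math.sqrt(factor)
print(abs(std_from_var - std_from_factor) < 1e-9)   # -> True
```

Both reduce to $\sqrt{\alpha_q\beta_q}\,/\,\big((\alpha_q+\beta_q)\sqrt{\alpha_q+\beta_q+1}\big)$, the standard deviation of a Beta distribution.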
## Variational inference
```
def guide(data):
alpha_q = numpyro.param("alpha_q", alpha0,
constraint=constraints.positive)
beta_q = numpyro.param("beta_q", beta0,
constraint=constraints.positive)
numpyro.sample("latent_fairness", dist.Beta(alpha_q, beta_q))
#optimizer = numpyro.optim.Adam(step_size=0.001)
optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
nsteps = 2000
svi_result = svi.run(rng_key_, nsteps, data)
print(svi_result.params)
print(svi_result.losses.shape)
plt.plot(svi_result.losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
# grab the learned variational parameters
alpha_q = svi_result.params["alpha_q"]
beta_q = svi_result.params["beta_q"]
print('variational posterior: alpha={:0.3f}, beta={:0.3f}'.format(alpha_q, beta_q))
post_mean = alpha_q / (alpha_q + beta_q)
post_var = (post_mean * beta_q)/((alpha_q + beta_q) * (alpha_q + beta_q + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
```
## MCMC
```
nuts_kernel = NUTS(model) # this is the unconditioned model
mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=1000)
mcmc.run(rng_key_, data)
mcmc.print_summary()
samples = mcmc.get_samples()
```
# Distributions
## 1d Gaussian
```
# a single 1d Gaussian
mu = 1.5
sigma = 2
d = dist.Normal(mu, sigma)
dir(d)
rng_key, rng_key_ = random.split(rng_key)
nsamples = 1000
ys = d.sample(rng_key_, (nsamples,))
print(ys.shape)
mu_hat = np.mean(ys,0)
print(mu_hat)
sigma_hat = np.std(ys, 0)
print(sigma_hat)
```
## Multivariate Gaussian
```
mu = np.array([-1, 1])
sigma = np.array([1, 2])
Sigma = np.diag(sigma)
d2 = dist.MultivariateNormal(mu, Sigma)
#rng_key, rng_key_ = random.split(rng_key)
nsamples = 1000
ys = d2.sample(rng_key_, (nsamples,))
print(ys.shape)
mu_hat = np.mean(ys,0)
print(mu_hat)
Sigma_hat = np.cov(ys, rowvar=False) #jax.np.cov not implemented
print(Sigma_hat)
```
## Shape semantics
[Numpyro](http://num.pyro.ai/en/stable/distributions.html), [Pyro](https://pyro.ai/examples/tensor_shapes.html) and [TFP](https://www.tensorflow.org/probability/examples/Understanding_TensorFlow_Distributions_Shapes)
and [Distrax](https://github.com/deepmind/distrax)
all distinguish between 'event shape' and 'batch shape'.
For a D-dimensional Gaussian, the event shape is (D,), and the batch shape
will be (), meaning we have a single instance of this distribution.
If the covariance is diagonal, we can view this as D independent
1d Gaussians, stored along the batch dimension; these will have event shape () but batch shape (D,).
When we sample from a distribution, we also specify the sample_shape.
Suppose we draw N samples from a single D-dim diagonal Gaussian,
and N samples from D 1d Gaussians. These samples will have the same shape.
However, the semantics of logprob differs.
We illustrate this below.
```
mu = np.array([-1, 1])
sigma = np.array([1, 2])
Sigma = np.diag(sigma)
d2 = dist.MultivariateNormal(mu, Sigma)
print(f'event shape {d2.event_shape}, batch shape {d2.batch_shape}')
nsamples = 3
ys2 = d2.sample(rng_key_, (nsamples,))
print('samples, shape {}'.format(ys2.shape))
print(ys2)
# 2 independent 1d gaussians (same as one 2d diagonal Gaussian)
d3 = dist.Normal(mu, scale=np.sqrt(np.diag(Sigma))) # scalar Gaussian needs std not variance
print(f'event shape {d3.event_shape}, batch shape {d3.batch_shape}')
ys3 = d3.sample(rng_key_, (nsamples,))
print('samples, shape {}'.format(ys3.shape))
print(ys3)
print(np.allclose(ys2, ys3))
y = ys2[0,:] # 2 numbers
print(d2.log_prob(y)) # log prob of a single 2d distribution on 2d input
print(d3.log_prob(y)) # log prob of two 1d distributions on 2d input
```
We can turn a set of independent distributions into a single product
distribution using the [Independent class](http://num.pyro.ai/en/stable/distributions.html#independent)
```
d4 = dist.Independent(d3, 1) # treat the rightmost batch dimension as an event dimension
# now d4 is just like d2
print(f'event shape {d4.event_shape}, batch shape {d4.batch_shape}')
print(d4.log_prob(y))
```
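The same equivalence (the log-prob of a diagonal multivariate Gaussian is the sum of the per-dimension 1d log-probs) can be checked without any PPL. This is a minimal sketch with a hand-rolled Gaussian log-density and illustrative numbers:

```python
import numpy as np

def normal_logpdf(y, mu, sigma):
    # log density of a 1d Gaussian N(mu, sigma^2)
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2

mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 2.0])
y = np.array([0.5, -0.3])

# batch of two independent 1d Gaussians: one log-prob per dimension
lp_batch = normal_logpdf(y, mu, sigma)

# diagonal 2d Gaussian: a single scalar log-prob, the sum over dimensions
lp_joint = lp_batch.sum()

# direct evaluation of the diagonal multivariate normal log density
k = len(mu)
lp_direct = -0.5 * (k * np.log(2 * np.pi) + np.sum(np.log(sigma**2))
                    + np.sum(((y - mu) / sigma) ** 2))
print(lp_batch, lp_joint)
```

This mirrors what `dist.Independent` does: it sums the batch of 1d log-probs into one event log-prob.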
## Description:
This script creates Figure S2
```
import numpy as np
import netCDF4 as nc
import datetime as dt
import pandas as pd
from sklearn.cluster import KMeans
#import mpl_toolkits.mplot3d as mpl3d
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import cartopy
import cartopy.crs as ccrs
# for shapefile
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature
%matplotlib inline
rootdir = '/raid1/chen423/serdp/archive/GRL2018/'
def get_nc_data(infile, var):
tmpgroup = nc.Dataset(infile, 'r', format='NETCDF4')
outdata = tmpgroup.variables[var][:]
tmpgroup.close()
return outdata
```
#### AR classification
```
def retrieve_ARclass(method):
file_ffeature = rootdir+'data/AR_features/part2/%s.AR_events_feature.1981-2015.nc' % (method)
ARfeature_full = get_nc_data(file_ffeature, 'AR_event_feature')
file_class = rootdir+'data/AR_classification/AR_3class.%s.nc' % (method)
AR_class_index = get_nc_data(file_class, 'ARclass_index')
ARfeature_norm = get_nc_data(file_class, 'ARfeature_norm')
return AR_class_index, ARfeature_full, ARfeature_norm
```
#### misc functions for data processing
```
def tindex_to_monthlyindex(index):
stime = dt.datetime(1981,1,1,0)
time_delta = dt.timedelta(hours=3*index)
etime = stime + time_delta
return (etime.year-1981)*12+etime.month-1 # -1 so it is consistent with the zero-based index starting at 1981-01
def calc_lag_corraltion(clim_index, indata, lag=0):
outdata = np.zeros(1080)
full_len = clim_index.shape[0]
for i in np.arange(1080):
outdata[i] = np.corrcoef(clim_index[0:(full_len-lag)], indata[lag:(full_len),i])[0,1]
return outdata
```
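As a quick sanity check, `tindex_to_monthlyindex` can be restated as a self-contained snippet (assuming the 3-hourly timestep convention noted in its comment) and probed at a few known month boundaries:

```python
import datetime as dt

def tindex_to_monthlyindex(index):
    # map a 3-hourly timestep index (origin 1981-01-01 00:00) to a
    # zero-based month index counted from January 1981
    etime = dt.datetime(1981, 1, 1, 0) + dt.timedelta(hours=3 * index)
    return (etime.year - 1981) * 12 + etime.month - 1

# 8 timesteps per day
print(tindex_to_monthlyindex(0))        # 0  (Jan 1981)
print(tindex_to_monthlyindex(8 * 31))   # 1  (Feb 1981)
print(tindex_to_monthlyindex(8 * 365))  # 12 (Jan 1982; 1981 is not a leap year)
```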
#### AR statistics
```
def sub_AR_monthly_nevents(cclass, AR_class_index, ARfeature_fulldata):
outdata_counts = np.zeros(420)
for i in np.arange(AR_class_index.shape[0]):
mindex = tindex_to_monthlyindex(ARfeature_fulldata[i,8])
if cclass=='whole':
outdata_counts[mindex] = outdata_counts[mindex] + 1
else:
if AR_class_index[i]==cclass:
outdata_counts[mindex] = outdata_counts[mindex] + 1
outdata_sig = outdata_counts.copy()
outdata_sig[outdata_counts>=1] = 1
return outdata_counts, outdata_sig
def sub_AR_monthly_accum_IntDur(cclass, AR_class_index, ARfeature_fulldata):
# accumulation of Intensity*Duration
outdata = np.zeros(420)
for i in np.arange(AR_class_index.shape[0]):
mindex = tindex_to_monthlyindex(ARfeature_fulldata[i,8])
if cclass=='whole':
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,3]*ARfeature_fulldata[i,7]
else:
if AR_class_index[i]==cclass:
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,3]*ARfeature_fulldata[i,7]
return outdata
def sub_AR_monthly_accum_IntDurAre(cclass, AR_class_index, ARfeature_fulldata):
# accumulation of Intensity*Duration*Area_land
outdata = np.zeros(420)
for i in np.arange(AR_class_index.shape[0]):
mindex = tindex_to_monthlyindex(ARfeature_fulldata[i,8])
if cclass=='whole':
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,3]*ARfeature_fulldata[i,7]*ARfeature_fulldata[i,1]
else:
if AR_class_index[i]==cclass:
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,3]*ARfeature_fulldata[i,7]*ARfeature_fulldata[i,1]
return outdata
def sub_AR_monthly_accum_IntDurWid(cclass, AR_class_index, ARfeature_fulldata):
# accumulation of Intensity*Duration*Width_coast
outdata = np.zeros(420)
for i in np.arange(AR_class_index.shape[0]):
mindex = tindex_to_monthlyindex(ARfeature_fulldata[i,8])
if cclass=='whole':
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,5]*ARfeature_fulldata[i,7]*ARfeature_fulldata[i,4]
else:
if AR_class_index[i]==cclass:
outdata[mindex] = outdata[mindex] + ARfeature_fulldata[i,5]*ARfeature_fulldata[i,7]*ARfeature_fulldata[i,4]
return outdata
def get_AR_stats(method):
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
# on the first index: 0-2 are three AR types. 3 is the whole stats.
AR_monthly_nevents = np.zeros((4,420))
AR_monthly_sig = np.zeros((4,420))
AR_monthly_nevents[3,:], AR_monthly_sig[3,:] = sub_AR_monthly_nevents('whole', AR_class_index, ARfeature_full)
AR_monthly_nevents[0,:], AR_monthly_sig[0,:] = sub_AR_monthly_nevents(0, AR_class_index, ARfeature_full)
AR_monthly_nevents[1,:], AR_monthly_sig[1,:] = sub_AR_monthly_nevents(1, AR_class_index, ARfeature_full)
AR_monthly_nevents[2,:], AR_monthly_sig[2,:] = sub_AR_monthly_nevents(2, AR_class_index, ARfeature_full)
AR_mon_acc_ida = np.zeros((4,420))
AR_mon_acc_ida[3,:] = sub_AR_monthly_accum_IntDurAre('whole', AR_class_index, ARfeature_full)
AR_mon_acc_ida[0,:] = sub_AR_monthly_accum_IntDurAre(0, AR_class_index, ARfeature_full)
AR_mon_acc_ida[1,:] = sub_AR_monthly_accum_IntDurAre(1, AR_class_index, ARfeature_full)
AR_mon_acc_ida[2,:] = sub_AR_monthly_accum_IntDurAre(2, AR_class_index, ARfeature_full)
return AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida
def sub_AR_daily_sig(cclass, AR_class_index, ARfeature_full, totaldays, lag=0):
outdata = np.zeros(totaldays)
for i in np.arange(AR_class_index.shape[0]):
sindex = (dt.timedelta(hours=3*ARfeature_full[i,6])).days
eindex = (dt.timedelta(hours=3*(ARfeature_full[i,6])+ARfeature_full[i,7])).days + lag
if cclass=='whole':
outdata[sindex:(eindex+1)] = np.ones(np.minimum(eindex-sindex+1, totaldays-sindex))
else:
if AR_class_index[i]==cclass:
outdata[sindex:(eindex+1)] = np.ones(np.minimum(eindex-sindex+1, totaldays-sindex))
return outdata
```
##### hydrological data processing
```
def calc_extreme_sum_monthly(dailyinput, pvalue):
print(pvalue)
tindex_daily = pd.date_range('1/1/1981', periods=dailyinput.shape[0])
out_count = np.zeros((420,dailyinput.shape[1]))
for i in np.arange(dailyinput.shape[1]):
tmpdata = dailyinput[:,i].copy()
threshold = np.percentile(tmpdata, pvalue*100)
tmpdata[tmpdata<threshold]=np.nan
tmpdata_tagged = pd.Series(tmpdata, index=tindex_daily)
out_count[:,i] = tmpdata_tagged.resample('M').sum()
return out_count
def calc_extreme_daily_sig(dailyinput, pvalue):
print(pvalue)
out_sig = np.zeros(dailyinput.shape)
for i in np.arange(dailyinput.shape[1]):
tmpdata = dailyinput[:,i].copy()
threshold = np.percentile(tmpdata, pvalue*100)
tmpdata[tmpdata<threshold]=0
tmpdata[tmpdata>=threshold]=1
out_sig[:,i] = tmpdata
return out_sig
def gen_custom_positive_anomaly(indata, approach, pvalue=0.5):
# indata: 2d array of shape (time, basin)
lens = indata.shape[0]
outdata = np.zeros(indata.shape)
for i in np.arange(indata.shape[1]): # loop over basins
if approach=='mean':
baseline = np.mean(indata[:,i])
elif approach=='percentile':
baseline = np.percentile(indata[:,i], pvalue*100)
outdata[:,i] = indata[:,i]-baseline
outdata[outdata<0] = 0
return outdata
```
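The per-basin loop in `gen_custom_positive_anomaly` can also be written in vectorized form. The sketch below uses a hypothetical (time, basin) array and the `'mean'` baseline:

```python
import numpy as np

# hypothetical (time, basin) array: 5 months x 2 basins
indata = np.array([[1.0, 10.0],
                   [3.0, 20.0],
                   [5.0, 30.0],
                   [7.0, 40.0],
                   [9.0, 50.0]])

# positive anomaly relative to each basin's mean (the 'mean' baseline above)
baseline = indata.mean(axis=0)            # per-basin means: [5., 30.]
outdata = np.clip(indata - baseline, 0, None)
print(outdata[:, 0])  # [0. 0. 0. 2. 4.]
print(outdata[:, 1])  # positive anomalies: 0, 0, 0, 10, 20
```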
#### computation of metrics
```
def compute_mean_correaltions_vlags(ARstats, hydrovar, lags=np.arange(7)):
sbasin = 734
ebasin = 1080
npts = lags.shape[0]
corr_stats = np.zeros((npts,5))
for i in np.arange(npts):
corrdata = calc_lag_corraltion(ARstats, hydrovar, lags[i])
corr_stats[i,0] = np.min(corrdata[sbasin:ebasin])
corr_stats[i,1] = np.percentile(corrdata[sbasin:ebasin], 25)
corr_stats[i,2] = np.mean(corrdata[sbasin:ebasin])
corr_stats[i,3] = np.percentile(corrdata[sbasin:ebasin], 75)
corr_stats[i,4] = np.max(corrdata[sbasin:ebasin])
return corr_stats
def calc_binary_scores(ARdata, hydrodata, metric):
tmpdata = hydrodata+ARdata
yy = (tmpdata==2).sum()
nn = (tmpdata==0).sum()
# yn, ARdata==1, hydrodata==0
tmpdata = ARdata-hydrodata
yn = (tmpdata==1).sum()
ny = (tmpdata==-1).sum()
if metric=='POD':
outvalue = yy/(yy + ny)
elif metric=='FAR':
outvalue = yn/(yy + yn)
elif metric=='Bias':
outvalue = (yy + yn)/(yy + ny)
elif metric=='HSS':
outvalue = 2*(yy*nn-yn*ny)/((yy+ny)*(ny+nn)+(yy+yn)*(yn+nn))
elif metric=='TS':
outvalue = yy/(yy + ny + yn)
elif metric=='GSS':
ets_tmp = (yy + yn)*(yy + ny)/(yy + ny + yn + nn)
outvalue= (yy - ets_tmp)/(yy + ny + yn - ets_tmp)
return outvalue
def wrap_calc_binary_score(AR_daily_sig, dailyhydro, metric):
outdata = np.zeros(dailyhydro.shape[1])
for i in np.arange(dailyhydro.shape[1]):
outdata[i] = calc_binary_scores(AR_daily_sig, dailyhydro[:,i], metric)
return outdata
```
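The skill scores in `calc_binary_scores` all derive from a 2x2 contingency table. A worked example on short made-up binary series (hypothetical data, not the AR/precipitation record) shows how the counts map onto POD, FAR and HSS:

```python
import numpy as np

# illustrative daily binary series (1 = event occurred)
ar    = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # AR present
hydro = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # extreme P observed

# contingency counts, following the yy/yn/ny/nn convention above
yy = ((ar == 1) & (hydro == 1)).sum()  # hits
yn = ((ar == 1) & (hydro == 0)).sum()  # false alarms
ny = ((ar == 0) & (hydro == 1)).sum()  # misses
nn = ((ar == 0) & (hydro == 0)).sum()  # correct negatives

pod = yy / (yy + ny)  # probability of detection
far = yn / (yy + yn)  # false alarm ratio
hss = 2 * (yy * nn - yn * ny) / ((yy + ny) * (ny + nn) + (yy + yn) * (yn + nn))

print(yy, yn, ny, nn)  # 3 1 1 3
print(pod, far)        # 0.75 0.25
```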
#### plotting
```
def plot_single_range(axes, xdata, ydatas, color, location, title, yrange=[-1,1]):
axes.fill_between(xdata, ydatas[:,0], ydatas[:,4], facecolor=color, alpha=0.2)
axes.fill_between(xdata, ydatas[:,1], ydatas[:,3], facecolor=color, alpha=0.6)
axes.plot(ydatas[:,2], color, linewidth=4, alpha=1)
axes.plot(np.arange(10), np.zeros(10), color='black', linestyle='--')
axes.set_xlim(xrange)
axes.set_ylim(yrange)
# specific to locations
if location=='l':
axes.set_xticks([])
axes.set_ylabel('corr. coeff.', size=12)
elif location=='d':
axes.set_xlabel('lags (month)', size=12)
axes.set_yticks([])
elif location=='ld':
axes.set_xlabel('lags (month)', size=12)
axes.set_ylabel('corr. coeff.', size=12)
elif location=='o':
axes.set_xticks([])
axes.set_yticks([])
elif location=='s':
axes.set_ylabel('corr. coeff.', size=12)
axes.set_xlabel('lag (month)', size=12)
axes.text(xrange[1]/2, 0.8, title, horizontalalignment='center', size=10)
def panel_plot(axes, ARstats, hydrovar, lag):
mapdata = calc_lag_corraltion(ARstats[0], hydrovar, lag)
axes.plot(mapdata, color='royalblue', label='weak AR', zorder=0)
mapdata = calc_lag_corraltion(ARstats[2], hydrovar, lag)
axes.plot(mapdata, color='orange', label='prolonged AR', zorder=2)
mapdata = calc_lag_corraltion(ARstats[1], hydrovar, lag)
axes.plot(mapdata, color='lightseagreen', label='flash AR', zorder=1)
axes.plot(np.arange(1100), np.zeros(1100), 'black', linestyle='--')
axes.set_xlim(xrange)
axes.set_ylim(yrange)
axes.legend(loc='upper center', ncol=2, fontsize=10)
def panel_plot_1class(axes, ARstats, hydrovar, lag):
mapdata = calc_lag_corraltion(ARstats[3], hydrovar, lag)
axes.plot(mapdata, color='black', label='weak AR', alpha=0.8)
axes.plot(np.arange(1100), np.zeros(1100), 'blue', linestyle='--')
axes.set_xlim(xrange)
axes.set_ylim(yrange)
reffile = rootdir+'data/ref_data/HU8_wUS.red005.nc'
HU8_mask = get_nc_data(reffile, 'Band1')
lat = get_nc_data(reffile, 'lat')
lon = get_nc_data(reffile, 'lon')
hu8_list = np.genfromtxt(rootdir+'data/ref_data/hu8_list')[:,1]
lons, lats = np.meshgrid(lon, lat)
def generate_plot_data_matrix(indata):
plot_data_matrix = np.ones(lons.shape)*9999
for i in np.arange(1080):
hu8id = hu8_list[i]
plot_data_matrix[HU8_mask==hu8id] = indata[i]
plot_data_matrix[plot_data_matrix==9999] = np.nan
return plot_data_matrix
```
#### hydrological data
```
# all the daily data
dailyP_file = rootdir+'data/hydro_data/PRISM/PRISM.HUC8.P.1981-2015.daily.nc'
dailyP = get_nc_data(dailyP_file, 'P')
monthly_p95P_sum = calc_extreme_sum_monthly(dailyP, 0.95)
daily_p95P_sig = calc_extreme_daily_sig(dailyP, 0.95)
```
## plotting
#### fig2. plot relationship between AR and P. So no lag is needed
```
xrange = [734,1080]
lag = 0
yrange = [-0.4,1]
Pdata = monthly_p95P_sum
title = 'corr(AR-IDA, 95% P total)'
fig2 = plt.figure(figsize=(12,8))
method = 'rutz'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax1 = plt.subplot(2,3,1)
panel_plot(ax1, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax1, AR_mon_acc_ida, Pdata, 0)
ax1.plot((954,954), (-2,0.74), '-.', color='black')
ax1.text(800, -0.3, method, horizontalalignment='center', size=12)
ax1.set_title(title, size=12)
ax1.set_ylabel('corr. coeff.', size=12)
ax1.set_xticklabels([])
ax1.set_ylim(yrange)
method = 'gershunov'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax2 = plt.subplot(2,3,2)
panel_plot(ax2, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax2, AR_mon_acc_ida, Pdata, 0)
ax2.plot((954,954), (-2,0.74), '-.', color='black')
ax2.text(800, -0.33, method, horizontalalignment='center', size=12)
ax2.set_title(title, size=12)
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_ylim(yrange)
method = 'guan'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax3 = plt.subplot(2,3,3)
panel_plot(ax3, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax3, AR_mon_acc_ida, Pdata, 0)
ax3.plot((954,954), (-2,0.74), '-.', color='black')
ax3.text(800, -0.33, method, horizontalalignment='center', size=12)
ax3.set_title(title, size=12)
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_ylim(yrange)
method = 'goldenson'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax4 = plt.subplot(2,3,4)
panel_plot(ax4, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax4, AR_mon_acc_ida, Pdata, 0)
ax4.plot((954,954), (-2,0.74), '-.', color='black')
ax4.text(800, -0.33, method, horizontalalignment='center', size=12)
ax4.set_title(title, size=12)
ax4.set_xticks([846,1017])
ax4.set_xlabel('HUC8 watersheds', size=13)
ax4.set_xticklabels(['PNW','California'])
ax4.set_ylabel('corr. coeff.', size=12)
ax4.set_ylim(yrange)
method = 'pnnl1'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax5 = plt.subplot(2,3,5)
panel_plot(ax5, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax5, AR_mon_acc_ida, Pdata, 0)
ax5.plot((954,954), (-2,0.74), '-.', color='black')
ax5.text(800, -0.33, method, horizontalalignment='center', size=12)
ax5.set_title(title, size=12)
ax5.set_xticks([846,1017])
ax5.set_xlabel('HUC8 watersheds', size=13)
ax5.set_xticklabels(['PNW','California'])
ax5.set_yticklabels([])
ax5.set_ylim(yrange)
method = 'pnnl2'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
ax6 = plt.subplot(2,3,6)
panel_plot(ax6, AR_mon_acc_ida, Pdata, 0)
#panel_plot_1class(ax6, AR_mon_acc_ida, Pdata, 0)
ax6.plot((954,954), (-2,0.74), '-.', color='black')
ax6.text(800, -0.33, method, horizontalalignment='center', size=12)
ax6.set_title(title, size=12)
ax6.set_xticks([846,1017])
ax6.set_xlabel('HUC8 watersheds', size=13)
ax6.set_xticklabels(['PNW','California'])
ax6.set_yticklabels([])
ax6.set_ylim(yrange)
plt.show()
def visualize_wUS_map(axes, indata, location, title='', method='', ylim=[26,55], hu2bdy_flag=False, cmap='Blues', vmin=-0.6, vmax=0.6):
axes.pcolormesh(lons, lats, indata, cmap=cmap, vmin=vmin, vmax=vmax)
axes.set_xlim([-127, -100])
axes.set_ylim(ylim)
axes.add_feature(cartopy.feature.OCEAN, linewidth=0.5, facecolor='aliceblue', edgecolor='k', zorder=0)
axes.add_feature(cartopy.feature.LAND, linewidth=0.5, facecolor='none', edgecolor='k', zorder=1)
if hu2bdy_flag==True:
shpfile = rootdir+'data/ref_data/HUC/HU2_wUS_R07-R18.shp'
shape_feature = ShapelyFeature(Reader(shpfile).geometries(), ccrs.PlateCarree(),
facecolor='none', edgecolor='gray', linewidth=0.5)
axes.add_feature(shape_feature)
countries = cartopy.feature.NaturalEarthFeature(category='cultural', scale='10m', edgecolor='black', linewidth=0.25,\
facecolor='none', name='admin_0_countries')
axes.add_feature(countries, zorder=3)
gl = axes.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linestyle='-', alpha=1)
gl.xlabels_top = location[0]
gl.xlabels_bottom = location[1]
gl.ylabels_left = location[2]
gl.ylabels_right = location[3]
gl.xlocator = matplotlib.ticker.FixedLocator(np.arange(-180,-59,10))
gl.ylocator = matplotlib.ticker.FixedLocator(np.arange(0,81,5))
gl.xformatter = cartopy.mpl.gridliner.LONGITUDE_FORMATTER
gl.yformatter = cartopy.mpl.gridliner.LATITUDE_FORMATTER
axes.text(-108, 53, method, horizontalalignment='center', size=13, zorder=4)
axes.set_title(title, size=13)
Pdata = monthly_p95P_sum
fig3 = plt.figure(figsize=(12,8))
ax1 = plt.subplot2grid((2,10), (0,0), colspan=3, projection=ccrs.PlateCarree())
ax2 = plt.subplot2grid((2,10), (0,3), colspan=3, projection=ccrs.PlateCarree())
ax3 = plt.subplot2grid((2,10), (0,6), colspan=3, projection=ccrs.PlateCarree())
ax4 = plt.subplot2grid((2,10), (1,0), colspan=3, projection=ccrs.PlateCarree())
ax5 = plt.subplot2grid((2,10), (1,3), colspan=3, projection=ccrs.PlateCarree())
ax6 = plt.subplot2grid((2,10), (1,6), colspan=3, projection=ccrs.PlateCarree())
title = 'corr(AR-IDA, 95% P total)'
method = 'rutz'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax1, plot_data, location=[False,False,True,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
method = 'gershunov'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax2, plot_data, location=[False,False,False,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
method = 'guan'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax3, plot_data, location=[False,False,False,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
method = 'goldenson'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax4, plot_data, location=[False,True,True,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
method = 'pnnl1'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax5, plot_data, location=[False,True,False,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
method = 'pnnl2'
AR_monthly_nevents, AR_monthly_sig, AR_mon_acc_ida = get_AR_stats(method)
corrdata_raw = calc_lag_corraltion(AR_mon_acc_ida[3], Pdata, 0)
plot_data = generate_plot_data_matrix(corrdata_raw)
visualize_wUS_map(ax6, plot_data, location=[False,True,False,False], title=title, method=method, hu2bdy_flag=True, cmap='bwr_r')
print(method+' done')
cbar_axes = fig3.add_axes([0.86, 0.15, 0.02, 0.7])
cb = matplotlib.colorbar.ColorbarBase(cbar_axes, cmap=plt.get_cmap('bwr_r'), ticks=np.arange(0,1.01,0.25), orientation='vertical')
cb.set_ticklabels(['-0.6', '-0.3', '0', '0.3', '0.6'])
cbar_axes.tick_params(labelsize=11)
#fig3.savefig(rootdir+'plots/misc05.map.corr.whole.PRISM.png', dpi=600)
plt.show()
plt.close()
del(fig3)
```
#### GSS/HSS plot
To check the possibility of using daily AR occurrence to forecast extreme P events.
```
def panel_binary_score(axes, method, score, ylabel='', cat='none'):
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
if cat=='single':
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
score_p95Pd_AR = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
# only the 95% threshold signal is computed above, so plot just that
axes.plot(score_p95Pd_AR, color='grey', linestyle='--', label='95% P, AR')
else:
AR0_daily_sig = sub_AR_daily_sig(0, AR_class_index, ARfeature_full, totaldays, lag=0)
AR1_daily_sig = sub_AR_daily_sig(1, AR_class_index, ARfeature_full, totaldays, lag=0)
AR2_daily_sig = sub_AR_daily_sig(2, AR_class_index, ARfeature_full, totaldays, lag=0)
score_p95Pd_AR0 = wrap_calc_binary_score(AR0_daily_sig, daily_p95P_sig, score)
score_p95Pd_AR1 = wrap_calc_binary_score(AR1_daily_sig, daily_p95P_sig, score)
score_p95Pd_AR2 = wrap_calc_binary_score(AR2_daily_sig, daily_p95P_sig, score)
axes.plot(score_p95Pd_AR0, color='royalblue', linestyle='--', label='95% P, w.AR')
axes.plot(score_p95Pd_AR1, color='lightseagreen', linestyle='--', label='95% P, f.AR')
axes.plot(score_p95Pd_AR2, color='orange', linestyle='--', label='95% P, p.AR')
axes.plot(np.arange(1080), np.zeros(1080), 'k')
if score=='HSS':
axes.plot((954,954), (-2,0.45), '-.', color='black')
elif score=='GSS':
axes.plot((954,954), (-2,0.25), '-.', color='black')
axes.set_xlim(xrange)
axes.set_ylim(yrange)
axes.legend(loc='upper center', ncol=2, fontsize=9, frameon=False)
axes.set_ylabel(ylabel, size=13)
axes.set_title('%s : %s' % (score, method), size=14)
xrange = [734,1080]
totaldays = dailyP.shape[0]
#score = 'GSS'
#ylabel = 'Gilbert Skill Score (GSS)'
#yrange = [-0.15, 0.35] # GSS
score = 'HSS'
ylabel = 'Heidke Skill Score (HSS)'
yrange = [-0.15,0.6] # HSS
fig4 = plt.figure(figsize=(12,8))
ax1 = plt.subplot2grid((2,3),(0,0))
ax2 = plt.subplot2grid((2,3),(0,1))
ax3 = plt.subplot2grid((2,3),(0,2))
ax4 = plt.subplot2grid((2,3),(1,0))
ax5 = plt.subplot2grid((2,3),(1,1))
ax6 = plt.subplot2grid((2,3),(1,2))
panel_binary_score(ax1, 'rutz', score, ylabel=ylabel, cat='none')
panel_binary_score(ax2, 'gershunov', score, cat='none')
panel_binary_score(ax3, 'guan', score, cat='none')
panel_binary_score(ax4, 'goldenson', score, ylabel=ylabel, cat='none')
panel_binary_score(ax5, 'pnnl1', score, cat='none')
panel_binary_score(ax6, 'pnnl2', score, cat='none')
ax1.set_xticks([])
ax2.set_xticks([])
ax3.set_xticks([])
ax2.set_yticks([])
ax3.set_yticks([])
ax5.set_yticks([])
ax6.set_yticks([])
ax4.set_xticks([846,1017])
ax4.set_xticklabels({'PNW','California'})
ax4.set_xlabel('HUC8 watersheds', size=13)
ax5.set_xticks([846,1017])
ax5.set_xticklabels({'PNW','California'})
ax5.set_xlabel('HUC8 watersheds', size=13)
ax6.set_xticks([846,1017])
ax6.set_xticklabels({'PNW','California'})
ax6.set_xlabel('HUC8 watersheds', size=13)
plt.tight_layout()
plt.show()
```
#### GSS/HSS map
To see which regions are more predictable based on AR occurrence.
```
def crt_MS_norm_colormap(cmapname):
full_info = {'precip3_9segs':['#ffffff', '#b5c9ff', '#7f96ff', '#0063ff', '#00c633', '#96ff00', '#ffff00', '#ffa000', '#ff1900']
}
if cmapname=='demo':
print(full_info.get('demo'))
else:
return matplotlib.colors.ListedColormap(full_info.get(cmapname))
def visualize_wUS_map_full_cus_cbar(axes, indata, location, cmap, norm, title='', method='', ylim=[26,55], hu2bdy_flag=False):
axes.pcolormesh(lons, lats, indata, cmap=cmap, norm=norm)
axes.set_xlim([-127, -100])
axes.set_ylim(ylim)
axes.add_feature(cartopy.feature.OCEAN, linewidth=0.5, facecolor='aliceblue', edgecolor='k', zorder=0)
axes.add_feature(cartopy.feature.LAND, linewidth=0.5, facecolor='none', edgecolor='k', zorder=1)
if hu2bdy_flag==True:
shpfile = rootdir+'data/ref_data/HUC/HU2_wUS_R07-R18.shp'
shape_feature = ShapelyFeature(Reader(shpfile).geometries(), ccrs.PlateCarree(),
facecolor='none', edgecolor='gray', linewidth=0.5)
axes.add_feature(shape_feature)
countries = cartopy.feature.NaturalEarthFeature(category='cultural', scale='10m', edgecolor='black', linewidth=0.25,\
facecolor='none', name='admin_0_countries')
axes.add_feature(countries, zorder=3)
gl = axes.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linestyle='-', alpha=1)
gl.xlabels_top = location[0]
gl.xlabels_bottom = location[1]
gl.ylabels_left = location[2]
gl.ylabels_right = location[3]
gl.xlocator = matplotlib.ticker.FixedLocator(np.arange(-180,-59,10))
gl.ylocator = matplotlib.ticker.FixedLocator(np.arange(0,81,5))
gl.xformatter = cartopy.mpl.gridliner.LONGITUDE_FORMATTER
gl.yformatter = cartopy.mpl.gridliner.LATITUDE_FORMATTER
axes.text(-108, 53, method, horizontalalignment='center', size=13, zorder=4)
axes.set_title(title, size=13)
totaldays = dailyP.shape[0]
score = 'GSS'
title = 'GSS (AR -> 95% daily P)'
fig5 = plt.figure(figsize=(12,8))
ax1 = plt.subplot2grid((2,10), (0,0), colspan=3, projection=ccrs.PlateCarree())
ax2 = plt.subplot2grid((2,10), (0,3), colspan=3, projection=ccrs.PlateCarree())
ax3 = plt.subplot2grid((2,10), (0,6), colspan=3, projection=ccrs.PlateCarree())
ax4 = plt.subplot2grid((2,10), (1,0), colspan=3, projection=ccrs.PlateCarree())
ax5 = plt.subplot2grid((2,10), (1,3), colspan=3, projection=ccrs.PlateCarree())
ax6 = plt.subplot2grid((2,10), (1,6), colspan=3, projection=ccrs.PlateCarree())
cmap = crt_MS_norm_colormap('precip3_9segs')
norm = matplotlib.colors.Normalize(vmin=-0.025,vmax=0.2)
method = 'rutz'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax1, plot_data, location=[False,False,True,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
method = 'gershunov'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax2, plot_data, location=[False,False,False,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
method = 'guan'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax3, plot_data, location=[False,False,False,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
method = 'goldenson'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax4, plot_data, location=[False,True,True,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
method = 'pnnl1'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax5, plot_data, location=[False,True,False,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
method = 'pnnl2'
AR_class_index, ARfeature_full, ARfeature_norm = retrieve_ARclass(method)
AR_daily_sig = sub_AR_daily_sig('whole', AR_class_index, ARfeature_full, totaldays, lag=0)
scoredata = wrap_calc_binary_score(AR_daily_sig, daily_p95P_sig, score)
scoredata[scoredata<0] = -0.01
plot_data = generate_plot_data_matrix(scoredata)
visualize_wUS_map_full_cus_cbar(ax6, plot_data, location=[False,True,False,False], cmap=cmap, norm=norm,
title=title, method=method, hu2bdy_flag=True)
print(method+' done')
cbar_axes = fig5.add_axes([0.86, 0.15, 0.02, 0.7])
cb = matplotlib.colorbar.ColorbarBase(cbar_axes, cmap=cmap, ticks=np.arange(0,1.01,1/9), orientation='vertical')
#cb.set_ticklabels(['<0', '0', '0.05', '0.1', '0.15', '0.2', '0.25', '0.3', '0.35', '0.4'])
cb.set_ticklabels(['<0', '0', '0.025', '0.05', '0.075', '0.1', '0.125', '0.15', '0.175', '0.2'])
cbar_axes.tick_params(labelsize=11)
#fig5.savefig(rootdir+'plots/misc06.map.GSS.whole.PRISM.png', dpi=600)
plt.show()
plt.close()
del(fig5)
```
## verify PRISM and WRF P
```
# all the daily data
dailyP_file = rootdir+'data/hydro_data/PRISM/PRISM.HUC8.P.1981-2015.daily.nc'
dailyP_PRISM = get_nc_data(dailyP_file, 'P')
dailyP_file = rootdir+'data/hydro_data/WRF/NARR_hist.HUC8.P.nc'
dailyP_WRF = get_nc_data(dailyP_file, 'P')
p95P_PRISM = np.percentile(dailyP_PRISM, 95,axis=0)
p95P_WRF = np.percentile(dailyP_WRF, 95, axis=0)
mean_PRISM = np.mean(dailyP_PRISM, axis=0)
mean_WRF = np.mean(dailyP_WRF, axis=0)
fig6 = plt.figure(figsize=(7,3))
ax1 = plt.subplot2grid((10,7), (0,0), rowspan=9,colspan=3)
ax1.scatter(mean_PRISM, mean_WRF, s=0.5)
ax1.plot(np.arange(11), np.arange(11), linewidth=1, linestyle='--', color='black')
ax1.set_xlim([0,10])
ax1.set_ylim([0,10])
ax1.set_xlabel('PRISM P (mm)', size=12)
ax1.set_ylabel('WRF P (mm)', size=12)
ax1.set_title('(a) mean daily P over HUC8', size=12)
ax1.text(1,8.5, r'$R^2=%.2f$' % (np.corrcoef(mean_PRISM, mean_WRF)[0,1]**2), size=14)
ax2 = plt.subplot2grid((10,7), (0,4), rowspan=9, colspan=3)
ax2.scatter(p95P_PRISM, p95P_WRF, s=0.5)
ax2.plot(np.arange(51), np.arange(51), linewidth=1, linestyle='--', color='black')
ax2.set_xlim([0,50])
ax2.set_ylim([0,50])
ax2.set_xlabel('PRISM P (mm)', size=12)
ax2.set_ylabel('WRF P (mm)', size=12)
ax2.set_title('(b) 95% daily P over HUC8', size=12)
ax2.text(5,42, r'$R^2=%.2f$' % (np.corrcoef(p95P_PRISM, p95P_WRF)[0,1]**2), size=14)
#fig6.savefig(rootdir+'plots/figS2.png', dpi=600)
plt.show()
plt.close()
del(fig6)
print(np.corrcoef(mean_PRISM, mean_WRF)[0,1])
print(np.corrcoef(p95P_PRISM, p95P_WRF)[0,1])
```
<a href="https://colab.research.google.com/github/cbadenes/notebooks/blob/main/probabilistic_topic_models/TBFY_Crosslingual_SearchAPI.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
A **cross-lingual search API** for exploring public contracts in the European Union
# Why?
[TheyBuyForYou](https://theybuyforyou.eu/) collects, annotates and organizes large amounts of public procurement data in the European Union.
Let's explore its content through a set of questions!
## What types of documents are available?
```
import requests
import pandas as pd
import matplotlib.pyplot as plt
params = {
"facet.field":"source_s",
"facet":"on",
"q":"*:*"
}
url="http://librairy.linkeddata.es/data/tbfy/select"
resp = requests.get(url=url, params=params)
data = resp.json()
data_types = data['facet_counts']['facet_fields']['source_s']
data_sources = [lang for lang in data_types if isinstance(lang, str)][:-1]
frequencies = [freq for freq in data_types if isinstance(freq, int)][:-1]
#df = pd.DataFrame(frequencies, index=data_sources, columns=['x'])
#df.plot(kind='pie', subplots=True, figsize=(5, 5))
plt.pie(frequencies, labels=data_sources,
startangle=90,
explode = (0, 0.1),
autopct = '%1.2f%%')
plt.axis('equal')
plt.title('Sources')
plt.show()
```
## What are they about?
```
!pip install wordcloud
from wordcloud import WordCloud, STOPWORDS
import requests
import matplotlib.pyplot as plt
import numpy as np
def retrieve_texts(source):
params = {
"start":"0",
"rows":"100",
"fl":"txt_t",
"q":"lang_s:en AND source_s:"+source
}
url="http://librairy.linkeddata.es/data/tbfy/select"
resp = requests.get(url=url, params=params)
data = resp.json()
texts = [txt['txt_t'] for txt in data['response']['docs'] ]
text = ' '.join(texts)
return text
plt.figure()
plt.subplot(1, 2, 1).set_title("Tender")
plt.plot()
plt.imshow(WordCloud(max_font_size=50, max_words=100, background_color="white",stopwords = set(STOPWORDS)).generate(retrieve_texts("tender")), interpolation="bilinear")
plt.axis("off")
plt.subplot(1, 2, 2).set_title("JRC")
plt.plot()
plt.imshow(WordCloud(max_font_size=50, max_words=100, background_color="white",stopwords = set(STOPWORDS)).generate(retrieve_texts("jrc")), interpolation="bilinear")
plt.axis("off")
plt.show()
```
## How many different languages are there?
```
import requests
import matplotlib.pyplot as plt
import numpy as np
params = {
"facet.field":"lang_s",
"facet":"on",
"q":"*:*"
}
url="http://librairy.linkeddata.es/data/tbfy/select"
resp = requests.get(url=url, params=params)
data = resp.json()
language_data = data['facet_counts']['facet_fields']['lang_s']
languages = [lang for lang in language_data if isinstance(lang, str)]
y_pos = np.arange(len(languages))
frequencies = [freq for freq in language_data if isinstance(freq, int)]
plt.bar(y_pos, frequencies, align='center', alpha=0.5)
plt.xticks(y_pos, languages)
plt.ylabel('#Docs')
plt.title('Languages')
plt.rcParams["figure.figsize"] = [20, 5]
```
## Are these texts long?
```
import requests
import matplotlib.pyplot as plt
import numpy as np
params = {
"facet.range":"size_i",
"facet.range.start":"0",
"facet.range.end":"3001",
"facet.range.gap":"100",
"facet":"on",
"q":"*:*"
}
url="http://librairy.linkeddata.es/data/tbfy/select"
resp = requests.get(url=url, params=params)
data = resp.json()
facet_data = data['facet_counts']['facet_ranges']['size_i']['counts']
sizes = [size for size in facet_data if isinstance(size, str)]
y_pos = np.arange(len(sizes))
frequencies = [freq for freq in facet_data if isinstance(freq, int)]
plt.bar(y_pos, frequencies, align='center', alpha=0.5)
plt.xticks(y_pos, sizes)
plt.ylabel('#Docs')
plt.title('#Characters')
plt.rcParams["figure.figsize"] = [20, 5]
```
# What does **SearchAPI** offer me?
**SearchAPI** finds *similar texts* regardless of the *language*.
So if we consider **organizations** and **contracts** rather than only texts, SearchAPI helps us **find organizations that have participated in contracts related to a given description**.
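That flow can be sketched as a small pipeline. The function names below are illustrative stand-ins for the Search-API and KG-API calls used later in this notebook, not real endpoints:

```python
# Sketch of the cross-lingual search flow: a free-text description is mapped
# to related tender documents, then to their contracting processes, then to
# the buyers. `search_api` and `kg_api` stand in for the HTTP calls shown
# further down in this notebook.
def find_buyers(text, search_api, kg_api, size=10):
    documents = search_api({"source": "tender", "size": str(size), "text": text})
    buyers = []
    for doc in documents:
        for process in kg_api("tender", doc["id"], "contractingProcess"):
            buyers.extend(kg_api("contractingProcess", process["id"], "buyer"))
    return buyers
```

The real calls are made with `requests` against the Search-API and Knowledge-Graph endpoints, as the cells below show.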
```
#@title Enter a description
text = 'Food consumption data are essential for assessing how exposed people are to potential risks in the food chain.' #@param {type:"string"}
print("reference text updated")
```
## Texts related across languages
We can use the [Search API](https://github.com/TBFY/search-API) service to search for tenders related to that procurement text:
```
import requests
def retrieve_documents(request):
    base_url = 'https://tbfy.librairy.linkeddata.es/search-api'
    resp = requests.post(base_url+'/items', json=request)
    if resp.status_code != 200:
        # This means something went wrong.
        print('POST /items/ {}'.format(resp.status_code))
        pass
    return resp.json()
def search_documents(request):
    i=1
    for document in retrieve_documents(request):
        print('[{}] {}'.format(i, document))
        i+=1
request = {"source": "tender", "size": "10", "text" : text }
search_documents(request)
```
The above search was tuned to retrieve documents describing tenders (i.e. `source=tender`) that are related to a given text (i.e. `text='procurement_text'`), returning only the top 10 results (i.e. `size=10`).
Each search result contains the document identifier, the name and a numerical score indicating how strongly it relates to the reference text (e.g. `'score': 2794.13`).
The search service can currently be filtered by the following parameters: `size`, `source`, `terms`, `name` and `lang`. **All of them can be freely combined.**
#### Filtering by language
The language parameter `lang` follows the [ISO 639-1 Code](https://www.iso.org/iso-639-language-codes.html). The service currently supports the following languages: English (`en`), Spanish (`es`), French(`fr`), Italian(`it`) and Portuguese(`pt`).
Let's now search for related documents written in Spanish:
```
request = {"source": "tender", "size": "10", "text" : text, "lang": "es" }
search_documents(request)
```
#### Filtering by source
Currently, both the tender descriptions retrieved from the TBFY Knowledge-Graph (i.e. `source=tender`) and the legal texts published in the JRC-Acquis dataset (i.e. `source=jrc`) are available through the web service.
If the `source` parameter is missing, no filtering is done:
```
request = {"size": "10", "text" : text }
search_documents(request)
```
We can search through tenders (i.e. `source=tender`):
```
request = {"source":"tender", "size": "10", "text" : text }
search_documents(request)
```
Or among European Union legal texts (i.e `source=jrc`):
```
request = {"source":"jrc", "size": "10", "text" : text }
search_documents(request)
```
In any case, the search can also be filtered by a particular language, for example French (i.e. `lang=fr`):
```
request = {"lang": "fr", "source":"jrc", "size": "10", "text" : text }
search_documents(request)
```
#### Filtering by Terms
You can also filter by documents containing certain words. Just add the `terms` parameter with the exact sequence of words you want the documents to contain. Please be careful with upper and lower case.
```
request = { "terms": "animal", "source":"tender", "size": "10", "text": text}
search_documents(request)
```
#### Filtering by Name
You can also filter by document name, or part of the name, using the `name` parameter. Again, be careful with upper and lower case.
The special character `*` matches any word in that position, so if you want the name to start with `Security` the filter should be `name=Security*`:
```
request = { "name" : "*Certification*", "source":"tender", "size": "10", "text": text}
search_documents(request)
```
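For completeness, a request matching the `Security*` prefix case described above might look like this (a sketch; `text` is redefined here only to keep the snippet self-contained):

```python
# Hypothetical request: only documents whose name starts with "Security",
# ranked by similarity to the reference text.
text = 'Food consumption data are essential for assessing how exposed people are to potential risks in the food chain.'
request = {"name": "Security*", "source": "tender", "size": "10", "text": text}
```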
## Browse the TBFY Knowledge Graph
The **ID** field returned by **SearchAPI** for each **document** corresponds to the **identifier** used by the **KG**.
Once we have the list of related documents, we can directly use their `id` and `source` attributes to read the resource in the Knowledge-Graph.
```
def retrieve_tenders(request):
    documents = retrieve_documents(request)
    tenders = []
    for document in documents:
        id = document['id']
        tender = requests.get('https://tbfy.librairy.linkeddata.es/kg-api/tender/'+id).json()
        tenders.append(tender)
    return tenders
def search_tenders(request):
    tenders = retrieve_tenders(request)
    i=1
    for tender in tenders:
        status = "unknown"
        title = "unknown"
        id = "unknown"
        if ('status' in tender):
            status = tender['status']
        if ('title' in tender):
            title = tender['title']
        if ('id' in tender):
            id = tender['id']
        print('[{}] [{}] "{} ({})"'.format(i, status, title, id))
        i+=1
print("Search for tenders in KG is ready")
```
#### Reading tenders from KG
We can easily read the `status` of the **tenders** in the KG from the **documents** found in the **Search-API**:
```
search_tenders({ "source":"tender","size": "10", "text": text})
```
We can **combine** the above **filters** to *improve* searches:
```
search_tenders({ "lang":"es", "source":"tender","size": "10", "text": text})
```
#### Reading suppliers of related tenders
```
def get_contracting_process(tender_id):
    resp = requests.get('https://tbfy.librairy.linkeddata.es/kg-api/tender/'+tender_id+"/contractingProcess")
    if resp.status_code != 200:
        # This means something went wrong.
        print('GET /tender/contracting_process {}'.format(resp.status_code))
        pass
    return resp.json()
def get_buyer(contracting_process_id):
    resp = requests.get('https://tbfy.librairy.linkeddata.es/kg-api/contractingProcess/'+contracting_process_id+"/buyer")
    if resp.status_code != 200:
        # This means something went wrong.
        print('GET /contracting_process {}'.format(resp.status_code))
        pass
    return resp.json()
def search_buyers(request):
    tenders = retrieve_tenders(request)
    i = 1
    for tender in tenders:
        if ('id' not in tender):
            continue
        contracting_processes = get_contracting_process(tender['id'])
        for contracting_process in contracting_processes:
            buyers = get_buyer(contracting_process['id'])
            print(buyers)
            for buyer in buyers:
                #print(buyer)
                if buyer.get('organisation'):
                    print('[{}] {} {}'.format(i, buyer['legalName'], buyer['organisation']))
                    i+=1
print("KG links are ready")
```
Explore the **KG** to identify the **buyers** that have participated in the **tenders** related to our *reference text*:
```
request = { "source":"tender","size": "20", "text": text}
search_buyers(request)
```
And they can be **filtered** through any of the filters seen above, for example by **language**:
```
request = { "lang":"es", "source":"tender","size": "10", "text": text}
search_buyers(request)
```
| github_jupyter |
```
Copyright 2021 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
# Logistic Regression on Epsilon Dataset
## Background
This is a synthetic dataset from the [PASCAL Large Scale Learning Challenge](https://www.k4all.org/project/large-scale-learning-challenge/). This challenge is concerned with the scalability and efficiency of existing ML approaches with respect to computational, memory or communication resources, e.g. resulting from a high algorithmic complexity, from the size or dimensionality of the data set, and from the trade-off between distributed resolution and communication costs.
## Source
In this example, we download the dataset from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.php).
## Goal
The goal of this notebook is to illustrate how Snap ML can accelerate training of a logistic regression model on this dataset.
## Code
```
cd ../../
CACHE_DIR='cache-dir'
import numpy as np
import time
from datasets import Epsilon
from sklearn.linear_model import LogisticRegression
from snapml import LogisticRegression as SnapLogisticRegression
from sklearn.metrics import roc_auc_score as score
dataset = Epsilon(cache_dir=CACHE_DIR)
X_train, X_test, y_train, y_test = dataset.get_train_test_split()
print("Number of examples: %d" % (X_train.shape[0]))
print("Number of features: %d" % (X_train.shape[1]))
print("Number of classes: %d" % (len(np.unique(y_train))))
model = LogisticRegression(fit_intercept=False, n_jobs=4)
t0 = time.time()
model.fit(X_train, y_train)
t_fit_sklearn = time.time()-t0
score_sklearn = score(y_test, model.predict_proba(X_test)[:,1])
print("Training time (sklearn): %6.2f seconds" % (t_fit_sklearn))
print("ROC AUC score (sklearn): %.4f" % (score_sklearn))
model = SnapLogisticRegression(fit_intercept=False, n_jobs=4)
t0 = time.time()
model.fit(X_train, y_train)
t_fit_snapml = time.time()-t0
score_snapml = score(y_test, model.predict_proba(X_test)[:,1])
print("Training time (snapml): %6.2f seconds" % (t_fit_snapml))
print("ROC AUC score (snapml): %.4f" % (score_snapml))
speed_up = t_fit_sklearn/t_fit_snapml
score_diff = (score_snapml-score_sklearn)/score_sklearn
print("Speed-up: %.1f x" % (speed_up))
print("Relative diff. in score: %.4f" % (score_diff))
```
## Disclaimer
Performance results always depend on the hardware and software environment.
Information regarding the environment that was used to run this notebook is provided below:
```
import utils
environment = utils.get_environment()
for k,v in environment.items():
    print("%15s: %s" % (k, v))
```
## Record Statistics
Finally, we record the environment and performance statistics for analysis outside of this standalone notebook.
```
import scrapbook as sb
sb.glue("result", {
'dataset': dataset.name,
'n_examples_train': X_train.shape[0],
'n_examples_test': X_test.shape[0],
'n_features': X_train.shape[1],
'n_classes': len(np.unique(y_train)),
'model': type(model).__name__,
'score': score.__name__,
't_fit_sklearn': t_fit_sklearn,
'score_sklearn': score_sklearn,
't_fit_snapml': t_fit_snapml,
'score_snapml': score_snapml,
'score_diff': score_diff,
'speed_up': speed_up,
**environment,
})
```
| github_jupyter |
# Trax : Ungraded Lecture Notebook
In this notebook you'll get to know about the Trax framework and learn about some of its basic building blocks.
## Background
### Why Trax and not TensorFlow or PyTorch?
TensorFlow and PyTorch are both extensive frameworks that can do almost anything in deep learning. They offer a lot of flexibility, but that often means verbosity of syntax and extra time to code.
Trax is much more concise. It runs on a TensorFlow backend but allows you to train models with one-line commands. Trax also runs end to end, allowing you to get data, build a model and train it all with a few terse statements. This means you can focus on learning, instead of spending hours on the idiosyncrasies of a big framework's implementation.
### Why not Keras then?
Keras has been part of TensorFlow itself since version 2.0. However, Trax is well suited to implementing new state-of-the-art algorithms such as Transformers, Reformers and BERT, because it is actively maintained by the Google Brain team for advanced deep learning tasks. It also runs smoothly on CPUs, GPUs and TPUs with comparatively few code modifications.
### How to Code in Trax
Building models in Trax relies on two key concepts: **layers** and **combinators**.
Trax layers are simple objects that process data and perform computations. They can be chained together into composite layers using Trax combinators, allowing you to build layers and models of any complexity.
### Trax, JAX, TensorFlow and Tensor2Tensor
You already know that Trax uses TensorFlow as a backend, but it also uses the JAX library to speed up computation. You can view JAX as an enhanced and optimized version of numpy.
**Watch out for assignments that use `import trax.fastmath.numpy as np`. If you see this line, remember that when calling `np` you are really calling Trax's version of numpy that is compatible with JAX.**
As a result of this, where you used to encounter the type `numpy.ndarray` now you will find the type `jax.interpreters.xla.DeviceArray`.
Tensor2Tensor is another name you might have heard. It started as an end to end solution much like how Trax is designed, but it grew unwieldy and complicated. So you can view Trax as the new improved version that operates much faster and simpler.
### Resources
- Trax source code can be found on Github: [Trax](https://github.com/google/trax)
- JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html)
## Installing Trax
Trax depends on JAX and a few other libraries that are not yet supported on [Windows](https://github.com/google/jax/blob/1bc5896ee4eab5d7bb4ec6f161d8b2abb30557be/README.md#installation) but work well on Ubuntu and macOS. If you are working on Windows, we suggest installing Trax on WSL2.
Official maintained documentation - [trax-ml](https://trax-ml.readthedocs.io/en/latest/) not to be confused with this [TraX](https://trax.readthedocs.io/en/latest/index.html)
```
#!pip install trax==1.3.1  # use this version for this notebook
```
## Imports
```
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
# Trax version 1.3.1 or better
!pip list | grep trax
```
## Layers
Layers are the core building blocks in Trax; as mentioned in the lectures, they are the base classes.
They take inputs, compute functions/custom calculations and return outputs.
You can also inspect layer properties. Let me show you some examples.
### Relu Layer
First I'll show you how to build a relu activation function as a layer. A layer like this is one of the simplest types. Notice there is no object initialization so it works just like a math function.
**Note: Activation functions are also layers in Trax, which might look odd if you have been using other frameworks for a longer time.**
```
# Layers
# Create a relu trax layer
relu = tl.Relu()
# Inspect properties
print("-- Properties --")
print("name :", relu.name)
print("expected inputs :", relu.n_in)
print("promised outputs :", relu.n_out, "\n")
# Inputs
x = np.array([-2, -1, 0, 1, 2])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = relu(x)
print("-- Outputs --")
print("y :", y)
```
### Concatenate Layer
Now I'll show you how to build a layer that takes 2 inputs. Notice the change in the expected inputs property from 1 to 2.
```
# Create a concatenate trax layer
concat = tl.Concatenate()
print("-- Properties --")
print("name :", concat.name)
print("expected inputs :", concat.n_in)
print("promised outputs :", concat.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2, "\n")
# Outputs
y = concat([x1, x2])
print("-- Outputs --")
print("y :", y)
```
## Layers are Configurable
You can change the default settings of layers. For example, you can change the expected inputs for a concatenate layer from 2 to 3 using the optional parameter `n_items`.
```
# Configure a concatenate layer
concat_3 = tl.Concatenate(n_items=3) # configure the layer's expected inputs
print("-- Properties --")
print("name :", concat_3.name)
print("expected inputs :", concat_3.n_in)
print("promised outputs :", concat_3.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
x3 = x2 * 0.99
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2)
print("x3 :", x3, "\n")
# Outputs
y = concat_3([x1, x2, x3])
print("-- Outputs --")
print("y :", y)
```
**Note: At any point, if you want to look up a function, refer to the [documentation](https://trax-ml.readthedocs.io/en/latest/) or use the `help` function.**
```
#help(tl.Concatenate) #Uncomment this to see the function docstring with explanation
```
## Layers can have Weights
Some layer types include mutable weights and biases that are used in computation and training. Layers of this type require initialization before use.
For example the `LayerNorm` layer calculates normalized data, that is also scaled by weights and biases. During initialization you pass the data shape and data type of the inputs, so the layer can initialize compatible arrays of weights and biases.
```
# Uncomment any of them to see information regarding the function
help(tl.LayerNorm)
# help(shapes.signature)
# Layer initialization
norm = tl.LayerNorm()
# You first must know what the input data will look like
x = np.array([0, 1, 2, 3], dtype="float")
# Use the input data signature to get shape and type for initializing weights and biases
norm.init(shapes.signature(x)) # We need to convert the input datatype from usual tuple to trax ShapeDtype
print("Normal shape:",x.shape, "Data Type:",type(x.shape))
print("Shapes Trax:",shapes.signature(x),"Data Type:",type(shapes.signature(x)))
# Inspect properties
print("-- Properties --")
print("name :", norm.name)
print("expected inputs :", norm.n_in)
print("promised outputs :", norm.n_out)
# Weights and biases
print("weights :", norm.weights[0])
print("biases :", norm.weights[1], "\n")
# Inputs
print("-- Inputs --")
print("x :", x)
# Outputs
y = norm(x)
print("-- Outputs --")
print("y :", y)
```
## Custom Layers
This is where things start getting more interesting!
You can create your own custom layers too and define custom functions for computations by using `tl.Fn`. Let me show you how.
```
help(tl.Fn)
# Define a custom layer
# In this example you will create a layer to calculate the input times 2
def TimesTwo():
    layer_name = "TimesTwo"  # don't forget to give your custom layer a name to identify it
    # Custom function for the custom layer
    def func(x):
        return x * 2
    return tl.Fn(layer_name, func)
# Test it
times_two = TimesTwo()
# Inspect properties
print("-- Properties --")
print("name :", times_two.name)
print("expected inputs :", times_two.n_in)
print("promised outputs :", times_two.n_out, "\n")
# Inputs
x = np.array([1, 2, 3])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = times_two(x)
print("-- Outputs --")
print("y :", y)
```
## Combinators
You can combine layers to build more complex layers. Trax provides a set of objects named combinator layers to make this happen. Combinators are themselves layers, so they compose like any other layer.
### Serial Combinator
This is the most common combinator and the easiest to use. For example, you could build a simple neural network by combining layers into a single layer using the `Serial` combinator. This new layer then acts just like a single layer, so you can inspect inputs, outputs and weights. You can even combine it into another layer! Combinators can then be used as trainable models. _Try adding more layers_
**Note: As you may have guessed, since there is a serial combinator, there is a parallel combinator as well. Do explore combinators and other layers in the Trax documentation, and look at the repo to understand how these layers are written.**
```
# help(tl.Serial)
# help(tl.Parallel)
# Serial combinator
serial = tl.Serial(
tl.LayerNorm(), # normalize input
tl.Relu(), # convert negative values to zero
times_two, # the custom layer you created above, multiplies the input received from above by 2
### START CODE HERE
# tl.Dense(n_units=2), # try adding more layers. eg uncomment these lines
# tl.Dense(n_units=1), # Binary classification, maybe? uncomment at your own peril
# tl.LogSoftmax() # Yes, LogSoftmax is also a layer
### END CODE HERE
)
# Initialization
x = np.array([-2, -1, 0, 1, 2]) #input
serial.init(shapes.signature(x)) #initialising serial instance
print("-- Serial Model --")
print(serial,"\n")
print("-- Properties --")
print("name :", serial.name)
print("sublayers :", serial.sublayers)
print("expected inputs :", serial.n_in)
print("promised outputs :", serial.n_out)
print("weights & biases:", serial.weights, "\n")
# Inputs
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = serial(x)
print("-- Outputs --")
print("y :", y)
```
## JAX
Just remember to look out for which numpy you are using, the regular ol' numpy or Trax's JAX-compatible numpy. Both tend to use the alias np, so watch those import blocks.
**Note: There are certain things which are still not possible in fastmath.numpy but can be done in numpy, so in the assignments you will see us switch between them to get our work done.**
```
# Numpy vs fastmath.numpy have different data types
# Regular ol' numpy
x_numpy = np.array([1, 2, 3])
print("good old numpy : ", type(x_numpy), "\n")
# Fastmath and jax numpy
x_jax = fastmath.numpy.array([1, 2, 3])
print("jax trax numpy : ", type(x_jax))
```
## Summary
Trax is a concise framework, built on TensorFlow, for end-to-end machine learning. The key building blocks are layers and combinators. This notebook is just a taste, but it sets you up with some key intuitions to take forward into the rest of the course and the assignments, where you will build end-to-end models.
| github_jupyter |
## Today's example walks you through uncovering the relationships between variables
```
# library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import math
import statistics
import seaborn as sns
from IPython.display import display
import sklearn
print(sklearn.__version__)
#if you only have 0.19, remember to update to the latest version
%matplotlib inline
```
## Generate a dataset
```
#create a dataset from a dictionary
data={'sex': ['Male','Male','Male','Male','Male','Female','Female','Female','Female','Female','Male','Male','Male','Male','Male','Female','Female','Female','Female','Female'],
'insomnia':['Y','N','N','N','N','N','Y','Y','Y','N','Y','N','N','N','N','N','Y','Y','Y','N'],
'age':[23,40,5,30,1,40,16,27,43,8,23,39,5,29,1,42,13,29,41,10],
'height':[180,170,100,176,70,160,170,166,155,35,170,168,101,175,72,163,169,163,151,40],
'weight':[100,68,20,70,10,45,50,58,58,17,101,65,22,79,12,40,53,52,56,14]}
#convert to DataFrame format
data=pd.DataFrame(data)
display(data)
print(data.info())
cat_features = []
for dtype, feature in zip(data.dtypes, data.columns):
    if dtype == 'object':
        cat_features.append(feature)
print(f'{len(cat_features)} category Features : {cat_features}\n')
```
## Continuous vs. Continuous
This example uses the Pearson correlation coefficient to examine the relationship between height and weight.
* The Pearson correlation coefficient
describes the correlation between two continuous variables
* Syntax: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html
```
# pearsonr returns two values; we only need the first, the correlation coefficient
corr, _=stats.pearsonr(data['height'], data['weight'])
print(corr)
#indicates that height and weight are highly linearly correlated
g = sns.regplot(x="height", y="weight", color="g",data=data)
#height and weight are clearly related
```
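As a quick sanity check, Pearson's r can also be computed directly from its definition, cov(x, y) / (σx · σy), with plain NumPy (the data below is made up for illustration):

```python
import numpy as np

# Pearson's r from its definition: the sample covariance divided by the
# product of the sample standard deviations.
x = np.array([150.0, 160.0, 170.0, 180.0, 190.0])   # heights
y = np.array([45.0, 55.0, 62.0, 75.0, 88.0])        # weights
r_manual = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
r_builtin = np.corrcoef(x, y)[0, 1]
print(r_manual, r_builtin)  # the two values agree
```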
## Discrete vs. Discrete
This example uses Cramér's V to examine the relationship between insomnia status and sex.
```
#if you haven't installed it yet, uncomment the next line to install the package first
#!pip install researchpy
import researchpy
# https://researchpy.readthedocs.io/
```
## Step 1: organize the two categorical variables into a contingency table
```
contTable = pd.crosstab(data['sex'], data['insomnia'])
contTable
```
## Step 2: compute the degrees of freedom df
```
df = min(contTable.shape[0], contTable.shape[1]) - 1
df
```
## Step 3: use the researchpy package to compute the Cramér's V coefficient
```
crosstab, res = researchpy.crosstab(data['sex'], data['insomnia'], test='chi-square')
#print(res)
print("Cramer's value is",res.loc[2,'results'])
#here we test independence with a chi-square test, so the "test =" argument is set to 'chi-square'
# the module picks the statistic automatically from the data: Cramér's Phi if it is a 2x2 table, or Cramér's V if larger than 2x2
## helper function to judge the strength of the association
def judgment_CramerV(df,V):
    if df == 1:
        if V < 0.10:
            qual = 'negligible'
        elif V < 0.30:
            qual = 'small'
        elif V < 0.50:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 2:
        if V < 0.07:
            qual = 'negligible'
        elif V < 0.21:
            qual = 'small'
        elif V < 0.35:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 3:
        if V < 0.06:
            qual = 'negligible'
        elif V < 0.17:
            qual = 'small'
        elif V < 0.29:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 4:
        if V < 0.05:
            qual = 'negligible'
        elif V < 0.15:
            qual = 'small'
        elif V < 0.25:
            qual = 'medium'
        else:
            qual = 'large'
    else:
        if V < 0.05:
            qual = 'negligible'
        elif V < 0.13:
            qual = 'small'
        elif V < 0.22:
            qual = 'medium'
        else:
            qual = 'large'
    return(qual)
judgment_CramerV(df,res.loc[2,'results'])
```
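For reference, Cramér's V can also be computed directly from the chi-square statistic as V = sqrt(χ² / (n · min(r−1, c−1))). A minimal sketch with SciPy, using a toy perfectly associated 2x2 table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Cramér's V from its definition. correction=False disables the Yates
# continuity correction so that a perfectly associated 2x2 table gives V = 1.
table = np.array([[10, 0],
                  [0, 10]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
min_dim = min(table.shape) - 1
V = np.sqrt(chi2 / (n * min_dim))
print(V)  # 1.0 for a perfectly associated table
```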
### In this example, insomnia status and sex show a medium association
## Observing with a plot
```
g= sns.countplot(x="sex", hue="insomnia", data=data)
```
## Discrete vs. Continuous: Eta Squared (η²)
This example uses Eta Squared to examine the relationship between insomnia status and weight.
```
#if you haven't installed it yet, uncomment the next line to install the package first
#!pip install pingouin
import pingouin as pg
```
### Step 1: take the insomnia and weight data
### Step 2: use pg.anova to compute the variance components
```
aov = pg.anova(dv='weight', between='insomnia', data=data, detailed=True)
aov
```
### Step 3: convert the variance components to obtain Eta Squared (η²)
```
etaSq = aov.SS[0] / (aov.SS[0] + aov.SS[1])
etaSq
def judgment_etaSq(etaSq):
    if etaSq < .01:
        qual = 'Negligible'
    elif etaSq < .06:
        qual = 'Small'
    elif etaSq < .14:
        qual = 'Medium'
    else:
        qual = 'Large'
    return(qual)
judgment_etaSq(etaSq)
```
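As a cross-check, eta squared can be computed directly from its definition, SS_between / (SS_between + SS_within), with plain NumPy (two made-up groups below):

```python
import numpy as np

# Eta squared from its definition: the between-group sum of squares
# divided by the total sum of squares.
g1 = np.array([40.0, 45.0, 50.0, 58.0])    # e.g. weights without insomnia
g2 = np.array([70.0, 79.0, 90.0, 101.0])   # e.g. weights with insomnia
grand = np.concatenate([g1, g2]).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in (g1, g2))
ss_within = sum(((g - g.mean()) ** 2).sum() for g in (g1, g2))
eta_sq = ss_between / (ss_between + ss_within)
print(eta_sq)
```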
### Inspecting with a plot
* a violin plot is used here
```
g = sns.catplot(x="insomnia", y="weight", hue="insomnia",
data=data, kind="violin")
```
### Conclusion: weight is strongly related to insomnia status; those with insomnia weigh more than those without.
| github_jupyter |
```
# Use in the google colab to connect the google cloud in order to get the dataset
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# Use in google colab in order to get information about GPU
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
    print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
    print('and then re-execute this cell.')
else:
    print(gpu_info)
# Use in google colab in order to get the root path
!mkdir -p drive
!google-drive-ocamlfuse -o nonempty drive
import os
import sys
os.chdir('drive')
import nltk
from functools import lru_cache
import re
import string
import numpy as np
import pandas as pd
import torch
from collections import Counter, defaultdict
from sklearn.preprocessing import OneHotEncoder
import torch.utils.data as Data
import torch.nn.functional as F
import torch
import torch.nn as nn
from torch.autograd import Variable
import os
import glob
from sklearn.utils import shuffle
import copy
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f'There are {torch.cuda.device_count()} GPU(s) available.')
    print('Device name:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")
class Modifiers:
    def __init__(self):
        C_INCR = 0.733
        B_INCR = 0.293
        B_DECR = -0.293
        N_SCALAR = -0.74
        self.NEGATE = \
["aint", "arent", "cannot", "cant", "couldnt", "darent", "didnt", "doesnt",
"ain't", "aren't", "can't", "couldn't", "daren't", "didn't", "doesn't",
"dont", "hadnt", "hasnt", "havent", "isnt", "mightnt", "mustnt", "neither",
"don't", "hadn't", "hasn't", "haven't", "isn't", "mightn't", "mustn't",
"neednt", "needn't", "never", "none", "nope", "nor", "not", "nothing", "nowhere",
"oughtnt", "shant", "shouldnt", "uhuh", "wasnt", "werent",
"oughtn't", "shan't", "shouldn't", "uh-uh", "wasn't", "weren't",
"without", "wont", "wouldnt", "won't", "wouldn't", "rarely", "seldom", "despite"]
        # booster/dampener 'intensifiers' or 'degree adverbs'
        #
        self.BOOSTER_DICT = \
{"absolutely": B_INCR, "amazingly": B_INCR, "awfully": B_INCR, "completely": B_INCR, "considerably": B_INCR,
"decidedly": B_INCR, "deeply": B_INCR, "effing": B_INCR, "enormously": B_INCR,
"entirely": B_INCR, "especially": B_INCR, "exceptionally": B_INCR, "extremely": B_INCR,
"fabulously": B_INCR, "flipping": B_INCR, "flippin": B_INCR,
"fricking": B_INCR, "frickin": B_INCR, "frigging": B_INCR, "friggin": B_INCR, "fully": B_INCR,
"fucking": B_INCR,
"greatly": B_INCR, "hella": B_INCR, "highly": B_INCR, "hugely": B_INCR, "incredibly": B_INCR,
"intensely": B_INCR, "majorly": B_INCR, "more": B_INCR, "most": B_INCR, "particularly": B_INCR,
"purely": B_INCR, "quite": B_INCR, "really": B_INCR, "remarkably": B_INCR,
"so": B_INCR, "substantially": B_INCR,
"thoroughly": B_INCR, "totally": B_INCR, "tremendously": B_INCR,
"uber": B_INCR, "unbelievably": B_INCR, "unusually": B_INCR, "utterly": B_INCR,
"very": B_INCR,
"almost": B_DECR, "barely": B_DECR, "hardly": B_DECR, "just enough": B_DECR,
"kind of": B_DECR, "kinda": B_DECR, "kindof": B_DECR, "kind-of": B_DECR,
"less": B_DECR, "little": B_DECR, "marginally": B_DECR, "occasionally": B_DECR, "partly": B_DECR,
"scarcely": B_DECR, "slightly": B_DECR, "somewhat": B_DECR,
"sort of": B_DECR, "sorta": B_DECR, "sortof": B_DECR, "sort-of": B_DECR}
class Preprocessor:
    def clean_text(self, text):
        '''Make text lowercase, remove bracketed text, remove links, remove punctuation and remove numbers.'''
        # make text lowercase
        text1 = text.lower()
        # remove square brackets
        text1 = re.sub(r'\[.*?\]', '', text1)
        # remove <>
        text1 = re.sub(r'<.*?>+', '', text1)
        text1 = re.sub(r'\(.*?\)', ' ', text1)
        text1 = re.sub(r'\{.*?\}', ' ', text1)
        # remove links
        text1 = re.sub(r'https?://\S+|www\.\S+', ' ', text1)
        # remove punctuation
        text1 = re.sub('[%s]' % re.escape(string.punctuation), '', text1)
        # remove \n
        # text1 = re.sub('\n', '', text1)
        # remove numbers
        text1 = re.sub(r'\w*\d\w*', '', text1)
        return text1
    def __init__(self):
        self.stem = lru_cache(maxsize=10000)(nltk.stem.SnowballStemmer('english').stem)
        # self.stopwords = stopwords.words('english')
        self.tokenize = nltk.tokenize.TreebankWordTokenizer().tokenize
    def __call__(self, text):
        text1 = self.clean_text(text)
        tokens = self.tokenize(text1)
        # tokens = [token for token in tokens if token not in self.stopwords]
        # tokens = [self.stem(token) for token in tokens]
        return tokens
class DataLoader:
def __init__(self, data, batch_size, k, context_window = 1, preprocessor=Preprocessor(), enc_tokens = OneHotEncoder(), enc_modifiers = OneHotEncoder(), modifier = Modifiers(), dev = 'cpu'):
self.df = data
self.dev = dev
self.batch_size = batch_size
self.context_window = context_window
self.modifiers = modifier.BOOSTER_DICT.keys()
self.enc_modifiers = enc_modifiers
# self.vocab_frequency = 5
self.padding_length = 70
self.fold_num = 10
# self.first_sentence = 128
# self.second_sentence = self.padding_length - self.first_sentence
self.count = 0
self.apply_preprocessor(preprocessor)
enc_tokens.fit(self.vocab)
self.enc_modifiers = enc_modifiers
self.enc_modifiers.fit(self.modifier_id_to_fit)
self.split(0.9, 0.1)
train_num = len(self.train)
if train_num % self.batch_size == 0:
self.no_batch = train_num // self.batch_size
else:
self.no_batch = int(train_num / self.batch_size) + 1
trainX, trainY = self.build_training_data(enc_tokens, self.train)
testX, testY = self.build_training_data(enc_tokens, self.test)
print(f'Train Dataset Shape : {trainX.shape}')
print(f'Test Dataset Shape : {testX.shape}')
print(f'Next Modifier : {self.count}')
self.train_dataset = self.get_batch(trainX, trainY)
self.test_dataset = self.get_batch(testX, testY)
def get_batch(self, X, Y):
dataset = Data.TensorDataset(X, Y)
loader = Data.DataLoader(
dataset=dataset,
batch_size=self.batch_size,
shuffle=True,
# num_workers=2,
)
return loader
def remove_low_frequency(self, tokens):
new_list = []
for x in tokens:
if x in self.token_to_count:
new_list.append(x)
return new_list
def apply_preprocessor(self, preprocessor):
self.df['tokens'] = [preprocessor(s) for s in self.df['sentence']]
self.df['tokens'] = [x[:self.padding_length] if len(x) > self.padding_length else x for x in self.df['tokens']]
# for index, row in self.df.iterrows():
# if len(row['tokens']) > self.max_length:
# row[toke]
######## Changed part for negatives ###########
self.token_to_count = Counter([x for l in self.df['tokens'] for x in l if x not in self.modifiers])
self.modifiers_to_count = Counter([x for l in self.df['tokens'] for x in l if x in self.modifiers])
############## End #################
# tmp_token_to_count = self.token_to_count.copy()
# for index, value in tmp_token_to_count.items():
# if value <= self.vocab_frequency:
# self.token_to_count.pop(index)
# self.df['tokens'] = [self.remove_low_frequency(x) for x in self.df['tokens']]
self.max_length = self.get_max_length()
print(f'Max Length : {self.max_length}')
######## Changed part for negatives ###########
self.vocab = list([[term] for term in self.token_to_count.keys()])
self.modifier_vocab = list([[term] for term in self.modifiers_to_count.keys()])
self.modifier_token_to_id = {self.modifier_vocab[i][0]: i + 1 for i in range(len(self.modifier_vocab))}
self.modifier_id_to_fit = list([[term] for term in self.modifier_token_to_id.values()])
############## End #################
print(f'Vocab Size : {len(self.vocab)}')
print(f'Modifier Size : {len(self.modifier_vocab)}')
# print(self.token_to_count)
def get_max_length(self):
max_length = 0
for index, row in self.df.iterrows():
######## Changed part for negatives ###########
token_list = [x for x in row['tokens'] if x not in self.modifiers]
############## End #################
tmp_length = len(token_list)
if tmp_length > max_length:
max_length = tmp_length
return max_length
def k_fold_partition(self, fold_num = 10):
batch_size = int(len(self.train) / fold_num) # the number of data for each fold
remain_num = len(self.train) - batch_size * fold_num # the remain data after partition
self.fold_data = []
fold_batch_list = [] # Average the remaining data to the folds
for fold in range(fold_num):
if remain_num > 0:
remain_num -= 1
fold_batch_list.append(batch_size + 1)
else:
fold_batch_list.append(batch_size)
fold_index = 0 # The starting position of each division data
for fold in range(fold_num):
fold_texts = [fold_index, fold_index + fold_batch_list[fold]]
self.fold_data.append(fold_texts)
# print(f'fold_data : {fold_batch_list[fold]}')
# print(f'fold_data type : {type(fold_batch_list[fold])}')
# print(f'index type : {type(fold_index)}')
fold_index = fold_index + fold_batch_list[fold]
def k_fold_split(self, k):
print(f'K : {k}, fold_data[k] : {self.fold_data[k]}')
validation = self.train.iloc[self.fold_data[k][0]: self.fold_data[k][1]]
train = pd.concat([self.train.iloc[0: self.fold_data[k][0]], self.train.iloc[self.fold_data[k][1]:]])
train_num = len(self.train) - (self.fold_data[k][1] - self.fold_data[k][0])
if train_num % self.batch_size == 0:
self.no_batch = train_num // self.batch_size
else:
self.no_batch = int(train_num / self.batch_size) + 1
print(f' index : {self.no_batch}')
return train, validation
def split(self, train, test):
index = int(train * len(self.df))
self.train = self.df.iloc[0:index]
self.test = self.df.iloc[index:]
# if self.train_index % self.batch_size == 0:
# self.no_batch = self.train_index / self.batch_size
# else:
# self.no_batch = int(self.train_index / self.batch_size) + 1
# print(f' index : {self.no_batch}')
def build_training_data(self, enc, df):
X = []
Y = []
for index, row in df.iterrows():
# build modifiers
np_modifier = np.zeros(len(row['tokens']))
for index, value in enumerate(row['tokens']):
# label the word influenced by modifiers
if value in self.modifiers:
id = self.modifier_token_to_id[value]
np_modifier[index] = -1
for i in range(1, self.context_window + 1):
if index - i > 0:
np_modifier[index - i] = id
if row['tokens'][index - i] in self.modifiers:
self.count += 1
if index + i < len(row['tokens']):
np_modifier[index + i] = id
if row['tokens'][index + i] in self.modifiers:
self.count += 1
# np_modifier[index - self.context_window] = id
# np_modifier[index] = -1
# if (index + self.context_window) < len(row['tokens']):
# np_modifier[index + self.context_window] = id
######## Changed part for negatives ###########
# delete modifier itself
deleted_index = [i for i in range(len(np_modifier)) if np_modifier[i] == -1]
np_modifier = np.delete(np_modifier, deleted_index)
tmp = [enc.transform([[t]]).toarray()[0] for t in row['tokens'] if t not in self.modifiers]
for i in range(len(tmp)):
tmp[i] = np.append(tmp[i], np_modifier[i])
if len(tmp) < self.max_length:
pad_length = self.max_length - len(tmp)
for i in range(pad_length):
tmp.append(np.append(np.zeros(len(self.vocab)), -1))
############## End #################
X.append(tmp)
Y.append(row['label'])
X = np.array(X)
Y = np.array(Y)
X = torch.from_numpy(X).float()
Y = torch.from_numpy(Y).long()
print(f'Input Data Shape (sequence_num, sequence_len, vocab_size + 1) : {X.shape}')
print(f'Input Label Shape : {Y.shape}')
return X, Y
class RelexNet(nn.Module):
# RelexNet: an additive linear sequence classifier with modifier gating.
# vocabulary_size is the one-hot token dimension, modifier_size the number of
# booster/diminisher words, and num_classes the number of output categories.
def __init__(self, vocabulary_size, modifier_size, len_seq, num_classes, enc):
super(RelexNet, self).__init__()
self.no_modifiers = np.zeros(modifier_size)
self.num_classes = num_classes
self.L = nn.Linear(vocabulary_size, num_classes)
self.dropout = nn.Dropout(p=0.5)
self.M = nn.Linear(modifier_size, 1).cuda()
# self.B = nn.Linear(len_seq, len_seq)
self.enc = enc
self.softmax = nn.Softmax(dim=1)
def handle_modifier(self, modifier_ids, batch):
# modifier_id = modifier_ids.cpu()
modifier = []
influence_flag = []
add = 0.5 * Variable(torch.ones(batch, 1))
for i in range(batch):
if modifier_ids[i] == 0:
modifier.append(self.no_modifiers)
influence_flag.append(0)
add[i] = 1
elif modifier_ids[i] == -1:
modifier.append(self.no_modifiers)
influence_flag.append(0)
add[i] = 0
else:
# print(modifier_ids[i])
modifier.append(self.enc.transform([[modifier_ids[i].cpu()]]).toarray()[0])
influence_flag.append(1)
modifier = torch.tensor(modifier).float().cuda()
influence_flag = torch.tensor(influence_flag).float()
influence_flag = influence_flag.reshape(-1, 1).cuda()
add = add.cuda()
return modifier, influence_flag, add
def forward(self, input, B_layer):
# X.shape = (batch, seq_len, vocab_size)
T = input.shape[1]
batch = input.shape[0]
predict_y = Variable(torch.zeros(batch, self.num_classes))
if B_layer is None:
B_layer = Variable(torch.zeros(batch, self.num_classes)).cuda()
for t in range(T):
tmp = input[:, t, :-1]
modifier_ids = input[:, t, -1]
modifier, influence_flag, add = self.handle_modifier(modifier_ids, batch)
L_onestep = self.L(tmp)
L_onestep = self.dropout(L_onestep)
# L_onestep = torch.sigmoid(L_onestep)
# L_onestep = MyReLU_PLUS.apply(L_onestep)
L_onestep = F.relu6(L_onestep)
modifier_score = torch.sigmoid(self.M(modifier))
modifier_score = modifier_score * influence_flag
modifier_score = modifier_score + add
# L_onestep = L_onestep + modifier_score
L_onestep = L_onestep * modifier_score
# L_onestep = L_onestep * negative_score
B_layer = torch.add(B_layer, L_onestep)
# print(B_layer)
if self.num_classes == 1:
predict_y = torch.sigmoid(B_layer)
else:
predict_y = self.softmax(B_layer)
return predict_y, B_layer
def train_model(dataset, model, optimizer, scheduler, num_epochs, dev):
losses = []
for epoch in range(num_epochs):
# training mode
# dataset.set_partition(dataset.train)
model.train()
total_train_loss = 0
total_train_correct = 0
count = 0
for x, y in dataset.train_dataset:
# for every batch in the training dataset perform one update step of the optimizer.
state = None
model.zero_grad()
y_h, state = model(x.to(dev), state)
loss = F.cross_entropy(y_h, y.to(dev))
optimizer.zero_grad()
# scheduler.zero_grad()
loss.backward()
optimizer.step()
# print('{} optim: {}'.format(epoch, optimizer.param_groups[0]['lr']))
scheduler.step()
# print('{} scheduler: {}'.format(epoch, scheduler.get_lr()[0]))
total_train_loss += loss.item()
total_train_correct += (y_h.argmax(-1) == y.cuda()).float().mean()
count += 1
average_train_loss = total_train_loss / count
average_train_accuracy = total_train_correct / count
print('{} optim: {}'.format(epoch + 1, optimizer.param_groups[0]['lr']))
# print('{} optim: {}'.format(epoch, optimizer.param_groups[0]['lr']))
# print('{} scheduler: {}'.format(epoch, scheduler.get_lr()[0]))
losses.append(average_train_loss)
print(f'epoch {epoch + 1} accuracies: \t train: {average_train_accuracy}\t loss: {average_train_loss}\t')
# print(f'epoch {epoch + 1} accuracies: \t train: {average_train_accuracy}\t valid: {average_valid_accuracy}\t valid loss: {average_valid_loss}\t precision: {average_precision}\t recall: {average_recall}\t')
# dataset.shuffle()
# test mode
# dataset.set_partition(dataset.test)
model.eval()
total_test_correct = 0
# total_test_tp = 0
# total_test_fp = 0
# total_test_fn = 0
count = 0
for x, y in dataset.test_dataset:
state = None
y_h, state = model(x.to(dev), state)
total_test_correct += (y_h.argmax(-1) == y.cuda()).float().mean()
# total_test_tp += float((y_h.argmax(-1) == y.cuda() & (y.cuda() == 1)).float())
# total_test_fp += float((y_h.argmax(-1) != y.cuda() & (y.cuda() == 0)).float())
# total_test_fn += float((y_h.argmax(-1) != y.cuda() & (y.cuda() == 1)).float())
count += 1
average_test_accuracy = total_test_correct / count
# average_precision = total_test_tp / (total_test_tp + total_test_fp)
# average_recall = total_test_tp / (total_test_tp + total_test_fn)
# print(f'test accuracy {average_test_accuracy} precision {average_precision} recall {average_recall}')
print(f'test accuracy {average_test_accuracy}')
return losses, (average_train_accuracy, average_test_accuracy)
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
k = 9
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
all_files = glob.glob("data/sentiment/*.csv")
# all_files = glob.glob("*.csv")
data = pd.concat((pd.read_csv(f, header=None, index_col=None) for f in all_files))
# path = os.path.join('data', 'sentiment', 'amazon_cells_labelled.csv')
# data = pd.read_csv(path)
data.columns = ['sentence', 'label']
data = data.sample(frac=1, random_state=1)
# data = shuffle(data)
dataset = DataLoader(data, k=k, batch_size = batch_size, context_window=context_window, dev=dev)
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
# model = FastText(len(dataset.token_to_id)+2, num_hidden, len(dataset.class_to_id)).to(dev)
model = RelexNet(vocabulary_size=vocabulary_size, enc=dataset.enc_modifiers, modifier_size=len(dataset.modifier_vocab),len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[8 * dataset.no_batch, 14 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
print(losses)
# torch.save(model, os.path.join('./relexnet_model/classifier_M9.pth'))
```
# DAIN Colab
*DAIN Colab, v1.7.1*
Based on the [original Colab file](https://github.com/baowenbo/DAIN/issues/44) by btahir.
Enhancements by [Styler00Dollar](https://github.com/styler00dollar), [Alpha](https://github.com/AlphaGit) and [JamesCullum](https://github.com/JamesCullum).
[Styler00Dollar's fork](https://github.com/styler00dollar/DAIN) / [Alpha's fork](https://github.com/AlphaGit/DAIN) / [Edgars' fork](https://github.com/edgarsi/DAIN).
A simple guide:
- Upload this file to your Google Colab or use [this link](https://colab.research.google.com/github/AlphaGit/DAIN/blob/master/Colab_DAIN.ipynb).
- Create a folder inside of Google Drive named "DAIN"
- Change the configuration
- Run the steps below and repeat as needed
Stuff that should be improved:
- Alpha channel will be removed automatically and won't be added back. Anything related to alpha will be converted to black.
- Adding configuration to select speed
- Auto-resume
- Copy `start_frame` - `end_frame` audio from original input to final output
# Setup
Connect your Google Drive and get a Google GPU.
```
#@title Connect to Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
print('Google Drive connected.')
#@title Check your current GPU
# If you are lucky, you get 16GB VRAM. If you are not lucky, you get less. VRAM is important. The more VRAM, the higher the maximum resolution will go.
# 16GB: Can handle 720p. 1080p will produce an out-of-memory error.
# 8GB: Can handle 480p. 720p will produce an out-of-memory error.
!nvidia-smi --query-gpu=gpu_name,driver_version,memory.total --format=csv
```
# Install dependencies.
This next step may take somewhere between 15-20 minutes. Run this only once at startup.
Look for the "Finished installing dependencies" message.
```
#@title Setup everything. This takes a while. Just wait ~20 minutes in total.
# Install old pytorch to avoid faulty output
%cd /content/
!wget -c https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!bash ./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
!conda install pytorch==1.1 cudatoolkit torchvision -c pytorch -y
!conda install ipykernel -y
!pip install scipy==1.1.0
!pip install imageio
!CUDA_VISIBLE_DEVICES=0
!sudo apt-get install imagemagick imagemagick-doc
print("Finished installing dependencies.")
# Clone DAIN sources
%cd /content
!git clone -b master --depth 1 https://github.com/AlphaGit/DAIN /content/DAIN
%cd /content/DAIN
!git log -1
# Building DAIN
%cd /content/DAIN/my_package/
!./build.sh
print("Building #1 done.")
# Building DAIN PyTorch correlation package.
%cd /content/DAIN/PWCNet/correlation_package_pytorch1_0
!./build.sh
print("Building #2 done.")
# Downloading pre-trained model
%cd /content/DAIN
!mkdir model_weights
!wget -O model_weights/best.pth http://vllab1.ucmerced.edu/~wenbobao/DAIN/best.pth
```
# Configuration
```
################# Required Configurations ############################
#@markdown # Required Configuration
#@markdown Use the values in here to configure what you'd like DAIN to do.
#@markdown ## Input file
#@markdown Path (relative to the root of your Google Drive) to the input file. For instance, if you save your `example.mkv` file in your Google Drive, inside a `videos` folder, the path would be: `videos/example.mkv`. Currently videos and gifs are supported.
INPUT_FILEPATH = "DAIN/input.mp4" #@param{type:"string"}
#@markdown ## Output file
#@markdown Output file path: path (relative to the root of your Google Drive) for the output file. It will also determine the filetype in the destination. `.mp4` is recommended for video input, `.gif` for gif inputs.
OUTPUT_FILE_PATH = "DAIN/output.mp4" #@param{type:"string"}
################# Optional configurations ############################
#@markdown # Optional Configuration
#@markdown Parameters below can be left with their defaults, but feel free to adapt them to your needs.
#@markdown ## Target FPS
#@markdown How many frames per second the result should have. This will determine how many intermediate images are interpolated.
TARGET_FPS = 60 #@param{type:"number"}
#@markdown ## Frame input directory
#@markdown A path, relative to your GDrive root, where you already have the list of frames in the format 00001.png, 00002.png, etc.
FRAME_INPUT_DIR = '/content/DAIN/input_frames' #@param{type:"string"}
#@markdown ## Frame output directory
#@markdown A path, relative to your GDrive root, where you want the generated frames.
FRAME_OUTPUT_DIR = '/content/DAIN/output_frames' #@param{type:"string"}
#@markdown ## Frame mixer
#@markdown Normally we interpolate between two adjacent frames. You can try to experiment with other mixers. Claymation mixer can give smoother outputs when animation only moves every second frame at times.
FRAME_MIXER = "normal" #@param ["normal", "claymation"] {type:"string"}
#@markdown ## Start Frame
#@markdown First frame to consider from the video when processing.
START_FRAME = 1 #@param{type:"number"}
#@markdown ## End Frame
#@markdown Last frame to consider from the video when processing. To use the whole video use `-1`.
END_FRAME = -1 #@param{type:"number"}
#@markdown ## Seamless playback
#@markdown Creates a seamless loop by using the first frame as the last one as well. Set this to True if a loop is intended.
SEAMLESS = False #@param{type:"boolean"}
#@markdown ## Detect scene changes
#@markdown Detects sharp changes in the video, avoiding interpolating over scene changes. Slide this all the way to 1 to disable scene detection entirely.
SCENE_CHANGE_THRESHOLD = 0.2 #@param {type:"slider", min:0, max:1, step:0.01}
#@markdown ## Resize hotfix
#@markdown DAIN frames are a bit "shifted / smaller" compared to the original input frames. This can partly be mitigated by resizing DAIN frames to the resolution +2px and cropping the result to the original resolution with the starting point (1,1). Without this fix, DAIN tends to make "vibrating" output, which is pretty noticeable with static elements like text.
#@markdown
#@markdown This hotfix tries to make such effects less visible for a smoother video playback. I do not know what DAINAPP uses as a fix for this problem, but the original does show such behaviour with the default test images. More advanced users can change the interpolation method. The methods cv2.INTER_CUBIC and cv2.INTER_LANCZOS4 are recommended. The current default value is cv2.INTER_LANCZOS4.
RESIZE_HOTFIX = True #@param{type:"boolean"}
#@markdown ## Auto-remove PNG directory
#@markdown Auto-delete output PNG dir after ffmpeg video creation. Set this to `False` if you want to keep the PNG files.
AUTO_REMOVE = True #@param{type:"boolean"}
```
# Interpolate frames, create video
The speed of this depends on the amount of calculation that needs to be done. You can run the whole category with one click if you collapse it.
```
#@title Detecting FPS of input file.
%shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/
import os
filename = os.path.basename(INPUT_FILEPATH)
import cv2
cap = cv2.VideoCapture(f'/content/DAIN/{filename}')
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"Input file has {fps} fps")
if(fps/TARGET_FPS>0.5):
print("Choose a higher target FPS: (old FPS)/(new FPS) must be below 0.5, otherwise there is not enough room for new frames and interpolation will fail.")
#@title ffmpeg extract - Generating individual frame PNGs from the source file.
%shell rm -rf '{FRAME_INPUT_DIR}'
%shell mkdir -p '{FRAME_INPUT_DIR}'
if (END_FRAME==-1):
%shell ffmpeg -i '/content/DAIN/{filename}' -vf 'select=gte(n\,{START_FRAME}-1),setpts=PTS-STARTPTS' '{FRAME_INPUT_DIR}/%05d.png'
else:
%shell ffmpeg -i '/content/DAIN/{filename}' -vf 'select=between(n\,{START_FRAME}-1\,{END_FRAME}-1),setpts=PTS-STARTPTS' '{FRAME_INPUT_DIR}/%05d.png'
from IPython.display import clear_output
clear_output()
png_generated_count_command_result = %shell ls '{FRAME_INPUT_DIR}' | wc -l
frame_count = int(png_generated_count_command_result.output.strip())
import shutil
if SEAMLESS:
frame_count += 1
first_frame = f"{FRAME_INPUT_DIR}/00001.png"
new_last_frame = f"{FRAME_INPUT_DIR}/{str(frame_count).zfill(5)}.png"
shutil.copyfile(first_frame, new_last_frame)
print(f"{frame_count} frame PNGs generated.")
#Checking if PNGs do have alpha
import subprocess as sp
%cd {FRAME_INPUT_DIR}
channels = sp.getoutput('identify -format %[channels] 00001.png')
print (f"{channels} detected")
# Removing alpha if detected
if "a" in channels:
print("Alpha channel detected and will be removed.")
print(sp.getoutput('find . -name "*.png" -exec convert "{}" -alpha off PNG24:"{}" \;'))
!rm -f /content/DAIN/scene_frames.log
if SCENE_CHANGE_THRESHOLD != 1:
scene_frames = ! \
ffmpeg -i '{FRAME_INPUT_DIR}/%05d.png' \
-filter:v "showinfo,select='gt(scene,{SCENE_CHANGE_THRESHOLD})',showinfo" \
-f null - 2>&1 | grep Parsed_showinfo_2.*n: -B 1 | grep Parsed_showinfo_0 | sed -E 's/.*n: *([0-9]+).*/\1/' \
| tee /content/DAIN/scene_frames.log
%shell pip install ipyplot
import ipyplot
def frame_image(frame):
return f'{FRAME_INPUT_DIR}/{frame:0>5d}.png'
# Output the frame before and the frame after the first few scene changes, for manual inspection
frames = [int(i) for i in scene_frames[:100]]
for f in frames:
images = frame_image(f), frame_image(f+1)
ipyplot.plot_images(images, img_width=150, force_b64=True)
# Interpolation
%shell mkdir -p '{FRAME_OUTPUT_DIR}'
%cd /content/DAIN
!python -W ignore colab_interpolate.py --netName DAIN_slowmotion --time_step {fps/TARGET_FPS} --start_frame 1 --end_frame {frame_count} --frame_input_dir '{FRAME_INPUT_DIR}' --frame_output_dir '{FRAME_OUTPUT_DIR}' --mixer={FRAME_MIXER} --resize_hotfix {RESIZE_HOTFIX}
#@title Create output video
%cd {FRAME_OUTPUT_DIR}
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '*.png' '/content/gdrive/My Drive/{OUTPUT_FILE_PATH}'
if(AUTO_REMOVE):
!rm -rf {FRAME_OUTPUT_DIR}/*
#@title [Experimental] Create video with sound
# Only run this, if the original had sound.
%cd {FRAME_OUTPUT_DIR}
%shell ffmpeg -i '/content/DAIN/{filename}' -acodec copy output-audio.aac
%shell ffmpeg -y -r {TARGET_FPS} -f image2 -pattern_type glob -i '*.png' -i output-audio.aac -shortest '/content/gdrive/My Drive/{OUTPUT_FILE_PATH}'
if (AUTO_REMOVE):
!rm -rf {FRAME_OUTPUT_DIR}/*
!rm -rf output-audio.aac
```
# Power and Influence: Central Positions in Networks
```
%%capture
# Housekeeping
# As explained before, it is best practice to load the modules at the start
import networkx as nx
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# This line allows visualizations within the notebook
%matplotlib inline
import matplotlib.gridspec as gridspec
import matplotlib.pylab as pl
# Make sure you download econ46_library.py from our course material and save it in the same folder as the notebooks
# this file has some functions specifically coded for the class
from supporting_material import econ46_library as el
# These modules are only to have some interactive pieces of code in the notebooks
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display
```
### Dutch High School Network
Now, let's explore the social network of a high school class from the Netherlands. It was collected by Andrea Knecht for a project on friendship selection and friendship influence.
You have already seen how to create a network and look at some of its features. Here we will look at other ways to get the same information and will also expand our analysis.
```
# Load data with network information, list of edges in the network:
A = pd.read_csv('../Data/dutch_hs/dutch_class_t0.csv')
# Create a network from the list of edges
G0=nx.from_pandas_edgelist(A)
# Visualize the network
el.plot_simple_graph(G0)
# List the nodes:
G0.nodes()
# Counting the number of nodes is as simple as computing the length of the node list:
len(G0.nodes())
# This time let's save the list of edges as an object and print it instead of just running the command
edge_list = G0.edges()
print(edge_list)
# Similarly we can compute the number of edges in the graph by calling the edges attribute of the G0 object
len(edge_list)
```
With 26 nodes the network could have at most 26*(26-1)/2=325 edges.
Since there are only 63, we can talk about how dense the network is relative to that fully connected (complete) network.
```
density = 63/325
# Note how you can embed the value of a variable into a line of text.
print('Density: %s' % density)
```
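The manual calculation above matches networkx's built-in helper; a quick check on toy graphs (illustrative, independent of the class dataset):

```python
import networkx as nx

# A complete graph has density 1.0 by definition.
K4 = nx.complete_graph(4)
print(nx.density(K4))  # 1.0

# A path on 4 nodes has 3 of the 6 possible edges.
P4 = nx.path_graph(4)
print(nx.density(P4))  # 0.5
```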
### Centrality Measures
#### Degree Distribution
The **degree** of a node is the number of connections the node has.
With networkx it is very simple to take a look at the degree distribution.
Each node has a degree attribute, stored in a dictionary-like view, which means you can also query the degree of specific nodes by typing `G0.degree[node id]`:
```
# Degree attribute of a networkx network.
G0.degree
G0.degree([0,1])
G0.degree[8]
# And we have a function in the class library to plot the distribution:
el.plot_centrality_dist(G0,'Degree Centrality')
```
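The same dictionary-style access works on any graph; a small sketch on a toy graph (independent of the class dataset):

```python
import networkx as nx

T = nx.path_graph(3)  # 0 - 1 - 2
print(dict(T.degree))  # {0: 1, 1: 2, 2: 1}
print(T.degree[1])     # the middle node has degree 2
```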
#### Clustering
The **clustering coefficient** represents the share of a node's connections that are connected to each other and is also readily available with networkx.
```
nx.clustering(G0)
el.plot_centrality_dist(G0,'Clustering Centrality')
```
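As a sanity check of the definition: in a triangle every node's neighbors are fully connected, so each clustering coefficient is 1 (toy sketch, independent of the class dataset):

```python
import networkx as nx

triangle = nx.complete_graph(3)
print(nx.clustering(triangle))  # {0: 1.0, 1: 1.0, 2: 1.0}

# Adding a pendant node gives node 0 three neighbors with only one
# edge among them: 1 of 3 possible pairs -> 1/3.
G = nx.complete_graph(3)
G.add_edge(0, 3)
print(nx.clustering(G, 0))  # 0.333...
```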
#### Betweenness Centrality
```
nx.betweenness_centrality(G0)
el.plot_centrality_dist(G0,'Betweenness Centrality')
```
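Betweenness counts the share of shortest paths that pass through a node; in a star graph the hub lies on every shortest path between leaves (toy sketch, independent of the class dataset):

```python
import networkx as nx

S = nx.star_graph(4)  # node 0 is the hub, nodes 1-4 are leaves
bc = nx.betweenness_centrality(S)
print(bc[0])  # 1.0 -- every leaf-to-leaf shortest path goes through the hub
print(bc[1])  # 0.0 -- a leaf sits on no shortest paths between other nodes
```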
Below you can interactively plot the network for different periods and observe how the different centrality measures reflect a node's position in the network.
```
display(el.dutch_nets,el.dutch_types,el.dutch_lays,el.dutch_atts,el.dutch_y)
display(el.comp_accordion,el.comp_lays,el.comp_y)
```
```
import os
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torch.autograd import Variable
from tqdm import tqdm
from sklearn.preprocessing import OneHotEncoder
# GPU Device
gpu_id = '2'
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
use_cuda = torch.cuda.is_available()
print("GPU device %s:" %(gpu_id), use_cuda)
epochs = 200
# Normalize data with mean=0.5, std=0.5
mnist_transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(0.5,)),
])
download_root = './MNIST_DATASET'
train_dataset = MNIST(download_root, transform=mnist_transform, train=True, download=True)
valid_dataset = MNIST(download_root, transform=mnist_transform, train=False, download=True)
test_dataset = MNIST(download_root, transform=mnist_transform, train=False, download=True)
batch_size = 64
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True,
drop_last=True)
valid_loader = DataLoader(dataset=valid_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=True)
onehot = OneHotEncoder(categories=[list(range(10))])
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.G_encoder = nn.Sequential(
nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2, bias=False),
nn.InstanceNorm2d(32),
nn.ReLU(inplace=False),
nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2, bias=False),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=False),
nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2, bias=False),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=False)
)
self.G_decoder = nn.Sequential(
nn.ConvTranspose2d(128, 64, kernel_size=5, stride=2, padding=2, bias=False),
nn.InstanceNorm2d(64),
nn.LeakyReLU(0.2, inplace=False),
nn.ConvTranspose2d(64, 32, kernel_size=5, stride=2, padding=2, bias=False),
nn.InstanceNorm2d(32),
nn.LeakyReLU(0.2, inplace=False),
nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=0, bias=False),
nn.InstanceNorm2d(1),
nn.Tanh()
)
self.label = nn.Sequential(
nn.Linear(10, 2048),
nn.LeakyReLU(0.2),
)
def forward(self, x, y):
encode = self.G_encoder(x)
bottleneck = encode.view(encode.size(0), -1)
label_encode = self.label(y)
bottleneck += label_encode
bottleneck = bottleneck.view(bottleneck.size(0), 128, 4, 4)
out = self.G_decoder(bottleneck)
return out
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.D = nn.Sequential(
nn.Conv2d(1, 4, kernel_size=3, stride=2, padding=1, bias=False),
nn.BatchNorm2d(4),
nn.LeakyReLU(0.2, inplace=False),
nn.Conv2d(4, 8, kernel_size=3, stride=2, padding=1, bias=False),
nn.BatchNorm2d(8),
nn.LeakyReLU(0.2, inplace=False),
nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1, bias=False),
nn.BatchNorm2d(16),
nn.LeakyReLU(0.2, inplace=False),
nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1, bias=False),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.2),
nn.AdaptiveAvgPool2d(1)
)
self.label = nn.Sequential(
nn.Linear(10, 32),
nn.LeakyReLU(0.2)
)
self.sigmoid = nn.Sigmoid()
self.top = nn.Linear(32, 1)
def forward(self, x, y):
out = self.D(x)
out = out.view(out.size(0), -1)
label = self.label(y)
out += label
out = self.top(out)
out = self.sigmoid(out)
return out
D = Discriminator().cuda()
G = Generator().cuda()
criterion = nn.BCELoss()
lr = 0.0002
D_optim = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
G_optim = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
def train_D(D, x, real_labels, fake_images, fake_labels, y):
D.zero_grad()
outputs = D(x, y)
real_loss = criterion(outputs, real_labels)
real_score = outputs
outputs = D(fake_images, y)
fake_loss = criterion(outputs, fake_labels)
fake_score = outputs
d_loss = real_loss + fake_loss
d_loss.backward(retain_graph=True)
D_optim.step()
return d_loss, real_score, fake_score
def train_G(G, D_outputs, real_labels, y):
G.zero_grad()
g_loss = criterion(D_outputs, real_labels)
g_loss.backward()
G_optim.step()
return g_loss
for epoch in range(epochs):
bar = tqdm(total=len(train_loader), leave=False)
D_loss, G_loss = 0.0, 0.0
for batch_idx, (images, target) in enumerate(train_loader):
images = images.cuda()
target = torch.tensor(onehot.fit_transform(target.reshape([-1, 1])).toarray(), dtype=torch.float32)
target = target.cuda()
fake_images = G(images, target)
real_labels = torch.ones(batch_size).cuda()
fake_labels = torch.zeros(batch_size).cuda()
d_loss, real_score, fake_score = train_D(D, images, real_labels, fake_images, fake_labels, target)
D_loss += d_loss.item()
outputs = D(fake_images, target)
g_loss = train_G(G, outputs, real_labels, target)
G_loss += g_loss.item()
bar.set_description("Epochs {:d}/{:d}, D_loss {:f}, G_loss {:f}".format(epoch, epochs, D_loss/(batch_idx+1), G_loss/(batch_idx+1)), refresh=True)
bar.update()
bar.close()
outputs = D(images, target)
images.shape
```
# Project Proposal
In the heat of the moment, when the enemy missiles are bearing down, a human being will utilize their learned abilities to react, and come out on top. Action games are a perfect environment for this learned ability to react to shine, and have been shown to improve players' perception, attention, and reaction time.[^1] We believe that by recreating the scenarios in which the human brain reacts, we can uncover how it learns to react. To do this, we will be using a Deep Q-Learning Network with Reinforcement Learning techniques to learn to play the 1981 Arcade Classic: Galaga.
We will be utilizing the Deep Q-Learning architecture (Mnih et al., 2015), an extension of the classic Q-Learning mechanism (Watkins, 1989), which uses a policy-iteration technique over potential rewards to develop the correct action response for a scenario. Further, we will introduce two enhancements to this architecture: Prioritized Memory Replay (Schaul et al., 2016) and Double Q-Learning (van Hasselt et al., 2016). These enhancements yield a five-way comparison: Random, DQN with Uniform Memory Replay, DQN with Prioritized Memory Replay, Double DQN with Uniform Memory Replay, and Double DQN with Prioritized Memory Replay.
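The Double Q-Learning update described above can be sketched with plain numpy (an illustrative sketch only; the function and variable names are placeholders, not part of our codebase): the online network selects the next action, while the target network evaluates it.

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Compute Double DQN targets: the online net picks the next action,
    the target net supplies the value estimate for that action."""
    best_actions = np.argmax(q_online_next, axis=1)                    # selection (online net)
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]   # evaluation (target net)
    # terminal transitions (dones == 1) bootstrap nothing beyond the reward
    return rewards + gamma * (1.0 - dones) * evaluated

# two transitions over a 5-action space (the size of Galaga's action space)
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])
q_online_next = np.random.rand(2, 5)
q_target_next = np.random.rand(2, 5)
targets = double_dqn_targets(rewards, dones, q_online_next, q_target_next)
print(targets.shape)  # (2,)
```

Decoupling action selection from evaluation in this way is what reduces the overestimation bias of standard DQN.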
Utilizing the OpenAI Gym, and its Retro Gym environment additions, the network will learn to identify gameplay elements using convolutional layers, learn actions through reward, and establish a state-decision formula using Q-Learning. Our techniques may vary as development continues.
Our network will be built using the tools provided by the Tensorflow and Tensorflow Keras Python libraries. Our input data will be provided by the OpenAI Retro Gym, and consists of full per-pixel data of the screen after an action is taken. Some of these observations are provided by the default Galaga environment, but we have the ability to add more observations as we find necessary. Our output data will consist of a vector mapping to one of five (5) actions in the Action Space, a one-dimensional vector of Galaga's actions: Move Left, Move Right, Fire, and the combinations of moving and firing. The OpenAI Retro Gym also gives us a viewless, accelerated version of the game that can shorten training times.
To support our DQN Agent, we will utilize a first-phase Deep Convolutional network to process the by-pixel observation and produce inputs for our second-phase DQN Agent that will determine which action to take.
Galaga is a simple, straight-forward game. Its action space is small, and enemy actions are telegraphed. This makes Galaga simple to learn, but through its method of increasing enemy counts and firing rates, as well as introducing special enemies at points, Galaga is also a hard game to master.
Our input data will consist of a by-pixel image of the screen a set number of frames, τ, after the last chosen action is executed. This image will be of height 100px, width 100px, and grayscaled to exhibit only one value per pixel, resulting in an input size of 10,000. Research done in other DQN agents shows that the use of RGB values over grayscale values does not produce greater network performance, and takes longer to train due to the 3x larger input size.[^2]
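The grayscale-and-downsample step above can be sketched as follows (a minimal numpy-only sketch; the `preprocess_frame` name and the nearest-neighbour resize are our own illustrative choices, not a committed part of the pipeline):

```python
import numpy as np

def preprocess_frame(frame, out_size=(100, 100)):
    """Grayscale and downsample an (H, W, 3) uint8 RGB frame, as returned by
    the Retro Gym environment, into a flat vector of 10,000 floats in [0, 1]."""
    # luminance-weighted grayscale conversion (weights sum to 1.0)
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # nearest-neighbour downsampling to 100x100
    h, w = gray.shape
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    small = gray[rows][:, cols]
    # scale to [0, 1] and flatten to the 10,000-element input vector
    return (small / 255.0).ravel()

# a fake frame standing in for a Retro Gym observation
frame = np.random.randint(0, 256, size=(224, 240, 3), dtype=np.uint8)
x = preprocess_frame(frame)
print(x.shape)  # (10000,)
```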
Each fit of the model would cover one playthrough of the game, beginning at the first level and continuing until the unit is out of lives or has completed the final level, but also may be restricted to a set number of frames to reduce training time.
Evaluation of the model would cover 100 playthroughs with a frame limit, and the average score across these would be its comparative value.
Each model will be trained and evaluated under the same hyperparameters and limits, as to allow for a direct comparison of all models.
Further comparative evaluation could also be done utilizing:
- Global high score list (human players)
- Global high score list (other neural networks), in whatever quantity these are available
- Other Galaga performance statistics
References:
1. Nauert, R. (2018, August 8). Action Video Games May Improve Cognitive Skills. Retrieved February 13, 2020, from https://psychcentral.com/news/2017/12/13/action-video-games-may-improve-cognitive-skills/129902.html
2. Dhoot, Tushar, Daniyal Kahn, and Ben Konyi. “Playing DOOM Using Deep Reinforcement Learning.” Accessed February 26, 2020. http://cs229.stanford.edu/proj2017/final-reports/5238810.pdf.
H. van Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-Learning, 2016.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized Experience Replay, 2016.
C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.
PyGSLIB
========
Trans
---------------
The GSLIb equivalent parameter file is
```
Parameters for TRANS
********************
START OF PARAMETERS:
1 \1=continuous, 0=categorical
data/true.dat \file with reference distribution
1 0 \ columns for variable and weight(0=none)
data/cluster.dat \file with original distributions
3 0 \ columns for variable and weight(0=none)
-1.0e21 1.0e21 \trimming limits
trans.out \file for transformed distributions
1 \number of realizations or "sets" to trans
50 50 1 \categorical: nx, ny, nz: size of 3-D model
2 2 0 \ wx, wy, wz: window size for tie-breaking
1000 \continuous: number to transform per "set"
0.0 75.0 \ minimum and maximum values
1 1.0 \ lower tail: option, parameter
1 75.0 \ upper tail: option, parameter
1 \honor local data? (1=yes, 0=no)
data/kt3d.out \ file with estimation variance
2 \ column number
0.5 \ control parameter ( 0.33 < w < 3.0 )
69069 \ random number seed (conditioning cat.)
```
```
#general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
```
Getting the data ready for work
---------
If the data is in GSLIB format you can use the function `gslib.read_gslib_file(filename)` to import the data into a Pandas DataFrame.
```
#get the data in gslib format into a pandas Dataframe
true= pygslib.gslib.read_gslib_file('../data/true.dat')
cluster = pygslib.gslib.read_gslib_file('../data/cluster.dat')
kt3d = pygslib.gslib.read_gslib_file('../data/kt3d.out')
print ('\t\ttrue \n', true.tail())
print ('\n\t\tcluster \n',cluster.tail())
print ('\n\t\tkt3d \n',kt3d.tail())
```
### Some Stats
Here we check that we are not incorporating undefined values (see the minimum and maximum) and that we have the same number of rows in the reference distribution (*true.dat*) as in the file with the estimation variance (*kt3d.out*)
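These two requirements can also be asserted directly; the snippet below is an illustrative sketch using toy stand-in frames (in the notebook, `true` and `kt3d` are the frames loaded above):

```python
import pandas as pd

# toy stand-ins for the frames read from true.dat and kt3d.out
true = pd.DataFrame({'Primary': [0.10, 0.45, 1.20]})
kt3d = pd.DataFrame({'EstimationVariance': [0.30, 0.25, 0.40]})

# one estimation-variance value is needed per reference row
assert len(true) == len(kt3d), "true.dat and kt3d.out must have the same number of rows"

# all values must fall inside the trimming limits (no undefined values)
assert true['Primary'].between(-1.0e21, 1.0e21).all()
print("checks passed")
```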
```
print ('\t\ttrue \n', true.describe())
print ('\n\t\tcluster \n',cluster.describe())
print ('\n\t\tkt3d \n',kt3d.describe())
```
## Testing trans
```
print (pygslib.gslib.__trans.trans.__doc__)
true['Weight'] =1
cluster['NO-Weight']=1
parameters_trans = {
'ivtype' : 1, # 1=continuous, 0=categorical (in this case vo may have nxyza rows)
'vr' : true['Primary'], # reference distribution (variable )
'wt' : true['Weight'], # reference distribution (weight)
'vo' : cluster['Primary'], # calibration scatterplot (secondary data)
'wo' : cluster['NO-Weight'], # calibration scatterplot (weight data)
'nx' : 50 , #categorical: nx, ny, nz: size of 3-D model
'ny' : 50,
'nz' : 1,
'wx' : 2, # wx, wy, wz: window size for tie-breaking
'wy' : 2,
'wz' : 0,
'nxyza' : 2500, # continuous: number to transform per "set"
'zmin' : 0 , # minimum and maximum values
'zmax' : 75,
'ltpar' : 1, # lower/upper tail: option, parameter
'utpar' : 75,
'ltail' : 1,
'utail' : 1,
'ldata' : 1, # honor local data?
'kv' : kt3d['EstimationVariance'].values,
'ef' : 0.5, # control parameter ( 0.33 < w < 3.0 )
'rseed' : 69069} # random number seed (conditioning cat.)
gmedian,rvr,rcdf,ncut,zval,error = pygslib.gslib.__trans.trans(**parameters_trans)
print ('error ? ', error != 0, error)
```
## Comparing results with gslib
```
print ('gmedian', gmedian)
print ('zval')
print (pd.DataFrame({'transformed variable':zval}).head(6))
print (pd.DataFrame({'transformed variable':zval}).tail(6))
```
**Results in GSLIB**
```
0.04457
0.03996
0.06228
0.07809
0.07216
0.08818
***
18.868
23.231
6.643
4.917
1.598
2.869
```
```
# not in gslib output, not used here because this is continuous
i= np.arange(len(rcdf))+1
print (pd.DataFrame({'i':i, 'rcdf': rcdf, 'rvr': rvr}). head())
print (pd.DataFrame({'i':i, 'rcdf': rcdf, 'rvr': rvr}). tail())
```
**expected results**
By adding this into GSLIB code
```
print *, 'i', 'rcdf(i)', 'rvr(i)'
do i=1,ncut
print *, i, rcdf(i), rvr(i)
end do
```
we get these results
```
1 1.99999995E-04 9.99999978E-03
2 5.99999970E-04 9.99999978E-03
3 9.99999931E-04 9.99999978E-03
4 1.39999995E-03 1.99999996E-02
5 1.79999997E-03 1.99999996E-02
******
2496 0.998214602 43.5000000
2497 0.998614550 46.5299988
2498 0.999014616 54.3899994
2499 0.999414563 58.3199997
2500 0.999814570 102.699997
```
```
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : true['Primary'], # array('d') with bounds (nd)
'wt' : true['Weight']} # array('d') with bounds (nd), wight variable (obtained with declust?)
binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax,xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot (cl, binval, label = 'True')
parameters_probplt['va'] = cluster['Primary']
parameters_probplt['wt'] = cluster['NO-Weight']
binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax,xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
plt.plot (cl, binval, label = 'Cluster not transformed')
parameters_probplt['va'] = zval
parameters_probplt['wt'] = cluster['NO-Weight']
binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax,xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
plt.plot (cl, binval, label = 'Cluster transformed')
plt.grid(True)
ax.set_xscale('log')
#ax.set_yscale('log')
plt.legend(loc=4)
fig.show()
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Pre_data = pd.read_csv("C:\\Users\\2019A00303\\Desktop\\Code\\Airbnb Project\\Data\\PreProcessingUK.csv")
Pre_data
Pre_data['Price'].plot(kind='hist', bins=100)
Pre_data['group'] = pd.cut(x=Pre_data['Price'],
bins=[0, 50, 100, 150, 200, 1000],
labels=['group_1','group_2','group_3','group_4','group_5'])
Pre_data.head()
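# Quick illustration (hypothetical prices, not from the dataset) of how
# pd.cut assigns each price to the (left, right] band that contains it:
demo_groups = pd.cut(pd.Series([10, 75, 120, 180, 600]),
                     bins=[0, 50, 100, 150, 200, 1000],
                     labels=['group_1', 'group_2', 'group_3', 'group_4', 'group_5'])
print(list(demo_groups))  # ['group_1', 'group_2', 'group_3', 'group_4', 'group_5']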
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(Pre_data, Pre_data["group"]):
    train = Pre_data.loc[train_index]
    test = Pre_data.loc[test_index]
train['group'].value_counts() / len(train)
test['group'].value_counts() / len(test)
train.drop('group', axis=1, inplace=True)
train.head()
test.drop(['Unnamed: 0','group', 'Host Since', 'Country', 'Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
test.head()
train_y = train[['Price']]
train_y.head()
train.drop(['Unnamed: 0', 'Price', 'Host Since', 'Country','Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
train_X = train
train_X.head()
test_y= test[['Price']]
test_y.head()
test.drop('Price', axis=1, inplace=True)
test_X = test
test_X.head()
# from sklearn.linear_model import LinearRegression
# l_reg = LinearRegression()
# l_reg.fit(train_X, train_y)
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
import numpy as np
# predictions = l_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = l_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.tree import DecisionTreeRegressor
# d_reg = DecisionTreeRegressor()
# d_reg.fit(train_X, train_y)
# predictions = d_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = d_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.svm import SVR
# svr = SVR()
# svr.fit(train_X, train_y)
# predictions = svr.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = svr.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neighbors import KNeighborsRegressor
# knn = KNeighborsRegressor()
# knn.fit(train_X, train_y)
# predictions = knn.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = knn.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neural_network import MLPRegressor
# ann = MLPRegressor()
# ann.fit(train_X, train_y)
# predictions = ann.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = ann.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
from sklearn.ensemble import RandomForestRegressor
r_reg = RandomForestRegressor()
r_reg.fit(train_X, train_y)
features = train_X.columns
importances = r_reg.feature_importances_
indices = np.argsort(importances)
plt.title('UK Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='g', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
# predictions = r_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.model_selection import GridSearchCV
# param = {'n_estimators' : [800,900,1000], 'max_features' : ['sqrt','auto','log2'], 'max_depth' : [8,9,10],
# 'min_samples_split': [2,3,4]}
# r_reg = RandomForestRegressor(random_state=42)
# search = GridSearchCV(r_reg, param, cv=5,
# scoring='neg_mean_absolute_error')
# search.fit(train_X, train_y['Price'].ravel())
# from sklearn.ensemble import RandomForestRegressor
# r_reg = RandomForestRegressor(bootstrap=True,
# min_samples_split=2,
# criterion='mse',
# max_depth=None,
# max_features='auto',
# n_estimators=1000,
# random_state=42,
# )
# r_reg.fit(train_X, train_y['Price'].ravel())
# predictions = r_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
```
# k-Nearest Neighbors implementation
- Doesn't use any library to perform KNN.
- Uses scikit-learn library for calculating various metrics and confusion matrix.
It is possible to provide file name, k value and training-test data split ratio as arguments such as the following:
python knn.py data/iris.csv 5 0.67
It is tested with the following example data sets:
- [arrhythmia](./data/arrhythmia.csv): missed values replaced by -1 (https://archive.ics.uci.edu/ml/datasets/Arrhythmia)
- [banknote](./data/banknote.csv): nothing changed, converted to CSV (https://archive.ics.uci.edu/ml/datasets/banknote+authentication)
- [forestfires](./data/forestfires.csv): categorical values (mon, day) are converted to numeric values, all values larger than 0 are converted to 1 in burned area column (https://archive.ics.uci.edu/ml/datasets/Forest+Fires)
- [iris](./data/iris.csv): categorical result value are converted to numeric values (https://archive.ics.uci.edu/ml/datasets/Iris)
- [lung-cancer](./data/lung-cancer.csv): moved target values to the last column, missed values replaced by -1 (https://archive.ics.uci.edu/ml/datasets/Lung+Cancer)
- [phishing-websites](./data/phishing-websites.csv): nothing changed, converted to CSV without header (https://archive.ics.uci.edu/ml/datasets/Phishing+Websites)
The main source for the code is the following tutorial: [Develop k-Nearest Neighbors in Python From Scratch](http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/)
```
from operator import itemgetter
from utility import display, euclidean, load_dataset, split_dataset
```
## Locate the most similar neighbors
```
def get_neighbors(training, test, k):
    distances = {}
    for x in range(len(training)):
        dist = euclidean(test, training[x])
        distances[x] = dist
    # sort by distance, ascending, and keep the k closest indices
    # (popping from the end of the sorted list would return the farthest points)
    distances = sorted(distances.items(), key=itemgetter(1))
    neighbors = []
    for x in range(k):
        neighbors.append(distances[x][0])
    return neighbors
```
## Make a classification prediction with neighbors
```
def predict(neighbors, target):
    class_votes = {}
    for x in neighbors:
        response = target[x]
        if response in class_votes:
            class_votes[response] += 1
        else:
            class_votes[response] = 1
    sorted_votes = sorted(class_votes.items(),
                          key=itemgetter(1), reverse=True)
    return sorted_votes[0][0]
```
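To see the two steps together, here is a self-contained mini example mirroring the helpers above on four hand-made 2-D points (the inline `euclidean` is an assumed stand-in for the one imported from `utility`):

```python
import math
from operator import itemgetter

def euclidean(a, b):
    # plain Euclidean distance, standing in for utility.euclidean
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# four labelled 2-D points and one query point
training = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 5.0)]
target = ['blue', 'blue', 'red', 'red']
query = (0.2, 0.0)

# indices of all training points, sorted by distance to the query (ascending)
distances = sorted(((i, euclidean(query, training[i]))
                    for i in range(len(training))), key=itemgetter(1))
neighbors = [i for i, _ in distances[:3]]

# majority vote among the 3 nearest neighbours
votes = {}
for i in neighbors:
    votes[target[i]] = votes.get(target[i], 0) + 1
print(max(votes, key=votes.get))  # blue
```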
## Load data
```
dataset, target = load_dataset("data/forestfires.csv")
```
## Split data
```
train_x, train_y, test_x, test_y = split_dataset(dataset, target, 0.8)
print("Training set size: %d" % (len(train_x)))
print("Testing set size: %d" % (len(test_x)))
```
## Predict
```
predictions = []
actual = []
for x in range(len(test_x)):
    neighbors = get_neighbors(train_x, test_x[x], 5)
    result = predict(neighbors, train_y)
    predictions.append(result)
    actual.append(test_y[x])
```
## Calculate and display scores
```
display(actual, predictions)
```
# AerisWeather Python SDK
----
The AerisWeather Python SDK is a coding toolkit created to streamline integrating data from the [AerisWeather API](https://www.aerisweather.com/support/docs/api/) into Python applications.
In other words, the goal of the SDK is to make it easier to get weather data into your Python app. While working towards that goal, we also tried to make the SDK as "Pythonic" as possible, embracing some of our favorite core tenets of the [Zen of Python (PEP20)](https://www.python.org/dev/peps/pep-0020/#id3):
- Simple is better than complex.
- Readability counts.
- If the implementation is hard to explain, it's a bad idea.
Thanks for checking out the AerisWeather Python SDK. We hope you find it useful, and enjoy using it as much as we enjoyed creating it!
*The AerisWeather Python Team*
----
### Downloads
The full SDK containing the library, docs and demos is available for download from the project's GitHub page, or from the [Toolkits section](https://www.aerisweather.com/support/docs/toolkits/) of the AerisWeather site.
If you want just the library, that's available all by itself on [PyPI](https://pypi.org/). Most likely though, you'll just want to install it via pip (see [Install the aerisweather Package](#Install-the-aerisweather-Package) in the Getting Started section below).
### Requirements
- Python v3.6 or higher.
- An active AerisWeather API subscription.
*Don't have an active AerisWeather API client account?
Accessing the API data requires an active AerisWeather API subscription, and registration of your application or namespace. You can sign up for a free developer account at the [AerisWeather Sign Up page](https://www.aerisweather.com/signup/) to get your client ID and secret.*
### License
This SDK is provided free of charge for AerisWeather's customers, and is available for use under the [MIT open source](https://opensource.org/licenses/MIT) license.
### SDK Contents
- Core Library/Package
The AerisWeather Python SDK includes a core Python library that provides simplified fetching and parsing of data from the API, written in Python v3.6.
- Unit Tests
The SDK includes a full set of unit tests, which are a great source of sample code for accessing the API endpoints. Check them out on [GitHub](https://github.com/aerisweather/python_sdk/tree/master/tests).
- Docs
A full set of HTML code documentation is provided, outlining class and method structure. More documentation can be found at [aerisweather.com](https://www.aerisweather.com/docs/python/Aeris/index.html), and on the project's [GitHub page](https://github.com/aerisweather/python_sdk/tree/master/docs).
- Demo Project File
AerisWeatherPythonDemo is a Python file containing examples demonstrating a few ways to use the aerisweather package to get data from the AerisWeather API. You can find it on [GitHub](https://github.com/aerisweather/python_sdk/tree/master/AerisWeatherPythonDemo).
- Jupyter Notebook
Also in the AerisWeatherPythonDemo folder is a Jupyter Notebook. This is another source of example code. The notebook allows you to experiment with various methods of accessing the API without having to run the demo project. Here it is on [GitHub](https://github.com/aerisweather/python_sdk/tree/master/AerisWeatherPythonDemo).
- Readme
- License file
- setup.py
## Getting Started
----
### Download the SDK
If you haven't already downloaded the SDK, see the [Downloads](#Downloads) section above and get the SDK and all its contents.
### Python Demo File or Jupyter Notebook
Under the main SDK folder, you'll see the AerisWeatherPythonDemo folder. This folder contains:
- aeris_demo.py
- aeris_demo_notebook.ipynb
The first, aeris_demo.py, is a simple Python file. The other is a Jupyter notebook. Both of these resources contain the same sample code, so you can choose to use either or both to learn more about how the aerisweather library works.
Be aware that whichever path you choose to follow, you will need to [install the aerisweather package](#Install-the-aerisweather-Package) to the Python environment that supports that resource. We used PyCharm during the creation of the SDK, and used it to run the aeris_demo.py project. For the notebook, we configured the kernel to point to PyCharm's Python venv.
### Install the aerisweather Package
Next, let's get the aerisweather package installed to your Python environment.
To use pip for the installation:
pip install -U --index-url https://pypi.org/simple/ aerisweather==#.#.#
#### Using PyCharm:
If you're using PyCharm as your IDE, go to Settings | Project Interpreter and click the green "+" in the upper right. Search for aerisweather and choose to install the package to your Python environment.
*Note: If you're also initializing the Jupyter Kernel from PyCharm, this step will cover that too.*
## The Basics
----
In this section we'll talk about how to get set up to do basic data requests from the API, using the AerisWeather Python library. The topics covered on this page are:
- [The AerisWeather Class](#The-AerisWeather-Class)
- [Set The AerisWeather API Credentials](#Set-The-AerisWeather-API-Credentials)
- [Endpoints](#Endpoints)
- [Locations](#Locations)
- [Optimizing API Requests](#Optimizing-API-Requests)
- [API Responses](#API-Responses)
- [Examples](#Examples)
### The AerisWeather Class
The first import line we see below is for our main class, aerisweather. Once we have successfully created an aerisweather object, we'll do most of the heavy lifting with it.
from aerisweather.aerisweather import AerisWeather
from aerisweather.requests.ParameterType import ParameterType
from aerisweather.requests.RequestLocation import RequestLocation
from aerisweather.requests.RequestAction import RequestAction
from aerisweather.requests.RequestFilter import RequestFilter
from keys import client_id, client_secret, app_id
The last import is also important to note, and if you've completed setting up your AerisWeather API credentials, it should look familiar. We need this so we can pass our API credentials along with the requests for data.
In the code snippet below we're creating the aerisweather object that we'll use to make requests to the API. Once the aerisweather object is created, we can continue to use it throughout the rest of the code.
aeris = AerisWeather(client_id=client_id, client_secret=client_secret, app_id=app_id)
### Set The AerisWeather API Credentials
Notice that in both the aeris_demo.py file and the aeris_demo_notebook Jupyter notebook file, the import section has a reference to a "keys" module.
from keys import client_id, client_secret, app_id
This import statement also references "client_id" and "client_secret". These are the AerisWeather API credentials mentioned previously in the [Requirements](#Requirements) section.
The final reference in the import statement is the application id, which should be the namespace or domain of the application from which you will be accessing the API. For details on application id and namespace restrictions, see [Namespace Access Restrictions](https://www.aerisweather.com/support/docs/api/getting-started/authentication/).
*If you don't have an active AerisWeather API account, check out the [Requirements](#Requirements) section and get signed up before continuing.*
In the keys.py file(s), replace the placeholders with your AerisWeather API client id and secret, and the appropriate namespace or domain for your Python application.
app_id = "com.aerisweather.pythonsdkdemo"
client_id = ""
client_secret = ""
#### Almost There...
Ok, now that we have our aerisweather object and have our credentials configured, we only need two more things (at a minimum) for every request to the database:
- an endpoint (what)
- a location (where)
### Endpoints
The Aeris API supports data requests for many kinds of weather related data, each defined by an endpoint. Endpoints refer to the types of data to request, such as a place, observation, forecast or advisory, and will be the basis for any request made to the weather API. You can check out the full list of Aeris API endpoints on the [AerisWeather endpoints page](https://www.aerisweather.com/support/docs/api/reference/endpoints).
The Python SDK has implemented type hints to allow code completion in IDEs that support it. Code completion is available for all fully implemented endpoints. For new or custom endpoints that are not yet specifically implemented in the SDK, check out the section on [Custom Endpoint requests](#Custom-Endpoints).
For the examples that follow, we are using two of the most popular endpoints of the Aeris API, Observations and Forecasts.
### Locations
Other than the type of data (endpoint) itself, the next most important piece of info we need is the location, or place, to which that data pertains. You will find that in the API, location and place are sometimes used interchangeably.* For the purpose of the AerisWeather Python SDK, we will use the term "location".
*Even in the requests for the "Places" endpoint - but we'll get to that later...
To be able to send a valid location to the API, we need either:
- latitude **and** longitude
or
- city **and** state
or
- zip/postal code
In the AerisWeather Python library, there are a couple of ways we can pass that location info to the API within a request.
- The RequestLocation class
- The "p=" Parameter
#### The RequestLocation
The RequestLocation class is designed to be flexible enough to handle whatever location information you have, and give the most accurate location info to the API.
The Python lib will take the most accurate source of data and pass that to the API. For instance, let's say you create a RequestLocation object, and set the latitude, longitude, city and state like this:
loc = RequestLocation(city="minneapolis", state="mn", latitude="", longitude="")
Then we can use the RequestLocation object in an endpoint request, like this:
obs_list = aeris.observations(location=loc)
When the lib builds the request for the API, it will use the lat and lon, since that's the most accurate of the info given.
If you'd like to dig into the details of the RequestLocation class, you can check out the [code docs](https://www.aerisweather.com/docs/python/Aeris/classaerisweather_1_1requests_1_1_request_location_1_1_request_location.html).
#### The "p" Parameter
In some cases, you may want to use request Actions like "closest" to query the API for data that is closest to the requested place or location.
If we use the p parameter, it takes the place of RequestLocation we discussed earlier. For example, we can create a request like this:

    obs_list = aeris.observations(action=RequestAction.OBSERVATIONS.CLOSEST,
                                  params={ParameterType.OBSERVATIONS.P: "minneapolis,mn"})
### API Responses
Each request to the API, if successful, returns a list of endpoint appropriate response objects. Even if there is only one response result, it will be returned in a list.
For example, the Observations request below returns a single response result. The response is returned as a list with one [ObservationsResponse](https://www.aerisweather.com/docs/python/Aeris/classaerisweather_1_1responses_1_1_observations_response_1_1_observations_response.html) object in it.

    obs_list = aeris.observations(location=loc)
    for obs in obs_list:
        # get the AerisPlace response object
        place = obs.place
### Examples
----
In the following examples, we'll start with the basics of using the AerisWeather Python library. If you know what you're looking for, here's the list of basic examples:
- [Simple API Request, No Options](#Simple-API-Request,-No-Options)
- [API Request with Action, Filter and Parameters](#API-Request-with-Action,-Filter-and-Parameters)
- [Forecasts Example with Fields](#Forecasts-Example)
```
from aerisweather.aerisweather import AerisWeather
from aerisweather.requests.ParameterType import ParameterType
from aerisweather.requests.RequestLocation import RequestLocation
from aerisweather.requests.RequestAction import RequestAction
from aerisweather.requests.RequestFilter import RequestFilter
from keys import client_id, client_secret, app_id
```
#### Simple API Request, No Options
```
# instantiate our aerisweather object
aeris = AerisWeather(client_id=client_id, client_secret=client_secret, app_id=app_id)
# create a RequestLocation object to be used with any endpoint requests
loc = RequestLocation(city="minneapolis", state="mn")
# create a simple observations request with no options
obs_list = aeris.observations(location=loc)
for obs in obs_list:
    # get the AerisPlace response object
    place = obs.place
    # get the observations data object
    ob = obs.ob
    # get some observations data
    tempF = ob.tempF
    weather = ob.weather

    print()
    print("Observations Example 1:")
    print("The current weather for " + place.name + ", " + place.state + ":")
    print("Conditions are currently " + weather + " with a temp of " + str(tempF) + "°F")
```
#### Forecasts Example
In the following example, we'll see how to pull some data from the [Forecasts endpoint](https://www.aerisweather.com/support/docs/api/reference/endpoints/forecasts/).
Notice that in this example we are again using a [RequestLocation](#The-RequestLocation) object, this time with a postal code.
```
# Let's create a new RequestLocation, this time using a postal code
loc = RequestLocation(postal_code="55124")
# we'll limit the fields returned by the API too, to just the ones we need for our example
forecast_list = aeris.forecasts(location=loc,
                                params={ParameterType.FORECASTS.FIELDS:
                                        "periods.isDay, periods.maxTempF, periods.minTempF, periods.weather"})

for forecast in forecast_list:
    # check to see if this is a day or night forecast
    if forecast.periods[0].isDay:
        day = forecast.periods[0]
        night = forecast.periods[1]
    else:
        day = forecast.periods[1]
        night = forecast.periods[0]

    print()
    print("Forecast Example:")
    print("Today expect " + day.weather + " with a high temp of " + str(day.maxTempF) + "°")
    print("Tonight will be " + night.weather + " with a low temp of " + str(night.minTempF) + "°")
```
## Advanced Topics
Now that we've seen how to set up standard requests and work with the API response, let's take a look at a few of the more advanced bits of the Aeris Python library. In this section we'll talk about:
- [Optimizing API Requests](#Optimizing-API-Requests)
- [Optimizing API Requests Example](#Optimizing-API-Requests-Example)
- [Custom Endpoints](#Custom-Endpoints)
- [Custom Endpoints Example](#Custom-Endpoints-Example)
- [Batch Requests](#Batch-Requests)
- [Batch Request Example](#Batch-Request-Example)
### Optimizing API Requests
The Aeris API offers several request properties designed to improve response time and reduce the amount of data returned. Each Aeris API endpoint has a unique set of request properties for its specific data. To see the various request properties for a specific endpoint, check out the [corresponding endpoint docs](https://www.aerisweather.com/support/docs/api/reference/endpoints).
In the Aeris Python library, we have mapped each of these properties to their own class, and reduced the available options to only those that apply to each endpoint.
The endpoint request properties are broken down into the following categories:
- Request Action
- Request Filter
- Request Parameter
- Request Field
- Request Query
- Request Sort
When using the Aeris Python SDK, you can add these properties to your requests through the optional parameters passed to the request methods. For example:
```
obs_list = aeris.observations(action=RequestAction.OBSERVATIONS.CLOSEST,
                              filter_=[RequestFilter.OBSERVATIONS.ALL_STATIONS],
                              params={ParameterType.OBSERVATIONS.P: "minneapolis,mn",
                                      ParameterType.OBSERVATIONS.FIELDS: "place, ob.tempF,ob.weather"})
```
The above Observations request is using an Action property of "Closest", a Filter property of "All Stations" and two Parameter options "p" and "Fields". This will provide a very specific set of Observations data for the location.
You can check out the [code examples](#Examples) provided to see different ways of using these properties in your requests.
#### Optimizing API Requests Example
```
# Make the API request and get a list of ObservationResponse objects from the response
# (Note: we don't need a RequestLocation object, since we're using Closest with the "p" parameter.)
obs_list = aeris.observations(action=RequestAction.OBSERVATIONS.CLOSEST,
filter_=[RequestFilter.OBSERVATIONS.ALL_STATIONS],
params={ParameterType.OBSERVATIONS.P: "minneapolis,mn",
ParameterType.OBSERVATIONS.FIELDS: "place, ob.tempF,ob.weather"})
for obs in obs_list:
place = obs.place
ob = obs.ob
tempF = ob.tempF
weather = ob.weather
print()
print("Observations Example 2:")
print("The current weather for " + place.name + ", " + place.state + ":")
print("Conditions are currently " + weather + " with a temp of " + str(tempF) + "°F")
```
### Custom Endpoints
As the Aeris API continues to grow, new endpoints are added and new properties are added to existing ones. In some cases, the Aeris API may be updated ahead of the Python SDK, so the SDK may not yet support a new API endpoint, or an endpoint may have been updated with new options. For cases like these, the Aeris Python library provides a special endpoint type named CUSTOM, and the CustomResponse class. In this section we'll show some examples of using these to make custom requests to the API.
First we'll request data for a valid endpoint, but one that isn't yet supported by the SDK. We handle custom requests a little differently, in that we need to define the EndpointType and set up an Endpoint object. We then pass that Endpoint object to the aerisweather.request() method and let it generate the request based on the endpoint object's properties.
```
EndpointType.custom = "stormreports"
endpoint = Endpoint(EndpointType.CUSTOM, location=RequestLocation(postal_code="54660"))
```
As you see above, we can use the endpoint object to handle all of the optional properties as well. Once we have all of our request properties added to the Endpoint object, we can pass that to the request() method.
```
response_list = aeriswx.request(endpoint)
```
The request method always returns a list of response objects. The type of response objects returned depends on the endpoint specified in the request, so we always get back a response appropriate to the endpoint we're getting data from. In this case, we will get a list of CustomResponse objects, since we requested data from an unknown (custom) endpoint type.
In the example below, we're requesting data from the Aeris API [StormReports endpoint](https://www.aerisweather.com/support/docs/api/reference/endpoints/stormreports/). We know this to be a valid endpoint, it's just not fully implemented in the SDK at the time of this writing.
*NOTE: If there are no storm reports for the location, try changing the postal code to a location that has had some storm activity recently.*
#### Custom Endpoints Example
```
# we need a couple of additional imports
from aerisweather.responses.CustomResponse import CustomResponse
from aerisweather.requests.Endpoint import Endpoint, EndpointType
# let's get data from a valid endpoint, but one that's not in our Endpoint Enum. A use case for this might be
# where we want to test a beta or pre-release endpoint
aeriswx = AerisWeather(app_id=app_id,
client_id=client_id,
client_secret=client_secret)
# define the custom endpoint type
EndpointType.custom = "stormreports"
# create an endpoint object, specifying CUSTOM as its type, and let's go ahead and give the endpoint our location too
endpoint = Endpoint(EndpointType.CUSTOM, location=RequestLocation(postal_code="54660"))
# this will be a list of CustomResponse objects
response_list = aeriswx.request(endpoint)
response = response_list[0]
# the response should have storm report data, so let's try to pull the report type
print("The storm report type is: " + response.report.type)
```
Notice in the storm reports example above that, although we don't have a specific response class type (like ForecastsResponse), dot notation works for any valid data within our CustomResponse object.
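To make the dot-notation idea concrete, here is a toy illustration of attribute-style access over parsed JSON data. This is not the SDK's actual CustomResponse implementation, just a sketch of the general pattern:

```python
# A toy sketch of dot-notation access over a JSON-like dict -- not the
# SDK's actual implementation, just the general idea behind it.
class DotDict:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        value = self._data[name]
        # wrap nested dicts so chained access like resp.report.type works
        if isinstance(value, dict):
            return DotDict(value)
        return value

resp = DotDict({"report": {"type": "hail", "name": "HAIL"}})
print(resp.report.type)  # hail
```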
Let's do another example, this time using an endpoint that is currently implemented in the SDK. This is useful in cases where the endpoint has been added, but maybe a new field has been added since the last SDK update. Or, maybe we just forgot to add support for a field or parameter... it happens... :-)
In this example we'll request data from the Forecasts endpoint, using the Custom endpoint type and CustomResponse object.
```
# You can also use the custom endpoint type to request data from a known valid endpoint, for cases
# where new API data fields have not yet been added to an endpoint's response class.
EndpointType.custom = "forecasts"
forecasts_list = aeriswx.request(endpoint=Endpoint(endpoint_type=EndpointType.CUSTOM,
location=RequestLocation(postal_code="55344")))
forecast = forecasts_list[0]
period = forecast.periods[0] # type: ForecastPeriod
print ("Expect the weather to be " + period.weather)
```
### Batch Requests
One of the most common Aeris API requirements is to obtain multiple pieces of information for a location. We could do this with a separate API request for each data type, but a batch request is a little nicer because we can query multiple endpoints with a single request. For more details on the API side of a batch request, check out the [Batch Request page](https://www.aerisweather.com/support/docs/api/getting-started/batch/).
#### The Request
Batch requests in the Aeris Python SDK are somewhat similar to the custom endpoint request, which we learned about in the last section. To make a batch request, we define an Endpoint object for each request we want to make, then send a list containing all of the endpoint objects to the aerisweather.batch_request() method.
#### The Response
Just like all other requests, the batch_request method returns a list of responses. In this case however, the list will contain the responses from each endpoint, in the order that they were requested.
*Note: Because some endpoints can return more than one response per request, we need to check each response type in the list to determine how to handle it. Check the example below to see how we handled it.*
#### Global Properties
Along with the list of endpoints, we can also pass global properties to the batch_request() method. These properties will then be applied to each of the included endpoints, unless they override that particular property. For example, if we set up two endpoints like this:
```
endpoint1 = Endpoint(endpoint_type=EndpointType.OBSERVATIONS,
                     location=RequestLocation(postal_code="54660"))
endpoint2 = Endpoint(endpoint_type=EndpointType.FORECASTS)
```
add these to a list
```
endpoint_list = [endpoint1, endpoint2]
```
and then pass the list to the batch_request method
```
response_list = awx.batch_request(endpoints=endpoint_list,
                                  global_location=RequestLocation(postal_code="55124"))
```
the result will be that endpoint1 will request observations data for 54660, while endpoint2 will request forecast data for 55124.
Ok, let's take a look at a live example:
#### Batch Requests Example
```
# add some imports
from aerisweather.responses.ForecastsResponse import ForecastsResponse
from aerisweather.responses.ObservationsResponse import ObservationsResponse
from aerisweather.responses.ObservationsSummaryResponse import ObservationsSummaryResponse
# get our aerisweather instance
awx = AerisWeather(app_id=app_id,
client_id=client_id,
client_secret=client_secret)
# define the first endpoint for our batch request
endpoint = Endpoint(endpoint_type=EndpointType.OBSERVATIONS,
action=RequestAction.OBSERVATIONS.CLOSEST,
params={ParameterType.OBSERVATIONS.P: "54601"})
# define the second endpoint of our batch request
endpoint2 = Endpoint(endpoint_type=EndpointType.FORECASTS,
params={ParameterType.FORECASTS.LIMIT: "1"})
# and then the third
endpoint3 = Endpoint(endpoint_type=EndpointType.OBSERVATIONS_SUMMARY)
# add all three endpoints to a list
endpoints = [endpoint, endpoint2, endpoint3]
# pass the list of endpoints to the batch_request method, along with some global properties that will apply to all of
# the endpoints (unless they override it)
response_list = awx.batch_request(endpoints=endpoints,
global_location=RequestLocation(postal_code="54660"))
for resp in response_list:
if type(resp) is ObservationsResponse:
# Observations
print ("Observations Data")
obs = resp
loc = obs.loc
print("loc.latitude = " + str(loc.lat))
place = obs.place
print("place.state = " + place.state)
print("")
elif type(resp) is ForecastsResponse:
# Forecasts
print("Forecasts Data")
forecast = resp
print("forecast.loc.lat = " + str(forecast.loc.lat))
print("forecast.interval = " + forecast.interval)
period = forecast.periods[0]
print("period.weather = " + period.weather)
print("")
elif type(resp) is ObservationsSummaryResponse:
# ObservationsSummary
print("Observations Summary Data")
obs_sum = resp
place = obs_sum.place
print("place.state = " + place.state)
periods = obs_sum.periods
temp = periods[0].temp
print("temp.avgF = " + str(temp.avgF))
print("")
```
SOP017 - Add app-deploy AD group
================================
Description
-----------
If the Big Data Cluster was installed without an Active Directory group,
you can add one post install using this notebook.
### Steps
### Parameters
```
user_or_group_name = "<INSERT USER/GROUP NAME>"
realm = "<INSERT REALM>" # Upper case
sid = "" # To find the SID of the user or the group being added, you can use Get-ADUser or Get-ADGroup PowerShell commands.
role = "appReader"
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Create helper function to run `sqlcmd` against the controller database
```
import pandas
from io import StringIO
pandas.set_option('display.max_colwidth', None)  # disable truncation (use -1 on older pandas versions)
name = 'controldb-0'
container = 'mssql-server'
def run_sqlcmd(query):
command=f"""export SQLCMDPASSWORD=$(cat /var/run/secrets/credentials/mssql-sa-password/password); /opt/mssql-tools/bin/sqlcmd -b -S . -U sa -Q "SET NOCOUNT ON; {query}" -d controller -s"^" -W > /tmp/out.csv; sed -i 2d /tmp/out.csv; cat /tmp/out.csv"""
output=stream(api.connect_get_namespaced_pod_exec, name, namespace, command=['/bin/sh', '-c', command], container=container, stderr=True, stdout=True)
print(output)
print("Function defined")
```
### Insert user or group into the controller database roles table
```
run_sqlcmd(f"INSERT INTO [controller].[auth].[roles] VALUES (N'{user_or_group_name}@{realm}', '{role}')")
```
### Insert user or group into the controller database active\_directory\_principals tables
```
run_sqlcmd(f"INSERT INTO [controller].[auth].[active_directory_principals] VALUES (N'{user_or_group_name}@{realm}', N'{sid}')")
print('Notebook execution complete.')
```
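For illustration only: the parameter values below are hypothetical, but they show the T-SQL string that `run_sqlcmd` receives after f-string interpolation. Note that the values are interpolated directly into the statement, so they should come from a trusted operator.

```python
# Illustration only: the T-SQL that run_sqlcmd would execute, shown for
# sample (hypothetical) parameter values.
user_or_group_name = "bdc-app-deployers"  # hypothetical group name
realm = "CONTOSO.COM"                     # hypothetical realm, upper case
role = "appReader"

query = f"INSERT INTO [controller].[auth].[roles] VALUES (N'{user_or_group_name}@{realm}', '{role}')"
print(query)
```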
```
import nengo
import numpy as np
import matplotlib.pyplot as plt
import gym
def softmax(x):
return np.exp(x)/sum(np.exp(x))
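# Note: the softmax above can overflow for large x, since np.exp grows fast.
# A numerically stable variant (an optional sketch, not used below) subtracts
# the max before exponentiating; the result is mathematically identical.
def softmax_stable(x):
    z = np.exp(x - np.max(x))
    return z / np.sum(z)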
# master class that performs environment interaction and learning
class Master():
def __init__(self,
env,
dt,
stepSize=1):
# gym
self.env = env
if type(env.action_space) == gym.spaces.discrete.Discrete:
self.action_dim = env.action_space.n
self.discrete_actions = True
else:
self.action_dim = env.action_space.shape[0]
self.discrete_actions = False
self.state_dim = env.observation_space.shape[0]
self.stepsize = stepSize
self.dt = dt
self.state = env.reset()
self.reward = 0
self.done = False
self.reward_history = []
self.totalReward = 0
self.current_action = 0
self.output = np.zeros(self.action_dim)
def step(self,t,x):
if int(t / self.dt) % self.stepsize != 0:
return
if self.discrete_actions:
# action = np.random.choice(self.action_dim, p=softmax(x))
self.current_action = np.argmax(x)
else:
self.current_action = x
# self.env.render()
# print(f'STEP... x: {x}, action: {action}')
self.state, self.reward, self.done, _ = self.env.step(self.current_action)
self.totalReward += self.reward
if self.done:
# print('done')
self.reward = -2
self.totalReward += self.reward
self.reward_history.append(self.totalReward)
self.state = self.env.reset()
self.totalReward = 0
def calculate_Q(self,t,x):
if int(t*1000) % self.stepsize == 1:
qmax = x[np.argmax(x)]
self.output = x
self.output[self.current_action] = 0.9*qmax + self.reward
return self.output
dt = 1e-3
n_actor = 180
n_place = 100
place_radius = 10
actor_radius = 2
stepSize = 5
actor_lr = 0.05
tau = 0.01
fast_tau = 0
slow_tau = 0.01
env = gym.make('CartPole-v0')
# env = gym.make('MountainCarContinuous-v0')
# env = gym.make('Pendulum-v0')
# env = gym.make('Acrobot-v1')
master = Master(
env=env,
dt=dt,
stepSize=stepSize
)
model = nengo.Network()
with model:
state_node = nengo.Node(output=master.state)
reward_node = nengo.Node(output=master.reward)
place = nengo.Ensemble(n_neurons=1000,
dimensions=master.state_dim,
radius=place_radius)
nengo.Connection(state_node, place)
actor = nengo.Ensemble(n_neurons=1000,
dimensions=master.action_dim,
radius=actor_radius)
actor_learn_conn = nengo.Connection(place,
actor,
function=lambda x:[0]*master.action_dim,
learning_rule_type=nengo.PES(1e-3, pre_synapse=slow_tau),synapse=tau)
step_node = nengo.Node(output=master.step, size_in=master.action_dim)
nengo.Connection(actor, step_node)
q_node = nengo.Node(master.calculate_Q,size_in=2,size_out=2)
nengo.Connection(actor, q_node, synapse=tau)
nengo.Connection(q_node,actor_learn_conn.learning_rule,transform =-1,synapse=fast_tau) ##0.9*Q(s',a')+r
nengo.Connection(actor,actor_learn_conn.learning_rule,transform =1,synapse=slow_tau)#Q(s,a)
actor_probe = nengo.Probe(actor, synapse=None)
place_probe = nengo.Probe(place, synapse=None)
q_probe = nengo.Probe(q_node, synapse=None)
with nengo.Simulator(model) as sim:
sim.run(10)
master.env.close()
plt.plot(sim.data[q_probe], label='q')
plt.plot(sim.data[actor_probe], label='actor')
# plt.plot(sim.data[critic_probe], label='critic')
# plt.plot(sim.data[place_probe], label='place')
plt.legend()
plt.plot(master.reward_history)
plt.plot(np.convolve(master.reward_history, np.ones(10), 'valid'))
```
# Melon Chart Scraping
Scraping of all sites mentioned below was done for educational purposes only. <br>
https://www.melon.com/chart/
```
from bs4 import BeautifulSoup
import requests
res = requests.get('https://www.melon.com/chart/')
dir(res)
# check the response status
res # Response [406]
res.raise_for_status # note: without () this is just the bound method; res.raise_for_status() would raise an HTTPError for 406
res.status_code # 406
```
Because the response is not [200], scraping is not possible!!
→ Unlike Selenium, requests/BeautifulSoup often gets blocked by site security
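One common workaround, sketched below, is to send a browser-like User-Agent header, since many sites reject the default python-requests User-Agent with 406/403. The header value here is illustrative, and success is not guaranteed for every site:

```python
# Many sites reject the default python-requests User-Agent with a 406/403.
# Sending a browser-like User-Agent header is a common workaround, though
# it is not guaranteed to work for every site.
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def fetch(url):
    # check res.status_code before parsing with BeautifulSoup
    return requests.get(url, headers=headers)

# res = fetch('https://www.melon.com/chart/')
# res.status_code  # often 200 once a User-Agent is set
```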
---
# Daum Economy News Scraping
http://media.daum.net/economic/
```
path = 'http://media.daum.net/economic'
res = requests.get(path)
res.status_code # 200
# get only the HTML part of the response
type(res.content), res.content # bytestring - English is fine, but Korean text also turns into bytes
soup = BeautifulSoup(res.content, 'html.parser')
soup # brings in the HTML document as-is; the Korean text is visible too
# loading as text instead
type(res.text), res.text # str - Korean text displays as-is
soup = BeautifulSoup(res.text, 'html.parser')
soup
```
#### 1. Using select(CSS selector)
```
txt = soup.select('div > strong.tit_thumb > a[href].link_txt') # a[href]: CSS attribute selector
txt, type(txt), len(txt) # a ResultSet of 5 elements
# get only the text from a single tag (Tag object)
txt[0].text.strip()
# get the value of an attribute
txt[0]['href'] # the href="--" part of <.....href="--"......>
txt[0].attrs['href'] # same result
# for loop
for t in txt:
    print(t.text.strip()) # it's a good habit to always append strip()
    print(t['href'].strip())
```
#### 2. Using find(tag element)
Doing the same thing with find() is cumbersome, since you have to repeatedly walk down the DOM tree.
```
type(soup.find_all()), type(soup.find()) # find_all() returns a ResultSet; find() returns a single Tag
```
With find_all(), the result is a ResultSet, so you cannot chain .find methods directly on it.
```
# finding the CSS selector div > strong.tit_thumb > a[href].link_txt with find() instead
# cannot be written as soup.find_all('div').find_all('strong', class_='tit_thumb').find_all('a', class_='link_txt')
div = soup.find_all('div')
st = div[1].find_all('strong', class_='tit_thumb') # ------------1)
a = []
for s in st:
    a.append(s.find('a', class_='link_txt')) # append the Tag object to the list
len(a), a # 17 in total - a different result
```
Why did select() return 5 results while this returned 17? <br>
* The CSS selector div > strong means a strong tag that is a *direct child* of a div. But finding with find_all as in 1) above also matches div >...>...> strong (i.e., div strong, any descendant), so more results come back.
Therefore, **use select() rather than find()**!!!
To pick the selector you need for scraping, always check it first in the Chrome DevTools **Elements panel with Ctrl+F** and copy it from there! <br>
Once you understand the principle, you can simply right-click the element and use **Copy** to grab its selector, XPath, etc. - faster and more accurate!
---
# Saving Scraping Results for Later Use
#### 1. Building a DataFrame
```
contents = []
for t in txt:
title = t.text.strip()
link = t['href'].strip()
content = [title, link]
contents.append(content)
contents
len(contents)
import pandas as pd
# build a DataFrame from a list of lists - each inner list becomes a row
df = pd.DataFrame(contents, columns=['Title', 'Link'])
df
```
#### 2. Exporting to Excel
Once saved, an Excel workbook cannot be appended to. If you want to add more content, you have to rewrite and save the entire workbook again.
```
df.to_excel('./storage/daum_news.xlsx', sheet_name='economy_news', index=False) # unconditionally overwrites if the file already exists
print('Save complete--------------------------------------')
```
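Since an Excel workbook can't be appended to, one alternative, sketched below with pandas' CSV writer, is to append new rows to a CSV file using `mode='a'`, writing the header only on the first write:

```python
# Sketch: unlike to_excel, to_csv can append with mode='a'; write the
# header only when the file doesn't exist yet.
import os
import tempfile
import pandas as pd

def append_rows(df, path):
    write_header = not os.path.exists(path)
    df.to_csv(path, mode='a', header=write_header, index=False)

path = os.path.join(tempfile.mkdtemp(), 'daum_news.csv')
append_rows(pd.DataFrame({'Title': ['first'], 'Link': ['http://a']}), path)
append_rows(pd.DataFrame({'Title': ['second'], 'Link': ['http://b']}), path)
print(pd.read_csv(path))
```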
# Algo - Edit Distance
The edit distance, or [Levenshtein](https://en.wikipedia.org/wiki/Levenshtein_distance) distance, measures a distance between two words and, by extension, between two sequences.
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
```
## Problem Statement
### Q1: Simple distance between two words of the same length
A distance between two words... is simplest when the two words have the same length: we count the number of differing positions.
### Q2: Simple distance between two words of different lengths
We build this distance as the difference in lengths, plus the distance between the shorter word and all same-length subsequences taken from the longer string.
### Q3: A convoluted distance...
This time, we cut each of the two words in two at an arbitrary position. We apply the previous distance to each of the two pairs of pieces and sum the results. All that remains is to minimize that sum over the set of possible cuts.
### Q4: Implement the algorithm from the Wikipedia page
[Levenshtein](https://fr.wikipedia.org/wiki/Distance_de_Levenshtein)
## Answers
### Q1: Simple distance between two words of the same length
A distance between two words... is simplest when the two words have the same length: we count the number of differing positions.
```
def distance_meme_longueur(m1, m2):
    if len(m1) != len(m2):
        raise ValueError("m1 and m2 have different lengths")
    d = 0
    for c1, c2 in zip(m1, m2):
        if c1 != c2:
            d += 1
    return d
distance_meme_longueur('abcef', 'abcde')
```
We check that the function indeed raises an exception when the strings have different lengths.
```
try:
distance_meme_longueur('a', 'bb')
except Exception as e:
print(e)
```
### Q2: Simple distance between two words of different lengths
We build this distance as the difference in lengths, plus the distance between the shorter word and all same-length subsequences taken from the longer string.
```
def distance(m1, m2):
    if len(m1) < len(m2):
        return distance(m2, m1)
    if len(m1) == len(m2):
        return distance_meme_longueur(m1, m2)
    d = len(m1) - len(m2)
    # all len(m2)-length windows of m1 start at indices 0..d inclusive
    mind = [distance_meme_longueur(m1[i:i+len(m2)], m2)
            for i in range(0, d + 1)]
    return d + min(mind)
distance('aa', 'aa'), distance('aa', 'aaa'), distance('aa', 'bbb')
```
### Q3: A convoluted distance...
This time, we cut each of the two words in two at an arbitrary position. We apply the previous distance to each of the two pairs of pieces and sum the results. All that remains is to minimize that sum over the set of possible cuts.
```
def distance_alambiquee(m1, m2):
    mini = None
    for i in range(len(m1)):
        for j in range(len(m2)):
            d = distance(m1[:i], m2[:j]) + distance(m1[i:], m2[j:])
            if mini is None or d < mini:
                mini = d
            # "verlan" option: cross the two pieces over
            d = distance(m1[:i], m2[j:]) + distance(m1[i:], m2[:j]) + 0.5
            if d < mini:
                mini = d
    return mini
(distance('abc', 'ac'),
distance_alambiquee('abc', 'ac'),
distance_alambiquee('abc', 'ca'),
distance_alambiquee('b', 'b'))
```
### Q4: Implement the algorithm from the Wikipedia page
[Levenshtein](https://fr.wikipedia.org/wiki/Distance_de_Levenshtein)
The first implementation follows the algorithm described on the Wikipedia page.
```
def levenstein(m1, m2):
d = {}
d[0,0] = 0
for i in range(len(m1) + 1):
d[i, 0] = i
for j in range(len(m2) + 1):
d[0, j] = j
for i in range(1, len(m1) + 1):
for j in range(1, len(m2) + 1):
d[i, j] = min(d[i-1,j] +1, d[i,j-1] +1,
d[i-1, j-1] + (1 if m1[i-1] != m2[j-1] else 0))
return d[len(m1), len(m2)]
levenstein('abc', 'ac')
```
The second version is more convoluted: it is a recursive variant that slightly modifies the convoluted distance above.
```
def distance_alambiquee_levenstein(m1, m2):
mini = None
for i in range(len(m1)):
for j in range(len(m2)):
if i > 0 and i < len(m1) - 1 and j > 0 and j < len(m2) - 1:
d1 = distance_alambiquee_levenstein(m1[:i], m2[:j])
d2 = distance_alambiquee_levenstein(m1[i:], m2[j:])
else:
d1 = distance(m1[:i], m2[:j])
d2 = distance(m1[i:], m2[j:])
d = d1 + d2
if mini is None or d < mini:
mini = d
return mini
(distance_alambiquee('abcde', 'ace'),
levenstein('abcde', 'ace'),
distance_alambiquee_levenstein('abcde', 'ace'))
```
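The dictionary-based `levenstein` implementation above stores the whole dynamic-programming matrix. A common space optimization, sketched here, keeps only the previous row, reducing memory from O(len(m1) * len(m2)) to O(len(m2)):

```python
# Space-optimized Levenshtein: keep only the previous row of the DP matrix.
def levenshtein_two_rows(m1, m2):
    prev = list(range(len(m2) + 1))
    for i in range(1, len(m1) + 1):
        curr = [i] + [0] * len(m2)
        for j in range(1, len(m2) + 1):
            cost = 0 if m1[i - 1] == m2[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[len(m2)]

levenshtein_two_rows('abc', 'ac')  # 1, same as the matrix version
```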
# Data Bootcamp: Exam practice & review
We review the material we've covered to date: Python fundamentals, data input with Pandas, and graphics with Matplotlib. Questions marked *Bonus* are more difficult and are there to give the experts something to do.
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course [Data Bootcamp](http://databootcamp.nyuecon.com/).
This version was modified by (add your name in bold here). And add your initials to the notebook's name at the top.
## Preliminaries
Import packages and check the date.
```
# import packages
import pandas as pd # data management
import matplotlib.pyplot as plt # graphics
# IPython command, puts plots in notebook
%matplotlib inline
# check Python version
import datetime as dt
import sys
print('Today is', dt.date.today())
print('What version of Python are we running? \n', sys.version, sep='')
```
## IPython review
We review some of the basics of IPython. You won't be asked about IPython on the exam, but since the exam is an IPython notebook, it's essential for you to be able to work with one.
**Question 1.**
1. How do you set/choose the current cell?
2. How do you edit the current cell?
3. How do you add a new cell below the current cell?
4. How do you specify the current cell as code or text?
5. How do you delete the current cell?
6. How do you move the current cell up or down?
7. How do you run the current cell?
8. Add your name in bold to the bottom of the first cell in this notebook. *Bonus:* Add a link to your LinkedIn or Facebook page.
9. How do you save the contents of your notebook?
**Answers.** Enter your answers below:
1.
2.
3.
4.
5.
6.
7.
8.
9.
## Python fundamentals
**Question 2.** Describe the type and content of these expressions:
1. `x = 2`
2. `y = 3.0`
3. `z = "3.0"`
4. `x/y`
5. `letters = 'abcd'`
6. `letters[-1]`
7. `xyz = [x, y, z]`
8. `xyz[1]`
9. `abcd = list(letters)`
10. `abcd[-2]`
11. `case = {'a': 'A', 'b': 'B', 'c': 'C'}`
12. `case['c']`
13. `2 >= 1`
14. `x == 2`
**Answers.** Enter your answers below:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
```
# code cell for experimenting
```
**Question 3.** These get progressively more difficult:
1. What type is `dollars = '$1,234.5'`?
2. Find and apply a method that eliminates the dollar sign from `dollars`.
3. Find and apply a method that eliminates the comma from `dollars`.
4. Eliminate both the dollar sign and comma from `dollars` and covert the result to a float.
5. Combine the last three steps in one line.
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
**Question 4.**
For this problem we set `letters = 'abcd'` as in problem 2.
1. Find and apply a method that converts the lower case letter `'a'` to the upper case letter `'A'`.
2. Write a loop that goes through the elements of `letters` and prints their upper case versions.
3. *Bonus:* Write a loop that goes through the elements of `letters`. On each iteration, print a string consisting of the upper and lower case versions together; eg, `'Aa'`.
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
**Question 5.**
For this problem `xyz` is the same as defined in problem 2
1. Write a loop that goes through the elements of `xyz` and prints them.
2. Modify the loop to print both the elements of `xyz` and their type.
3. Modify the loop to print only those elements that are not strings.
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
## Data input with Pandas
We explore the public indebtedness of Argentina (country code ARG), Germany (DEU), and Greece (GRC). For each one, we provide the ratio of government debt to GDP for every second year starting in 2002. The data come from the IMF's World Economic Outlook.
**Question 6.** Write code in the cell below that reads the csv file we posted at
http://pages.stern.nyu.edu/~dbackus/Data/debt.csv
Assign the contents of the file to the object `debt`.
The rest of the questions in this notebook will refer to the object `debt` you create below.
```
# if that failed, you can generate the same data with
data = {'ARG': [137.5, 106.0, 61.8, 47.0, 39.1, 37.3, 48.6],
'DEU': [59.2, 64.6, 66.3, 64.9, 80.3, 79.0, 73.1],
'GRC': [98.1, 94.9, 102.9, 108.8, 145.7, 156.5, 177.2],
'Year': [2002, 2004, 2006, 2008, 2010, 2012, 2014]}
debt = pd.DataFrame(data)
```
**Question 7.** Let's describe the object `debt`:
1. What type of object is `debt`?
2. What are its dimensions?
3. What are its column labels? Row labels?
4. What dtypes are the columns?
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
**Question 8.** Do the following with `debt`:
1. Set `Year` as the index.
2. Change the column labels from country codes to country names. Do this using both a dictionary and a list.
3. Print the result to verify your changes.
The next three get progressively more difficult:
4. Compute the mean (average) debt for each country.
5. *Bonus:* Compute the mean debt for each year.
6. *Bonus:* Compute the mean debt over both countries and years.
Some simple plots:
7. Plot each country's debt against `Year` using a `plot` method.
8. Change the linewidth to 2.
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
## Python graphics with Matplotlib
We'll continue to use the data in `debt`. **Make sure the index is the year.**
**Question 9.**
1. Create figure and axis objects with `plt.subplots()`.
2. Graph public indebtedness over time using our `debt` data and the axis object we just created.
3. Change the line width to 2.
4. Change the colors to `['red', 'green', 'blue']`.
5. Change the lower limit on the y axis to zero.
6. Add a title to the graph.
7. Add a label to the y axis -- something like "Public Debt (% of GDP)".
8. *Bonus:* Make the line for Argentina thicker than the others. *Hint:* Do this by `plot`ting a separate line applied to the same axis object.
In each case, create a code cell that delivers the answer. Please write the question number in a comment in each cell.
## Optional challenging questions
Good practice, but more than you'll see on the exam.
**Question 10.** In the figure of the previous question:
1. Add a title, 14-point font, right-justified.
2. Put the legend in the lower left corner.
3. Change the line style to dashed. (This will take some Googling, or a good guess.)
4. Eliminate the top and right "spines," the lines that outline the figure.
5. Save the figure as a pdf file.
6. Change the style to 538.
**Question 11.** We ran across this one in the OECD healthcare data. The country names had numbers appended, which served as footnotes in the original spreadsheet but looked dumb when we used them as index labels. The question is how to eliminate them. A short version of the country names is
`names = ['Australia 1', 'Canada 2', 'Chile 3', 'United States 1']`
Do each of these in a separate code cell:
1. Apply the `rsplit()` method to `us = names[-1]`. What do you get?
2. Consult the documentation for `rsplit` to split `us` into two pieces, the country name and the number 1. How would you extract just the country name?
3. Use a loop to strip the numbers from all of the elements of `names`.
4. Use a list comprehension to strip the numbers from all of the elements of `names`.
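One way Question 11 can be sketched (assuming the footnote is always the last whitespace-separated token):

```python
names = ['Australia 1', 'Canada 2', 'Chile 3', 'United States 1']

# 1-2. rsplit splits from the right; maxsplit=1 peels off the trailing footnote
us = names[-1]
print(us.rsplit(maxsplit=1))        # ['United States', '1']
country = us.rsplit(maxsplit=1)[0]  # just the country name

# 3. loop version
stripped = []
for name in names:
    stripped.append(name.rsplit(maxsplit=1)[0])

# 4. list-comprehension version
stripped_lc = [name.rsplit(maxsplit=1)[0] for name in names]
print(stripped_lc)  # ['Australia', 'Canada', 'Chile', 'United States']
```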
Computers are designed to perform numerical calculations, but there are some important details about working with numbers that every programmer working with quantitative data should know. Python (and most other programming languages) distinguishes between two different types of numbers:
* Integers are called `int` values in the Python language. They can only represent whole numbers (negative, zero, or positive) that don't have a fractional component.
* Real numbers are called `float` values (or *floating point values*) in the Python language. They can represent whole or fractional numbers but have some limitations.
The type of a number is evident from the way it is displayed: `int` values have no decimal point and `float` values always have a decimal point.
```
# Some int values
2
1 + 3
-1234567890000000000
# Some float values
1.2
3.0
```
When a `float` value is combined with an `int` value using some arithmetic operator, then the result is always a `float` value. In most cases, two integers combine to form another integer, but any number (`int` or `float`) divided by another will be a `float` value. Very large or very small `float` values are displayed using scientific notation.
```
1.5 + 2
3 / 1
-12345678900000000000.0
```
The `type` function can be used to find the type of any number.
```
type(3)
type(3 / 1)
```
The `type` of an expression is the type of its final value. So, the `type` function will never indicate that the type of an expression is a name, because names are always evaluated to their assigned values.
```
x = 3
type(x) # The type of x is an int, not a name
type(x + 2.5)
```
## More About Float Values
Float values are very flexible, but they do have limits.
1. A `float` can represent extremely large and extremely small numbers. There are limits, but you will rarely encounter them.
2. A `float` only represents 15 or 16 significant digits for any number; the remaining precision is lost. This limited precision is enough for the vast majority of applications.
3. After combining `float` values with arithmetic, the last few digits may be incorrect. Small rounding errors are often confusing when first encountered.
The first limit can be observed in two ways. If the result of a computation is a very large number, then it is represented as infinity. If the result is a very small number, then it is represented as zero.
```
2e306 * 10
2e306 * 100
2e-322 / 10
2e-322 / 100
```
The second limit can be observed by an expression that involves numbers with more than 15 significant digits. These extra digits are discarded before any arithmetic is carried out.
```
0.6666666666666666 - 0.6666666666666666123456789
```
The third limit can be observed when taking the difference between two expressions that should be equivalent. For example, the expression `2 ** 0.5` computes the square root of 2, but squaring this value does not exactly recover 2.
```
2 ** 0.5
(2 ** 0.5) * (2 ** 0.5)
(2 ** 0.5) * (2 ** 0.5) - 2
```
The final result above is `0.0000000000000004440892098500626`, a number that is very close to zero. The correct answer to this arithmetic expression is 0, but a small error in the final significant digit appears very different in scientific notation. This behavior appears in almost all programming languages because it is the result of the standard way that arithmetic is carried out on computers.
Although `float` values are not always exact, they are certainly reliable and work the same way across all different kinds of computers and programming languages.
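A practical consequence: compare floats with a tolerance rather than with `==`. For example:

```python
import math

a = (2 ** 0.5) * (2 ** 0.5)
print(a == 2)              # False: the last digit is off
print(math.isclose(a, 2))  # True: equal within a small relative tolerance
```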
# Implementing a Route Planner
In this project you will use A\* search to implement a "Google-maps" style route planning algorithm.
```
# Run this cell first!
from helpers import Map, load_map, show_map
from student_code import shortest_path
%load_ext autoreload
%autoreload 2
```
### Map Basics
```
map_10 = load_map('map-10.pickle')
show_map(map_10)
```
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. On the graph above, an edge between two nodes (intersections) represents a literal straight road, not just an abstract connection between two cities.
These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`.
**Intersections**
The `intersections` are represented as a dictionary.
In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
```
map_10.intersections
```
**Roads**
The `roads` property is a list where, if `i` is an intersection, `roads[i]` contains a list of the intersections that intersection `i` connects to.
```
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
```
### Advanced Visualizations
The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39).
The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.
* `start` - The "start" node for the search algorithm.
* `goal` - The "goal" node.
* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
```
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
```
### Writing your algorithm
You should open the file `student_code.py` in another tab and work on your algorithm there. Do that by selecting `File > Open` and then selecting the appropriate file.
The algorithm you write will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start, and goal as above, your algorithm should produce the path `[5, 16, 37, 12, 34]`:
```bash
> shortest_path(map_40, 5, 34)
[5, 16, 37, 12, 34]
```
```
path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
```
### Testing your Code
If the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:
**Submission Checklist**
1. Does my code pass all tests?
2. Does my code implement `A*` search and not some other search algorithm?
3. Do I use an **admissible heuristic** to direct search efforts towards the goal?
4. Do I use data structures which avoid unnecessarily slow lookups?
When you can answer "yes" to all of these questions, submit by pressing the Submit button in the lower right!
```
from test import test
test(shortest_path)
```
<img src="https://s8.hostingkartinok.com/uploads/images/2018/08/308b49fcfbc619d629fe4604bceb67ac.jpg" width=500, height=450>
<h3 style="text-align: center;"><b>Phystech School of Applied Mathematics and Informatics (PAMI), MIPT</b></h3>
---
<h2 style="text-align: center;"><b>Homework: a neuron with different activation functions</b></h2>
---
### First, work through the notebooks `[seminar]perceptron_new.ipynb` and `[seminar]neuron_new.ipynb`!
**A very common question is: which activation function should you choose?** In this notebook you are invited to find out for yourself by comparing neurons with different activation functions (their quality on two datasets). Make sure you run all the experiments with the different neuron types under identical conditions (otherwise the comparison is unfair).
In this assignment you will need to:
- implement the class **`Neuron()`** with different activation functions yourself
- train and test this class on generated and real data (the files with real data are in the /data folder in this directory)
In this notebook you will implement a neuron with different activation functions: Sigmoid, ReLU, LeakyReLU, and ELU.
```
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd
```
***This notebook uses the function `numpy.random.rand`. To get reproducible values that let us check the answers to the assignment, we define our own function that takes a `seed`. It returns the same values for the same seed. Do not change the seed later in the code, or the answers may not match.***
```
def seed_random(seed, *args):
np.random.seed(seed)
return np.random.rand(*args)
```
---
Once again we are solving a binary classification problem (2 classes: 1 or 0):
$$
Loss(\hat{y}, y) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2
$$
Here $w \cdot X_i$ is a dot product, and $\sigma(w \cdot X_i) =\frac{1}{1+e^{-w \cdot X_i}}$ is the sigmoid ($i$ is the index of the object in the dataset).
**Note:** Here the bias term $b$ is assumed to be part of the weight vector as $w_0$. Then, if we append a column of ones to $X$, $b$ enters the dot product exactly as a bias term.
```
def Loss(y_pred, y):
return ((y_pred - y) ** 2).mean() / 2.0
```
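A quick sanity check of `Loss` on a tiny example (the function is re-stated here so the cell is self-contained): it is the mean squared error divided by 2.

```python
import numpy as np

def Loss(y_pred, y):
    return ((y_pred - y) ** 2).mean() / 2.0

y_pred = np.array([1.0, 0.0, 1.0])
y      = np.array([1.0, 1.0, 0.0])
print(Loss(y_pred, y))  # (0 + 1 + 1) / 3 / 2 = 0.333...
```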
Below, several activation functions are proposed, and you need to implement the `Neuron` class by analogy with what was done at the seminar. The principle is the same, but the weight-update formula and the prediction formula change.
**The setup is this**: there will be three activation functions. For the first, all the formulas are given and you only need to code them. For the second, the derivative is given but not substituted into the $Loss$ -- you have to do that yourself. For the third, only the formula of the function itself is given.
<h2 style="text-align: center;"><b>Neuron with ReLU (Rectified Linear Unit)</b></h2>
ReLU is the most commonly used activation function in neural networks (at least it was a couple of years ago). It looks very simple:
\begin{equation*}
ReLU(x) =
\begin{cases}
0, &\text{$x \le 0$}\\
x, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
Or equivalently:
$$
ReLU(x) = \max(0, x)
$$
Rectified Linear Unit roughly translates as a "clipped linear function": in essence, we simply don't let negative numbers through.
The derivative is taken piecewise: on the intervals where the function is smooth (x < 0 and x > 0) we differentiate as usual, and at zero we define the derivative to be zero:
\begin{equation*}
ReLU'(x) =
\begin{cases}
0, &\text{$x \le 0$}\\
1, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
The function and its derivative look like this:
<img src="https://upload-images.jianshu.io/upload_images/1828517-0828da0d1164c024.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" width=800, height=400>
Substituting ReLU into the Loss:
$$Loss(\hat{y}, y) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (ReLU(w \cdot X_i) - y_i)^2 = \begin{equation*}
\frac{1}{2n}\sum_{i=1}^{n}
\begin{cases}
y_i^2, &{w \cdot X_i \le 0}\\
(w \cdot X_i - y_i)^2, &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}$$
(remember that $w \cdot X_i$ is a number here -- the result of the dot product of two vectors).
Then the gradient-descent weight-update formula is the following (in matrix form; we recommend deriving yourself how it follows from the single-object formula, see `[seminar]neuron.ipynb`):
$$ \frac{\partial Loss}{\partial w} = \begin{equation*}
\frac{1}{n}\sum_{i=1}^{n}
\begin{cases}
0, &{w \cdot X_i \le 0}\\
X_i^T (w \cdot X_i - y_i), &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}$$
(recall that here $w \cdot X$ is the matrix product of the vector $w$ (a vector is a matrix too, isn't it?) and the matrix $X$)
Why is it 0 in the first case? Because the weights do not appear in the formula $y_i^2$, and we are differentiating with respect to the weights $w$.
* Implement ReLU and its derivative:
```
def relu(x):
    """The ReLU function"""
    return x * (x > 0)

def relu_derivative(y):
    """Derivative of ReLU. We compute it not from the x that was fed into ReLU
    but from the value y that ReLU returned. We could just as well compute it
    from x (the code would not even change), but on the backward pass we
    usually no longer have the X @ w that was fed into the function, while the
    computed activation value -- that very y -- is still available."""
    return 1. * (y > 0)
```
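A quick sanity check of the two functions on a small array (re-stated here so the cell runs on its own):

```python
import numpy as np

def relu(x):
    return x * (x > 0)

def relu_derivative(y):
    return 1. * (y > 0)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))                   # [0. 0. 3.]
print(relu_derivative(relu(x)))  # [0. 0. 1.]
```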
Now write a neuron with ReLU. Everything is very similar to the perceptron, but the weight updates and the activation function are different:
```
class NeuronReLU:
    def __init__(self, w=None, b=0):
        """
        :param: w -- weight vector
        :param: b -- bias
        """
        # We don't know the shape of X yet, so we don't know how many weights there will be
        self.w = w
        self.b = b

    def activate(self, x):
        return relu(x)

    def forward_pass(self, X):
        """
        Computes the neuron's answer for a batch of objects
        :param: X -- matrix of examples of shape (n, m), one object per row
        :return: vector of shape (n, 1) with the neuron's answers
        """
        y_pred = self.activate(X @ self.w.reshape(X.shape[1], 1) + self.b)
        return y_pred.reshape(-1, 1)

    def backward_pass(self, X, y, y_pred, learning_rate=0.005):
        """
        Updates the neuron's weights based on this batch
        :param: X -- input matrix of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                learning_rate -- the learning rate (alpha in the formulas above)
        Nothing needs to be returned; just update the weights correctly
        using gradient descent.
        """
        n = len(y)
        y = np.array(y).reshape(-1, 1)
        z = X @ self.w.reshape(X.shape[1], 1) + self.b
        # (z - y) * relu_derivative(z) equals (relu(z) - y) * relu'(z),
        # since the derivative zeroes out exactly where relu(z) != z
        self.w -= learning_rate * (X.T @ ((z - y) * relu_derivative(z))) / n
        self.b -= learning_rate * np.mean((z - y) * relu_derivative(z))

    def fit(self, X, y, num_epochs=300):
        """
        Descend to the minimum of the loss
        :param: X -- matrix of objects of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                num_epochs -- number of training iterations
        :return: losses -- vector of loss values
        """
        if self.w is None:
            self.w = np.zeros((X.shape[1], 1))  # column of shape (m, 1)
            self.b = 0  # bias (a number)
        Loss_values = []  # loss values at the successive weight updates
        for i in range(num_epochs):
            # predictions with the current weights
            y_pred = self.forward_pass(X)
            # loss with the current weights
            Loss_values.append(Loss(y_pred, y))
            # update the weights based on where we were wrong
            self.backward_pass(X, y, y_pred)
        return Loss_values
```
<h3 style="text-align: center;"><b>Testing the neuron with ReLU</b></h3>
Here you need to test the new neuron yourself **on the same data ("Apples and Pears")**, by analogy with what was done for the perceptron (feel free to copy that code, but be careful -- a few things in it will most likely need fixing).
In the end you should produce:
* a plot showing how the loss $Loss$ changes with the number of training iterations
* a plot of the dataset colored by the neuron's predictions
***NOTE***: please keep checking the `.shape` of the matrices and vectors `self.w`, `X`, and `y` inside the class. Very often a bug is fixed by a transpose or a `.reshape()`. Always check what you are multiplying by what and what shape you expect at the output -- it really helps to avoid confusion.
**(for the test) Checking forward_pass()**
```
w = np.array([1., 2.]).reshape(2, 1)
b = 2.
X = np.array([[1., 3.],
[2., 4.],
[-1., -3.2]])
neuron = NeuronReLU(w, b)
y_pred = neuron.forward_pass(X)
print ("y_pred = " + str(y_pred))
```
*Hint: "**-0.**" is just zero*
**(for the test) Checking backward_pass()**
Please **do not change the default `learning_rate=0.005`**.
```
y = np.array([1, 0, 1]).reshape(3, 1)
neuron.backward_pass(X, y, y_pred)
print ("w = " + str(neuron.w))
print ("b = " + str(neuron.b))
```
"Яблоки и Груши":
```
data = pd.read_csv("./data/apples_pears.csv")
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show()
X = data.iloc[:, :2].values  # object-feature matrix
y = data['target'].values.reshape((-1, 1))  # classes (a column of zeros and ones)
```
Plot the loss while training the ReLU neuron on this dataset:
```
%%time
neuron = NeuronReLU()
Loss_values = neuron.fit(X, y)
plt.figure(figsize=(10, 8))
plt.plot(Loss_values)
plt.title('Loss', fontsize=15)
plt.xlabel('iteration', fontsize=14)
plt.ylabel(r'$Loss(\hat{y}, y)$', fontsize=14)
plt.show()
```
Most likely your loss is a flat line right now, and you can see that the weights are not updating. But why?!
It's simple: we may not have told you this yet, but if you look closely, `self.w` and `self.b` are initialized with zeros at the start of the `.fit()` method. If you write out how the updates proceed, you will see that because of ReLU the weights simply never update when you start from all-zero initialization.
This is one of the reasons weights in neural networks are initialized with random numbers (usually from the interval [0, 1)).
Let's train the neuron with randomly initialized weights, **using our `seed_random` function with `seed=42 and 43` from the beginning of the notebook** (set 10000 iterations), and please **do not change the default `learning_rate=0.005`**.
```
%%time
neuron = NeuronReLU(w=seed_random(42, X.shape[1], 1), b=seed_random(43, 1))
Loss_values = neuron.fit(X, y, num_epochs=10000) # num_epochs=10000
plt.figure(figsize=(10, 8))
plt.plot(Loss_values)
plt.title('Loss', fontsize=15)
plt.xlabel('iteration', fontsize=14)
plt.ylabel(r'$Loss(\hat{y}, y)$', fontsize=14)
plt.show()
```
**(for the test) Checking the loss:**
Print the sum of the first five and the last five loss values when training with num_epochs=10000, rounded to 4 decimal places:
```
print(round(np.sum(Loss_values[:5]) + np.sum(Loss_values[-5:]), 4))
```
Let's see how this neuron predicts:
```
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show()
```
The split should come out more or less decent. But then why does the supposedly "advanced" ReLU, which is used all the time, predict worse (converge much more slowly) than the perceptron with the threshold activation, which nobody uses? In general, nobody knows in advance where and which activation function will "shine" -- it depends, among other things, on the data itself.
<img src="https://alumni.lscollege.ac.uk/files/2015/12/Interview-questions-square-image.jpg" width=400 height=300>
But there is one tendency: the threshold activation and the sigmoid (usually just the sigmoid) are most often used on the **output layer** of a network in classification tasks -- they predict the probability that an object belongs to one of the classes -- while the more advanced activations (ReLU and those below) are used inside the network, that is, on the **hidden layers**.
ReLU loosely models a neuron "firing", by analogy with a biological one, so putting ReLU on the output layer is usually a bad idea.
However, nothing stops you from putting ReLU on the output layer and the sigmoid inside. Deep Learning is a "very experimental" field: you can make a discovery with your own hands just by changing something small, for example the activation function.
**Pros of ReLU:**
* differentiable (with the value at zero defined separately)
* no vanishing-gradient problem, unlike the sigmoid
**Possible cons of ReLU:**
* not centered around 0 (which can hurt convergence speed)
* zeroes out all negative inputs, so the weights behind zeroed neurons may often *not update* -- this problem is sometimes called *dead neurons*
This last problem can be dealt with, namely:
<h2 style="text-align: center;"><b>Neuron with LeakyReLU (Leaky Rectified Linear Unit)</b></h2>
LeakyReLU differs only slightly from ReLU, but it often helps the network train faster, since it has no "dead neurons" problem:
\begin{equation*}
LeakyReLU(x) =
\begin{cases}
\alpha x, &\text{$x \le 0$}\\
x, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
where $\alpha$ is a small number between 0 and 1.
The derivative is taken the same way, but with $\alpha$ instead of zero:
\begin{equation*}
LeakyReLU'(x) =
\begin{cases}
\alpha, &\text{$x \le 0$}\\
1, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
The graph of this function:
<img src="https://cdn-images-1.medium.com/max/1600/0*UtLlZJ80TMIM7kXk." width=400 height=300>
Substituting LeakyReLU into the Loss:
$$
Loss(\hat{y}, y) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (LeakyReLU(w \cdot X_i) - y_i)^2 =
\begin{equation*}
\frac{1}{2n}\sum_{i=1}^{n}
\begin{cases}
(\alpha \cdot w \cdot X_i - y_i)^2, &{w \cdot X_i \le 0}\\
(w \cdot X_i - y_i)^2, &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}
$$
The gradient-descent weight-update formula:
$$ \frac{\partial Loss}{\partial w} = \begin{equation*}
\frac{1}{n}\sum_{i=1}^{n}
\begin{cases}
\alpha X_i^T (\alpha \, w \cdot X_i - y_i), &{w \cdot X_i \le 0}\\
X_i^T (w \cdot X_i - y_i), &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}$$
* Implement LeakyReLU and its derivative:
```
def leaky_relu(x, alpha=0.01):
    """The LeakyReLU function"""
    return x * (x > 0) + alpha * x * (x <= 0)

def leaky_relu_derivative(y, alpha=0.01):
    """Derivative of LeakyReLU. Here, too, we compute the derivative from the
    activation value y; see the explanation above for why."""
    return 1. * (y > 0) + alpha * (y <= 0)
```
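A quick check of the "no dead neurons" claim: unlike `relu_derivative`, `leaky_relu_derivative` keeps a small nonzero gradient for negative inputs (both derivatives are re-stated here so the cell runs on its own):

```python
import numpy as np

def relu_derivative(y):
    return 1. * (y > 0)

def leaky_relu_derivative(y, alpha=0.01):
    return 1. * (y > 0) + alpha * (y <= 0)

z = np.array([-5.0, -0.1, 2.0])
print(relu_derivative(z))        # [0. 0. 1.]  -- negative inputs get no gradient
print(leaky_relu_derivative(z))  # [0.01 0.01 1.]
```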
Now write a neuron with the LeakyReLU activation function. Everything is very similar to the perceptron, but the weight updates and the activation function are different:
```
class NeuronLeakyReLU:
    def __init__(self, w=None, b=0):
        """
        :param: w -- weight vector
        :param: b -- bias
        """
        # We don't know the shape of X yet, so we don't know how many weights there will be
        self.w = w
        self.b = b

    def activate(self, x):
        return leaky_relu(x)

    def forward_pass(self, X):
        """
        Computes the neuron's answer for a batch of objects
        :param: X -- matrix of examples of shape (n, m), one object per row
        :return: vector of shape (n, 1) with the neuron's answers
        """
        y_pred = self.activate(X @ self.w.reshape(X.shape[1], 1) + self.b)
        return y_pred.reshape(-1, 1)

    def backward_pass(self, X, y, y_pred, learning_rate=0.005):
        """
        Updates the neuron's weights based on this batch
        :param: X -- input matrix of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                learning_rate -- the learning rate (alpha in the formulas above)
        Nothing needs to be returned; just update the weights correctly
        using gradient descent.
        """
        n = len(y)
        y = np.array(y).reshape(-1, 1)
        z = X @ self.w.reshape(X.shape[1], 1) + self.b
        # gradient of the loss: (LeakyReLU(z) - y) * LeakyReLU'(z)
        grad = (leaky_relu(z) - y) * leaky_relu_derivative(z)
        self.w -= learning_rate * (X.T @ grad) / n
        self.b -= learning_rate * np.mean(grad)

    def fit(self, X, y, num_epochs=300):
        """
        Descend to the minimum of the loss
        :param: X -- matrix of objects of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                num_epochs -- number of training iterations
        :return: losses -- vector of loss values
        """
        if self.w is None:
            self.w = np.zeros((X.shape[1], 1))  # column of shape (m, 1)
            self.b = 0  # bias (a number)
        Loss_values = []  # loss values at the successive weight updates
        for i in range(num_epochs):
            # predictions with the current weights
            y_pred = self.forward_pass(X)
            # loss with the current weights
            Loss_values.append(Loss(y_pred, y))
            # update the weights based on where we were wrong
            self.backward_pass(X, y, y_pred)
        return Loss_values
```
<h3 style="text-align: center;"><b>Testing the neuron with LeakyReLU</b></h3>
***NOTE***: please keep checking the `.shape` of the matrices and vectors `self.w`, `X`, and `y` inside the class. Very often a bug is fixed by a transpose or a `.reshape()`. Always check what you are multiplying by what and what shape you expect at the output -- it really helps to avoid confusion.
**Everywhere below, for the tests, do not change $\alpha=0.01$ in `leaky_relu()` and `leaky_relu_derivative()`.**
Please **do not change the default `learning_rate=0.005`**.
**(for the test) Checking forward_pass()**
```
w = np.array([1., 2.]).reshape(2, 1)
b = 2.
X = np.array([[1., 3.],
[2., 4.],
[-1., -3.2]])
neuron = NeuronLeakyReLU(w, b)
y_pred = neuron.forward_pass(X)
print ("y_pred = " + str(y_pred))
```
*Hint: "**-0.**" is just zero*
**(for the test) Checking backward_pass()**
Please **do not change the default `learning_rate=0.005`**.
```
y = np.array([1, 0, 1]).reshape(3, 1)
neuron.backward_pass(X, y, y_pred)
print ("w = " + str(neuron.w))
print ("b = " + str(neuron.b))
```
"Яблоки и Груши":
```
data = pd.read_csv("./data/apples_pears.csv")
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show()
X = data.iloc[:, :2].values  # object-feature matrix
y = data['target'].values.reshape((-1, 1))  # classes (a column of zeros and ones)
```
Let's train the neuron with randomly initialized weights (set 10000 iterations).
Please **do not change the default `learning_rate=0.005`**.
```
%%time
neuron = NeuronLeakyReLU(w=seed_random(13, X.shape[1], 1), b=seed_random(14, 1))
Loss_values = neuron.fit(X, y, num_epochs=10000) # num_epochs=10000
plt.figure(figsize=(10, 8))
plt.plot(Loss_values)
plt.title('Loss', fontsize=15)
plt.xlabel('iteration', fontsize=14)
plt.ylabel(r'$Loss(\hat{y}, y)$', fontsize=14)
plt.show()
```
**(for the test) Checking the loss:**
Print the sum of the first five and the last five loss values when training with num_epochs=10000, rounded to 4 decimal places:
```
print(round(np.sum(Loss_values[:5]) + np.sum(Loss_values[-5:]), 4))
```
Let's see how this neuron predicts:
```
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show()
```
**Pros of LeakyReLU:**
* differentiable (with the value at zero defined separately)
* no vanishing-gradient problem, unlike the sigmoid
* no "dead neurons" problem, unlike ReLU
**Possible cons of LeakyReLU:**
* not centered around 0 (which can hurt convergence speed)
* somewhat sensitive to "noise" (see the Stanford lectures)
<h2 style="text-align: center;"><b>Neuron with ELU (Exponential Linear Unit)</b></h2>
<h2 style="text-align: center;"><b>(optional part, will not be graded)</b></h2>
ELU is a relatively recently proposed (2015) activation function which, according to the authors of the paper, works better than LeakyReLU. Here is its formula:
\begin{equation*}
ELU(\alpha, x) =
\begin{cases}
\alpha (e^x - 1), &\text{$x \le 0$}\\
x, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
where $\alpha$ is a small number between 0 and 1.
The derivative is taken the same way, but for $x \le 0$ it equals $ELU(\alpha, x) + \alpha$:
\begin{equation*}
ELU'(x) =
\begin{cases}
ELU(\alpha, x) + \alpha, &\text{$x \le 0$}\\
1, &\text{$x \gt 0$}
\end{cases}
\end{equation*}
A simple trick is used in the derivative here: $\alpha e^x$ is rewritten by subtracting and adding $\alpha$, which makes it easier to compute.
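Written out, the trick for $x \le 0$ is:

$$
\frac{d}{dx}\,\alpha(e^x - 1) = \alpha e^x = \alpha(e^x - 1) + \alpha = ELU(\alpha, x) + \alpha,
$$

so the derivative can be computed directly from the stored activation value.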
The graph of this function:
<img src="http://p0.ifengimg.com/pmop/2017/0907/A004001DD141881BFD8AD62E5D31028C3BE3FAD1_size14_w446_h354.png" width=500 height=400>
Substituting ELU into the Loss:
$$Loss(\hat{y}, y) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (ELU(\alpha, w \cdot X_i) - y_i)^2 = \begin{equation*}
\frac{1}{2n}\sum_{i=1}^{n}
\begin{cases}
(\alpha (e^{w \cdot X_i} - 1) - y_i)^2, &{w \cdot X_i \le 0}\\
(w \cdot X_i - y_i)^2, &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}$$
The gradient-descent weight-update formula... Here you need to write it out yourself, and it is a bit harder than before. Taking the derivative "head-on" is ugly and inconvenient; use the **chain rule**, also known as the **rule for differentiating a composite function**:
$$ \frac{\partial Loss}{\partial w} = \begin{equation*}
\frac{1}{n}\sum_{i=1}^{n}
\begin{cases}
X_i^T (ELU(\alpha, w \cdot X_i) + \alpha) (\alpha (e^{w \cdot X_i} - 1) - y_i), &{w \cdot X_i \le 0}\\
X_i^T (w \cdot X_i - y_i), &{w \cdot X_i \gt 0}
\end{cases}
\end{equation*}$$
* Implement ELU and its derivative:
```
def elu(x, alpha=0.01):
    """The ELU function"""
    return alpha * (np.exp(x) - 1) * (x <= 0) + x * (x > 0)

def elu_derivative(y, alpha=0.01):
    """Derivative of ELU, again computed from the activation value:
    for y <= 0 it equals ELU(alpha, x) + alpha = y + alpha."""
    return (y + alpha) * (y <= 0) + 1.0 * (y > 0)
```
Now write a neuron with the ELU activation function:
```
class NeuronELU:
    def __init__(self, w=None, b=0):
        """
        :param: w -- weight vector
        :param: b -- bias
        """
        # We don't know the shape of X yet, so we don't know how many weights there will be
        self.w = w
        self.b = b

    def activate(self, x):
        return elu(x)

    def forward_pass(self, X):
        """
        Computes the neuron's answer for a batch of objects
        :param: X -- matrix of examples of shape (n, m), one object per row
        :return: vector of shape (n, 1) with the neuron's answers
        """
        y_pred = self.activate(X @ self.w.reshape(X.shape[1], 1) + self.b)
        return y_pred.reshape(-1, 1)

    def backward_pass(self, X, y, y_pred, learning_rate=0.005):
        """
        Updates the neuron's weights based on this batch
        :param: X -- input matrix of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                learning_rate -- the learning rate (alpha in the formulas above)
        Nothing needs to be returned; just update the weights correctly
        using gradient descent.
        """
        n = len(y)
        y = np.array(y).reshape(-1, 1)
        z = X @ self.w.reshape(X.shape[1], 1) + self.b
        a = self.activate(z)  # elu_derivative expects the activation value
        grad = (a - y) * elu_derivative(a)
        self.w -= learning_rate * (X.T @ grad) / n
        self.b -= learning_rate * np.mean(grad)

    def fit(self, X, y, num_epochs=300):
        """
        Descend to the minimum of the loss
        :param: X -- matrix of objects of shape (n, m)
                y -- vector of correct answers of shape (n, 1)
                num_epochs -- number of training iterations
        :return: losses -- vector of loss values
        """
        if self.w is None:
            self.w = np.zeros((X.shape[1], 1))  # column of shape (m, 1)
            self.b = 0  # bias (a number)
        Loss_values = []  # loss values at the successive weight updates
        for i in range(num_epochs):
            # predictions with the current weights
            y_pred = self.forward_pass(X)
            # loss with the current weights
            Loss_values.append(Loss(y_pred, y))
            # update the weights based on where we were wrong
            self.backward_pass(X, y, y_pred)
        return Loss_values
```
***NOTE***: please keep checking the `.shape` of the matrices and vectors `self.w`, `X`, and `y` inside the class. Very often a bug is fixed by a transpose or a `.reshape()`. Always check what you are multiplying by what and what shape you expect at the output -- it really helps to avoid confusion.
"Apples and Pears":
```
data = pd.read_csv("./data/apples_pears.csv")
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show()
X = data.iloc[:, :2].values  # object-feature matrix
y = data['target'].values.reshape((-1, 1))  # classes (a column of zeros and ones)
```
Let's train the neuron with randomly initialized weights (set 10000 iterations):
```
%%time
neuron = NeuronELU(w=seed_random(10, X.shape[1], 1), b=seed_random(11, 1))
Loss_values = neuron.fit(X, y, num_epochs=10000) # num_epochs=10000
plt.figure(figsize=(10, 8))
plt.plot(Loss_values)
plt.title('Loss function', fontsize=15)
plt.xlabel('iteration number', fontsize=14)
plt.ylabel('$Loss(\hat{y}, y)$', fontsize=14)
plt.show()
```
**(for the test) Loss check:**
Print the sum of the first five and the last five loss values from training with num_epochs=10000, rounded to 4 decimal places:
Let's see how this neuron predicts:
```
plt.figure(figsize=(10, 8))
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5).ravel(), cmap='spring')
plt.title('Apples and pears', fontsize=15)
plt.xlabel('symmetry', fontsize=14)
plt.ylabel('yellowness', fontsize=14)
plt.show();
```
**Advantages of ELU:**
* differentiable (with an extra definition at zero)
* no vanishing-gradient problem, unlike the sigmoid
* no "dead neuron" problem, unlike ReLU
* more robust to noise (see the Stanford lectures)
**Possible drawbacks of ELU:**
* not very well centered around 0 (which may hurt convergence speed)
* computationally slower than ReLU and LeakyReLU
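These properties are easy to check numerically. A small sketch comparing the gradients of the sigmoid, ReLU, and ELU at a strongly negative pre-activation:

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    return float(x > 0)

def elu_grad(x, alpha=1.0):
    # for x <= 0 the gradient is alpha * e^x: small, but never exactly zero
    return 1.0 if x > 0 else alpha * np.exp(x)

x = -5.0
# sigmoid: gradient nearly vanishes; ReLU: exactly zero ("dead neuron");
# ELU: small but strictly positive
print(sigmoid_grad(x), relu_grad(x), elu_grad(x))
```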
---
And finally -- (almost) all the Pokémon:
<img src="http://cdn-images-1.medium.com/max/1600/1*DRKBmIlr7JowhSbqL6wngg.png">
Missing are `SeLU()` and `Swish()`. You can read about them here: [SeLU](https://arxiv.org/pdf/1706.02515.pdf), [Swish](https://arxiv.org/pdf/1710.05941.pdf).
`Tanh()` (the hyperbolic tangent) is used mainly in recurrent neural networks, while we decided not to cover `Maxout()` (again, because we haven't seen it used often, although good things are said about it).
---
Think these are all the activation functions? No -- almost any differentiable function (one that you believe will help training) can serve as an activation function. You can [find even more activation functions on Wikipedia](https://en.wikipedia.org/wiki/Activation_function).
<h3 style="text-align: center;"><b>Useful links</b></h3>
0). If your English allows, definitely read this Stanford article: http://cs231n.github.io/neural-networks-1/
1). A good article on activation functions: https://www.jeremyjordan.me/neural-networks-activation-functions/
2). [A video by Siraj Raval](https://www.youtube.com/watch?v=-7scQpLossT7uo)
3). A recent paper on activation functions. The activation $swish(x) = x\sigma (\beta x)$ is all the hype now: https://arxiv.org/pdf/1710.05941.pdf (incidentally, neural architecture search was used to some extent in its discovery)
4). SeLU has very interesting properties, proven with probability theory: https://arxiv.org/pdf/1706.02515.pdf (yes, that paper is 102 pages long)
5). [The list of activation functions on Wikipedia](https://en.wikipedia.org/wiki/Activation_function)
---
# Kalman Filters
In this lab you will:
- Estimate Moving Average
- Use Kalman Filters to calculate the mean and covariance of our time series
- Modify a Pairs trading function to make use of Kalman Filters
## What is a Kalman Filter?
The Kalman filter is an algorithm that uses noisy observations of a system over time to estimate the parameters of the system (some of which are unobservable) and predict future observations. At each time step, it makes a prediction, takes in a measurement, and updates itself based on how the prediction and measurement compare.
The algorithm is as follows:
1. Take as input a mathematical model of the system, i.e.
* the transition matrix, which tells us how the system evolves from one state to another. For instance, if we are modeling the movement of a car, then the next values of position and velocity can be computed from the previous ones using kinematic equations. Alternatively, if we have a system which is fairly stable, we might model its evolution as a random walk. If you want to read up on Kalman filters, note that this matrix is usually called $A$.
* the observation matrix, which tells us the next measurement we should expect given the predicted next state. If we are measuring the position of the car, we just extract the position values stored in the state. For a more complex example, consider estimating a linear regression model for the data. Then our state is the coefficients of the model, and we can predict the next measurement from the linear equation. This is denoted $H$.
* any control factors that affect the state transitions but are not part of the measurements. For instance, if our car were falling, gravity would be a control factor. If the noise does not have mean 0, it should be shifted over and the offset put into the control factors. The control factors are summarized in a matrix $B$ with time-varying control vector $u_t$, which give the offset $Bu_t$.
* covariance matrices of the transition noise (i.e. noise in the evolution of the system) and measurement noise, denoted $Q$ and $R$, respectively.
2. Take as input an initial estimate of the state of the system and the error of the estimate, $\mu_0$ and $\sigma_0$.
3. At each timestep:
* estimate the current state of the system $x_t$ using the transition matrix
* take as input new measurements $z_t$
* use the conditional probability of the measurements given the state, taking into account the uncertainties of the measurement and the state estimate, to update the estimated current state of the system $x_t$ and the covariance matrix of the estimate $P_t$
[This graphic](https://upload.wikimedia.org/wikipedia/commons/a/a5/Basic_concept_of_Kalman_filtering.svg) illustrates the procedure followed by the algorithm.
It's very important for the algorithm to keep track of the covariances of its estimates. This way, it can give us a more nuanced result than simply a point value when we ask for it, and it can use its confidence to decide how much to be influenced by new measurements during the update process. The more certain it is of its estimate of the state, the more skeptical it will be of measurements that disagree with the state.
By default, the errors are assumed to be normally distributed, and this assumption allows the algorithm to calculate precise confidence intervals. It can, however, be implemented for non-normal errors.
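The predict/update loop described above can be sketched in a few lines for a one-dimensional state. This is a minimal illustration of the algorithm, not the implementation used by pykalman:

```python
import numpy as np

def kalman_1d(zs, A=1.0, H=1.0, Q=1e-4, R=1.0, mu0=0.0, sigma0=1.0):
    """Minimal scalar Kalman filter; returns the filtered state means."""
    mu, P = mu0, sigma0
    means = []
    for z in zs:
        # Predict: propagate the state estimate and its variance through the model
        mu_pred = A * mu
        P_pred = A * P * A + Q
        # Update: blend prediction and measurement using the Kalman gain
        K = P_pred * H / (H * P_pred * H + R)
        mu = mu_pred + K * (z - H * mu_pred)
        P = (1.0 - K * H) * P_pred
        means.append(mu)
    return np.array(means)
```

Fed a constant measurement, the estimate moves halfway on the first step (prior and measurement variances are equal) and then converges to the measured value as confidence grows.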
## Install dependencies
```
!pip install pykalman
!pip install qq-training-wheels auquan_toolbox --upgrade
# Import a Kalman filter and other useful libraries
from pykalman import KalmanFilter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import poly1d
from backtester.dataSource.yahoo_data_source import YahooStockDataSource
from datetime import datetime
```
# Toy example: falling ball
Imagine we have a falling ball whose motion we are tracking with a camera. The state of the ball consists of its position and velocity. We know that we have the relationship $x_t = x_{t-1} + v_{t-1}\tau - \frac{1}{2} g \tau^2$, where $\tau$ is the time (in seconds) elapsed between $t-1$ and $t$ and $g$ is gravitational acceleration. Meanwhile, our camera can tell us the position of the ball every second, but we know from the manufacturer that the camera accuracy, translated into the position of the ball, implies variance in the position estimate of about 3 meters.
In order to use a Kalman filter, we need to give it transition and observation matrices, transition and observation covariance matrices, and the initial state. The state of the system is (position, velocity), so it follows the transition matrix
$$ \left( \begin{array}{cc}
1 & \tau \\
0 & 1 \end{array} \right) $$
with offset $(-\tau^2 \cdot g/2, -\tau\cdot g)$. The observation matrix just extracts the position coordinate, (1 0), since we are measuring position. We know that the observation variance is 3, and the transition covariance is 0 since we will be simulating the data the same way we specified our model. For the initial state, let's feed our model something bogus like (30, 10) and see how our system evolves.
```
tau = 0.1
# Set up the filter
kf = KalmanFilter(n_dim_obs=1, n_dim_state=2, # position is 1-dimensional, (x,v) is 2-dimensional
initial_state_mean=[30,10],
initial_state_covariance=np.eye(2),
transition_matrices=[[1,tau], [0,1]],
observation_matrices=[[1,0]],
observation_covariance=3,
transition_covariance=np.zeros((2,2)),
transition_offsets=[-4.9*tau**2, -9.8*tau])
# Create a simulation of a ball falling for 40 units of time (each of length tau)
times = np.arange(40)
actual = -4.9*tau**2*times**2
# Simulate the noisy camera data
sim = actual + 3*np.random.randn(40)
# Run filter on camera data
state_means, state_covs = kf.filter(sim)
plt.figure(figsize=(15,7))
plt.plot(times, state_means[:,0])
plt.plot(times, sim)
plt.plot(times, actual)
plt.legend(['Filter estimate', 'Camera data', 'Actual'])
plt.xlabel('Time')
plt.ylabel('Height');
```
At each point in time we plot the state estimate <i>after</i> accounting for the most recent measurement, which is why we are not at position 30 at time 0. The filter's attentiveness to the measurements allows it to correct for the initial bogus state we gave it. Then, by weighing its model and knowledge of the physical laws against new measurements, it is able to filter out much of the noise in the camera data. Meanwhile the confidence in the estimate increases with time, as shown by the graph below:
```
# Plot variances of x and v, extracting the appropriate values from the covariance matrix
plt.figure(figsize=(15,7))
plt.plot(times, state_covs[:,0,0])
plt.plot(times, state_covs[:,1,1])
plt.legend(['Var(x)', 'Var(v)'])
plt.ylabel('Variance')
plt.xlabel('Time');
```
The Kalman filter can also do <i>smoothing</i>, which takes in all of the input data at once and then constructs its best guess for the state of the system in each period post factum. That is, it does not provide online, running estimates, but instead uses all of the data to estimate the historical state, which is useful if we only want to use the data after we have collected all of it.
```
# Use smoothing to estimate what the state of the system has been
smoothed_state_means, _ = kf.smooth(sim)
# Plot results
plt.figure(figsize=(15,7))
plt.plot(times, smoothed_state_means[:,0])
plt.plot(times, sim)
plt.plot(times, actual)
plt.legend(['Smoothed estimate', 'Camera data', 'Actual'])
plt.xlabel('Time')
plt.ylabel('Height');
```
# Example: Estimating Moving Average
Because the Kalman filter updates its estimates at every time step and tends to weigh recent observations more than older ones, it can be used to estimate rolling parameters of the data. When using a Kalman filter, there's no window length that we need to specify. This is useful for computing the moving average or for smoothing out estimates of other quantities.
Below, we'll use both a Kalman filter and an n-day moving average to estimate the rolling mean of a dataset. We construct the inputs to the Kalman filter as follows:
* The mean is the model's guess for the mean of the distribution from which measurements are drawn. This means our prediction of the next value is equal to our estimate of the mean.
* Hopefully the mean describes our observations well, hence it shouldn't change significantly when we add an observation. This implies we can assume that it evolves as a random walk with a small error term. We set the transition matrix to 1 and the transition covariance matrix to a small number.
* We assume that the observations have variance 1 around the rolling mean (1 is chosen arbitrarily).
* Our initial guess for the mean is 0, but the filter realizes that that is incorrect and adjusts.
```
from pykalman import KalmanFilter
from backtester.dataSource.yahoo_data_source import YahooStockDataSource
# Load pricing data for a security
startDateStr = '2012/12/31'
endDateStr = '2017/12/31'
cachedFolderName = './yahooData/'
dataSetId = 'testPairsTrading'
instrumentIds = ['SPY','MSFT','ADBE']
ds = YahooStockDataSource(cachedFolderName=cachedFolderName,
dataSetId=dataSetId,
instrumentIds=instrumentIds,
startDateStr=startDateStr,
endDateStr=endDateStr,
event='history')
# Get adjusted closing price
data = ds.getBookDataByFeature()['adjClose']
# Data for Adobe
S1 = data['ADBE']
# Data for Microsoft
S2 = data['MSFT']
# Take ratio of the adjusted closing prices
x = S1/S2
# Construct a Kalman filter
kf = KalmanFilter(transition_matrices = [1],
observation_matrices = [1],
initial_state_mean = 0,
initial_state_covariance = 1,
observation_covariance=1,
transition_covariance=.01)
# Use the observed values of the price to get a rolling mean
state_means, _ = kf.filter(x.values)
state_means = pd.Series(state_means.flatten(), index=x.index)
# Compute the rolling mean with various lookback windows
mean30 = x.rolling(window = 30).mean()
mean60 = x.rolling(window = 60).mean()
mean90 = x.rolling(window = 90).mean()
# Plot original data and estimated mean
plt.figure(figsize=(15,7))
plt.plot(state_means[60:], '-b', lw=2, )
plt.plot(x[60:],'-g',lw=1.5)
plt.plot(mean30[60:], 'm', lw=1)
plt.plot(mean60[60:], 'y', lw=1)
plt.plot(mean90[60:], 'c', lw=1)
plt.title('Kalman filter estimate of average')
plt.legend(['Kalman Estimate', 'X', '30-day Moving Average', '60-day Moving Average','90-day Moving Average'])
plt.xlabel('Day')
plt.ylabel('Price');
```
### Observations
As you can see, the estimate from the Kalman filter usually falls somewhere between the 30-day and 60-day moving averages. This could be because the filter updates its knowledge of the world based on the most recent data. The advantage of the Kalman filter is that we don't need to select a window length: it makes predictions based on the underlying model (whose parameters we set) and the data itself. We do open ourselves up to overfitting with some of the initialization parameters for the filter, but those are slightly easier to define objectively. There's no free lunch and we can't eliminate overfitting, but a Kalman filter is more rigorous than a moving average and generally better.
Another interesting application of Kalman Filters, Beta Estimation for Linear Regression can be found here [Dr. Aidan O'Mahony's blog.](http://www.thealgoengineer.com/2014/online_linear_regression_kalman_filter/)
We'll be using Kalman filters for pairs trading in the subsequent notebook. Make sure you run the examples given here with various hyperparameters for the underlying Kalman filter model, to get comfortable with it and develop a better understanding in the process. For example, you can try out the following:
1. Use multi-dimensional transition matrices, so as to use more past information for making predictions at each point
2. Try different values of the observation and transition covariance
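For instance, suggestion 1 might look like the following sketch -- a hypothetical local-level-plus-trend model in which the state also tracks the recent drift of the series (the covariance values here are arbitrary starting points, not tuned):

```python
from pykalman import KalmanFilter
import numpy as np

# Hypothetical 2-dimensional state: (mean, trend).
kf_trend = KalmanFilter(
    transition_matrices=[[1, 1],    # mean_{t+1} = mean_t + trend_t
                         [0, 1]],   # trend_{t+1} = trend_t
    observation_matrices=[[1, 0]],  # we only observe the mean
    initial_state_mean=[0, 0],
    initial_state_covariance=np.eye(2),
    observation_covariance=1.0,
    transition_covariance=0.01 * np.eye(2))

# state_means, state_covs = kf_trend.filter(x.values)
```

The filtered mean would then be `state_means[:, 0]`, analogous to the one-dimensional case above.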
## Example: Pairs Trading
In the previous lab we made use of a 60-day window for calculating the mean and standard deviation of our time series. Now we'll replace that window with Kalman filters. Let's use Kalman filters to calculate the mean and covariance of our time series.
### Let's get the same data that we used in the previous notebook
```
startDateStr = '2007/12/01'
endDateStr = '2017/12/01'
cachedFolderName = 'yahooData/'
dataSetId = 'testPairsTrading2'
instrumentIds = ['ADBE','MSFT']
ds = YahooStockDataSource(cachedFolderName=cachedFolderName,
dataSetId=dataSetId,
instrumentIds=instrumentIds,
startDateStr=startDateStr,
endDateStr=endDateStr,
event='history')
data = ds.getBookDataByFeature()['adjClose']
```
### A quick visualization of error and standard deviations
```
S1, S2 = data['ADBE'].iloc[:1762], data['MSFT'].iloc[:1762]
ratios = S1/S2
kf = KalmanFilter(transition_matrices = [1],
observation_matrices = [1],
initial_state_mean = 0,
initial_state_covariance = 1,
observation_covariance=1,
transition_covariance=.0001)
state_means, state_cov = kf.filter(ratios.values)
state_means, state_std = state_means.squeeze(), np.std(state_cov.squeeze())
plt.figure(figsize=(15,7))
plt.plot(ratios.values - state_means, 'm', lw=1)
plt.plot(np.sqrt(state_cov.squeeze()), 'y', lw=1)
plt.plot(-np.sqrt(state_cov.squeeze()), 'c', lw=1)
plt.title('Kalman filter estimate')
plt.legend(['Error: real_value - mean', 'std', '-std'])
plt.xlabel('Day')
plt.ylabel('Value');
```
We'll be using the z-score in the same way as before. Our strategy is to go long or short only in the areas where |error| is greater than one standard deviation. Since a single day's price can be noisy, we'll use a 5-day average for a given day's price.
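As a hint for the exercise below, the z-score construction itself might look like this sketch. `kalman_zscore` is a hypothetical helper; `state_means` and `state_std` are assumed to come from a Kalman filter as in the cells above:

```python
import numpy as np
import pandas as pd

def kalman_zscore(ratios, state_means, state_std, window=5):
    # Smooth the ratio with a short moving average, then standardize it
    # against the Kalman estimate of the mean.
    ma = ratios.rolling(window=window, center=False).mean()
    return (ma - state_means) / state_std
```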
#### Exercise: modify our Pairs trading function from the previous notebook to make use of Kalman Filter while keeping the same logic for carrying out trades
```
def trade(S1, S2):
ratios = S1/S2
##TODO: Get the state_mean and state_std using Kalman Filter, consult the above cells for reference
state_means, state_cov = None, None
if state_means is None or state_cov is None:
raise NotImplementedError
window = 5
ma = ratios.rolling(window=window,
center=False).mean()
zscore = (ma - state_means)/state_std
# Simulate trading
# Start with no money and no positions
money = 0
countS1 = 0
countS2 = 0
for i in range(len(ratios)):
# Sell short if the z-score is > 1
if zscore[i] > 1:
money += S1[i] - S2[i] * ratios[i]
countS1 -= 1
countS2 += ratios[i]
        # Buy long if the z-score is < -1
elif zscore[i] < -1:
money -= S1[i] - S2[i] * ratios[i]
countS1 += 1
countS2 -= ratios[i]
# Clear positions if the z-score between -.5 and .5
elif abs(zscore[i]) < 0.5:
money += countS1*S1[i] + S2[i] * countS2
countS1 = 0
countS2 = 0
# print('Z-score: '+ str(zscore[i]), countS1, countS2, S1[i] , S2[i])
return money
## Run the function to trade
trade(data['ADBE'].iloc[:1762], data['MSFT'].iloc[:1762])
```
### Observations
1. Is your strategy profitable?
2. You can try changing the hyperparameters of the Kalman Filter and see how it affects the PnL.
3. The results might not always be better than computing statistics over a moving window.
---
# GitHub Issue [#6](https://github.com/sassoftware/sasoptpy/issues/6)
```
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
import pandas as pd
import saspy
s = saspy.SASsession(cfgname='winlocal')
import sasoptpy as so
model = so.Model(name="Test Model", session=s)
x_data = pd.DataFrame([['x1',2],['x2',3],['x3',4]],columns=['name','value']).set_index(['name'])
XS=x_data.index.tolist()
value=x_data['value']
x=model.add_variables(XS, vartype=so.CONT, lb=0, name='x')
model.add_constraints((x[i] <= 5*value[i] for i in XS ),name='c010')
model.add_constraints((x[i] >= 20 for i in XS ),name='c020')
Total = so.quick_sum ( value[i]*x[i] for i in XS )
model.set_objective(Total, sense=so.MAX, name='TotalValue')
optmodeldata = model.to_optmodel()
print(optmodeldata)
status=model.solve(limit_names=True)
print(status)
model = so.Model(name="Test Model", session=s)
x_data = pd.DataFrame([['x-------------------------------1',2],['x-2',3],['x-3',4]],columns=['name','value']).set_index(['name'])
XS=x_data.index.tolist()
value=x_data['value']
x=model.add_variables(XS, vartype=so.CONT, lb=0, name='x')
model.add_constraints((x[i] <= 5*value[i] for i in XS ),name='c010')
model.add_constraints((x[i] >= 20 for i in XS ),name='c020')
Total = so.quick_sum ( value[i]*x[i] for i in XS )
model.set_objective(Total, sense=so.MAX, name='TotalValue')
optmodeldata = model.to_optmodel()
print(optmodeldata)
status=model.solve(limit_names=True)
print(status)
status=model.solve(limit_names=True,options={'with':'lp', 'IIS':True})
from sasoptpy.interface import SASMediator
sm = SASMediator(model, s)
sm.solve(limit_names=True,options={'with':'lp', 'IIS':True})
sm.conversion
from sasoptpy.abstract import LiteralStatement as ls
model = so.Model(name="Test Model", session=s)
x_data = pd.DataFrame([['x-------------------------------1',2],['x-2',3],['x-3',4]],columns=['name','value']).set_index(['name'])
XS=x_data.index.tolist()
value=x_data['value']
x=model.add_variables(XS, vartype=so.CONT, lb=0, name='x')
model.add_constraints((x[i] <= 5*value[i] for i in XS ),name='c010')
model.add_constraints((x[i] >= 20 for i in XS ),name='c020')
Total = so.quick_sum ( value[i]*x[i] for i in XS )
model.set_objective(Total, sense=so.MAX, name='TotalValue')
model.add_statement(ls("ods output expand=iis_output;"))
model.add_postsolve_statement(ls("expand / IIS;"))
model.add_postsolve_statement(ls("create data con_status from [j] = {1.._NCON_} con=_CON_.name value=_CON_.body status=_CON_.status;"))
sm = SASMediator(model, s)
sm.solve(limit_names=True,options={'with':'lp', 'IIS':True}, verbose=True)
sm.conversion
s.list_tables('WORK')
s.sd2df('IIS_OUTPUT')
s.sd2df('CON_STATUS')
```
---
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
<font style="font-size:28px;" align="left"><b> Exercises for Quantum Correlation </b></font>
<br>
_prepared by Abuzer Yakaryilmaz_
<br><br>
Run the following cell to open the exercises.
<i><a href="https://www.mathjax.org" target="_blank">MathJax</a> is used to express mathematical expressions and it requires internet connection.</i>
<hr>
```
import os, webbrowser
webbrowser.open(os.path.abspath("Exercises_Quantum_Correlation.html"))
```
---
```
from IPython.display import display, HTML
import pandas as pd
from os import listdir
from os.path import isfile, join
from pprint import pprint
import json
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import gridspec
from matplotlib.font_manager import FontProperties
import numpy as np
sns.set(style="ticks")
plt.rcParams['axes.facecolor']='white'
task_order = ['Length', 'WordContent', 'Depth', 'TopConstituents', 'BigramShift', 'Tense', 'SubjNumber', 'ObjNumber', 'OddManOut', 'CoordinationInversion']
model_order = ['bert-base-uncased', 'bert-large-uncased', 'openai-gpt', 'gpt2', 'transfo-xl-wt103']
dict_task = {0:'Length', 1:'WordContent', 2:'Depth', 3:'TopConstituents', 4:'BigramShift', 5:'Tense', 6:'SubjNumber', 7:'ObjNumber', 8:'OddManOut', 9:'CoordinationInversion'}
def get_results(dir_path='./results/mlp_results'):
columns = ['data_path', 'cache_path', 'result_path', 'batch_size', 'cbatch_size', 'nhid', 'optim', 'kfold', 'tenacity', 'usepytorch', 'epoch_size', 'device']
filenames = [f for f in listdir(dir_path) if isfile(join(dir_path, f)) if '.json' in f]
list_result = []
for filename in filenames:
with open(join(dir_path, filename), 'r') as infile:
# print(filename)
results = json.load(infile)
for key, result in results.items():
list_result.append(result)
df = pd.DataFrame(list_result)[['acc', 'head', 'layer', 'task', 'model_name']]
for column in columns:
try:
df = df.drop(columns=column)
except:
pass
return df
def get_multi_head_results(dir_path='./top_head_wise_results'):
columns = ['data_path', 'cache_path', 'result_path', 'batch_size', 'cbatch_size', 'nhid', 'optim', 'kfold', 'tenacity', 'usepytorch', 'epoch_size', 'device']
filenames = [f for f in listdir(dir_path) if isfile(join(dir_path, f)) if '.json' in f]
list_result = []
for filename in filenames:
with open(join(dir_path, filename), 'r') as infile:
# print(filename)
results = json.load(infile)
for key, result in results.items():
list_result.append(result)
df = pd.DataFrame(list_result)[['acc', 'num_head', 'task', 'model_name']]
for column in columns:
try:
df = df.drop(columns=column)
except:
pass
return df
# Find last layer performance
result_dir_path = '../../results'
df = get_results(dir_path=join(result_dir_path, 'linear_results'))
df = df.loc[df['head'] == -1]
df_base = df.loc[(df['layer'] == 11) & (df['model_name'] == 'bert-base-uncased')]
df_large = df.loc[(df['layer'] == 23) & (df['model_name'] == 'bert-large-uncased')]
df_gpt = df.loc[(df['layer'] == 11) & (df['model_name'] == 'openai-gpt')]
df_gpt2 = df.loc[(df['layer'] == 11) & (df['model_name'] == 'gpt2')]
df_xl = df.loc[(df['layer'] == 17) & (df['model_name'] == 'transfo-xl-wt103')]
df_last_linear = pd.concat([df_base, df_large, df_gpt, df_gpt2, df_xl])
df_last_linear = df_last_linear.set_index(['task', 'model_name'])
df_last_linear = df_last_linear.sort_index()
df_last_linear['last_linear_layer'] = df_last_linear['acc']
df_last_linear = df_last_linear.drop(columns=['acc']).round(1)
# Find best layer performance
df = get_results(dir_path=join(result_dir_path, 'linear_results'))
df = df.loc[df['head'] == -1]
df = pd.DataFrame(df.groupby(['task', 'model_name'])['acc'].max())
df['best_linear_layer'] = df['acc']
df_best_linear = df.drop(columns=['acc'])
# display(df)
df_last_linear = pd.concat([df_base, df_large, df_gpt, df_gpt2, df_xl])
df_last_linear = df_last_linear.set_index(['task', 'model_name'])
df_last_linear = df_last_linear.sort_index()
df_last_linear['last_linear_layer'] = df_last_linear['acc']
df_last_linear = df_last_linear.drop(columns=['acc'])
# Find top n head performance
df = get_multi_head_results(dir_path=join(result_dir_path, './top_head_wise_results'))
df_base = df.loc[(df['num_head'] == 12) & (df['model_name'] == 'bert-base-uncased')]
df_large = df.loc[(df['num_head'] == 16) & (df['model_name'] == 'bert-large-uncased')]
df_gpt = df.loc[(df['num_head'] == 12) & (df['model_name'] == 'openai-gpt')]
df_gpt2 = df.loc[(df['num_head'] == 12) & (df['model_name'] == 'gpt2')]
df_xl = df.loc[(df['num_head'] == 16) & (df['model_name'] == 'transfo-xl-wt103')]
df = pd.concat([df_base, df_large, df_gpt, df_gpt2, df_xl])
df = df.set_index(['task', 'model_name'])
df = df.sort_index()
df['top_n_head'] = df['acc']
df_top_n_head = df.drop(columns=['acc'])
result = pd.concat([df_last_linear, df_best_linear, df_top_n_head], axis=1)
result = result.drop(columns=['head', 'layer', 'num_head'])
result['enhancement'] = round((result['top_n_head'] - result['best_linear_layer']) / result['best_linear_layer'] * 100, 2)
result['top_n_head2'] = ''
for i, row in result.iterrows():
result.at[i, 'top_n_head2'] = '{:1.1f} ({:1.1f})'.format(row[2], row[3])
result['top_n_head'] = result['top_n_head2']
result = result.drop(columns=['enhancement', 'top_n_head2'])
# result = result.dropna()
result = result.reindex(task_order, level=0)
result = result.reindex(model_order, level=1)
display(result.round(1))
result.to_csv('embedding_reconstruction.csv')
```
---
```
%load_ext autoreload
%autoreload 2
import io, math, os, sys
from base64 import b64decode
from pathlib import Path
from IPython.core.display import HTML
import matplotlib.pyplot as plt
import numpy as np
import PIL
# Install daltonlens if necessary
try:
from daltonlens import convert, simulate, utils
except ImportError:
!pip install -q daltonlens
from daltonlens import convert, simulate, utils
# Uncomment to get interactive plots.
# %matplotlib notebook
```
# Introduction
The goal is to generate the precomputed matrices / parameters needed by libDaltonLens or DaltonLens desktop.
# Brettel 1997 with sRGB
```
print (utils.array_to_C_decl('LMS_from_linearRGB', convert.LMSModel_sRGB_SmithPokorny75().LMS_from_linearRGB))
print (utils.array_to_C_decl('linearRGB_from_LMS', convert.LMSModel_sRGB_SmithPokorny75().linearRGB_from_LMS))
simulator = simulate.Simulator_Brettel1997(convert.LMSModel_sRGB_SmithPokorny75(), use_white_as_neutral=True)
simulator.dumpPrecomputedValues = True
np.set_printoptions(precision=4, suppress=True)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.PROTAN, severity=1.0)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.DEUTAN, severity=1.0)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.TRITAN, severity=1.0)
print (utils.array_to_C_decl('brettel1997_tritan_normalSepPlaneLMS', simulator.n_sep_plane))
print (utils.array_to_C_decl('brettel1997_tritan_H1', simulator.H1))
print (utils.array_to_C_decl('brettel1997_tritan_H2', simulator.H2))
```
# Viénot 1999 with sRGB
```
simulator = simulate.Simulator_Vienot1999(convert.LMSModel_sRGB_SmithPokorny75())
simulator.dumpPrecomputedValues = True
np.set_printoptions(precision=4, suppress=True)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.PROTAN, severity=1.0)
print (utils.array_to_C_decl('vienot_projection_protan', simulator.lms_projection_matrix))
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.DEUTAN, severity=1.0)
print (utils.array_to_C_decl('vienot_projection_deutan', simulator.lms_projection_matrix))
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.TRITAN, severity=1.0)
print (utils.array_to_C_decl('vienot_projection_tritan', simulator.lms_projection_matrix))
```
# Brettel with Vischeck parameters
```
simulator = simulate.Simulator_Vischeck()
simulator.dumpPrecomputedValues = True
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.PROTAN, severity=1.0)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.DEUTAN, severity=1.0)
simulator.simulate_cvd(np.zeros((1,1,3), dtype=np.uint8), simulate.Deficiency.TRITAN, severity=1.0)
```
---
# MOEA tutorial
In the previous assignments, we have been using sampling to investigate the uncertainty space and the lever space. However, we can also use optimization algorithms to search through these spaces. Most often, you would use optimization to search through the lever space in order to find promising policies. However, we can also use optimization to search through the uncertainty space, for example to find a worst-case scenario. In this assignment, we are going through the basics of using the optimization functionality of the workbench.
For optimization, the ema_workbench relies on a library called platypus-opt. *platypus-opt* is python package developed by David Hadka (http://platypus.readthedocs.io/en/latest/) for multi-objective optimization. It allows an explicit specification of the problem components (levers, objectives, constraints). The package includes several multi-objective evolutionary algorithms, therefore the users can choose the algorithm they wish to use.
You can use pip to install it:
```
pip install platypus-opt
```
Start by importing the lake model we have used in previous weeks and connecting it to the workbench. However, we need to make one change: for each outcome of interest, we need to specify whether we want to maximize or minimize it, using the `kind` kwarg. `max_P` should be minimized, while all other outcomes are to be maximized. As a further simplification for this tutorial, we ignore the inertia objective by not setting its `kind` kwarg.
```
from lakemodel_function import lake_problem
from ema_workbench import (Model, RealParameter, ScalarOutcome,
MultiprocessingEvaluator, ema_logging,
Constant)
ema_logging.log_to_stderr(ema_logging.INFO)
#instantiate the model
lake_model = Model('lakeproblem', function=lake_problem)
lake_model.time_horizon = 100 # used to specify the number of timesteps
#specify uncertainties
lake_model.uncertainties = [RealParameter('mean', 0.01, 0.05),
RealParameter('stdev', 0.001, 0.005),
RealParameter('b', 0.1, 0.45),
RealParameter('q', 2.0, 4.5),
RealParameter('delta', 0.93, 0.99)]
# set levers, one for each time step
lake_model.levers = [RealParameter(str(i), 0, 0.1) for i in
range(lake_model.time_horizon)] # we use time_horizon here
#specify outcomes
lake_model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE),
ScalarOutcome('utility', kind=ScalarOutcome.MAXIMIZE),
ScalarOutcome('inertia'),
ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE)]
lake_model.constants = [Constant('alpha', 0.41),
                        Constant('reps', 150)]
```
Instead of using `perform_experiments`, we will be using `optimize`. There are several kwargs that we need to provide, so let's go through them:
* **algorithm**; We can specify which algorithm we want to use. The default is $\epsilon$-NSGA2, a state of the art many-objective evolutionary algorithm. We can use any of the other algorithms that come with platypus-opt, or the GenerationalBorg algorithm that comes with the workbench. For now, we won't change this.
* **nfe**; the number of function evaluations, this is to be determined by analyzing whether the algorithm has converged
* **searchover**; are we optimizing over the uncertainties or the levers? Most often we will be searching over the levers, so we don't generally need to change this.
* **reference**; If we are searching over levers, what values should we assume for the uncertainties? Reference allows us to specify this. If searchover is set to levers, reference should be a `Scenario` or None, while if searchover is uncertainties, reference should be a `Policy` or None. In case of None, the default values of the underlying model are used unchanged.
* **constraints**; see below
* **epsilons**; many state-of-the-art MOEAs rely on epsilon dominance. Basically, a grid is imposed on the objective space, and per grid cell a single solution is maintained. The granularity of the grid is specified through the epsilon values, which should be a list or array with a length equal to the number of outcomes. Below, we will see the impact of changing the epsilon values.
* **convergence**; In order to track whether a MOEA has converged to the optimum solutions, we use convergence metrics. The workbench offers epsilon progress and hypervolume as two often used metrics for this. We will explore these below.
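To make epsilon dominance concrete, here is a minimal sketch (an illustration only, not the workbench's internal implementation) of how solutions map to grid cells; two solutions landing in the same cell compete for a single slot in the archive:

```python
import math

def epsilon_cell(objectives, epsilons):
    """Map a solution's objective values to its epsilon-grid cell index."""
    return tuple(math.floor(v / e) for v, e in zip(objectives, epsilons))

# with epsilons [0.25, 0.1, 0.1], these two solutions share a grid cell,
# so the archive keeps only one of them
cell_a = epsilon_cell([2.1, 0.55, 0.92], [0.25, 0.1, 0.1])
cell_b = epsilon_cell([2.2, 0.58, 0.95], [0.25, 0.1, 0.1])
print(cell_a, cell_a == cell_b)  # (8, 5, 9) True
```

Larger epsilon values merge more solutions into the same cell, which is why the archive shrinks when we increase them below.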
Let's start with a simple optimization using 5000 nfe, and 0.25, 0.1, and 0.1 as epsilon values.
```
with MultiprocessingEvaluator(lake_model) as evaluator:
results = evaluator.optimize(nfe=5000, epsilons=[0.25, 0.1, 0.1])
```
Since we are dealing with 3 outcomes of interest, we can still visualize our results in a 3d scatter plot. Alternatively, we can visualize it using a so-called parallel coordinate plot. In a parallel coordinate plot, the dimensions are visualized side by side; a line connecting the dimensions is a single point in the multidimensional space. For more than 3 dimensions, parallel coordinate plots are preferred over 3d scatter plots with additional visual encodings for the other dimensions. The workbench has support for parallel coordinate plots using `ema_workbench.analysis.parcoords`.
```
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
outcomes = results.loc[:, ['max_P', 'utility', 'reliability']]
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(outcomes.max_P, outcomes.utility, outcomes.reliability)
ax.set_xlabel('max. P')
ax.set_ylabel('utility')
ax.set_zlabel('reliability')
plt.show()
from ema_workbench.analysis import parcoords
limits = parcoords.get_limits(outcomes)
axes = parcoords.ParallelAxes(limits)
axes.plot(outcomes)
# we invert this axis so direction of desirability is the same
axes.invert_axis('max_P')
plt.show()
```
As you can see, the parcoords figure is easier to interpret once you have learned how to read it. We can see a clear tradeoff between max_P and reliability on the one hand, and utility on the other, indicated by the crossing lines between these respective dimensions.
For the remainder of this tutorial, we will be using a four-objective formulation of the problem: we add the inertia objective and maximize it.
```
#specify outcomes
lake_model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE),
ScalarOutcome('utility', kind=ScalarOutcome.MAXIMIZE),
ScalarOutcome('inertia', kind=ScalarOutcome.MAXIMIZE),
ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE)]
```
## Exploring alternative epsilon values
Let's rerun the optimization, but with different epsilon values. Use \[0.5, 0.5, 0.5, 0.5\]
```
with MultiprocessingEvaluator(lake_model) as evaluator:
results = evaluator.optimize(nfe=5000, epsilons=[0.5, 0.5, 0.5, 0.5])
outcomes = results.loc[:, ['max_P', 'utility', 'reliability', 'inertia']]
limits = parcoords.get_limits(outcomes)
axes = parcoords.ParallelAxes(limits)
axes.plot(outcomes)
# we invert this axis so direction of desirability is the same
axes.invert_axis('max_P')
plt.show()
```
We see that by making our epsilons higher, we are coarsening the grid, and thus reducing the number of solutions we find. Let's test this by making our epsilons smaller; we now expect to find more solutions. Let's use \[0.125, 0.05, 0.05, 0.05\]
```
with MultiprocessingEvaluator(lake_model) as evaluator:
results = evaluator.optimize(nfe=5000, epsilons=[0.125, 0.05, 0.05, 0.05])
outcomes = results.loc[:, ['max_P', 'utility', 'reliability', 'inertia']]
limits = parcoords.get_limits(outcomes)
axes = parcoords.ParallelAxes(limits)
axes.plot(outcomes)
# we invert this axis so direction of desirability is the same
axes.invert_axis('max_P')
plt.show()
```
And as expected, we now have many more solutions. Selecting appropriate epsilon values is tricky. It depends on case-specific concerns (with what granularity do we want to find solutions?) as well as runtime considerations. The lower the epsilon values, the more solutions will be maintained in the Pareto set. Given how MOEAs work, this slows down the optimization.
## Assessing convergence
Next to selecting appropriate epsilon values, a second key issue is assessing convergence. In the foregoing, we have been running the MOEA for 5000 function evaluations. Is this sufficient? Has the algorithm converged? We have no idea. So, how can we add convergence assessment?
There exists a variety of metrics for assessing convergence of MOEAs. The workbench supports epsilon progress and hypervolume. Epsilon progress measures how often a solution in a new grid cell of the epsilon-gridded output space is found. Early on, solutions in new grid cells are found quite frequently. Once the algorithm starts to converge, progress becomes more difficult and thus epsilon progress starts to stabilize. Hypervolume is a measure of how much of the objective space is covered by a given set of non-dominated solutions: the higher the hypervolume, the larger the covered portion of the objective space. Again, hypervolume grows quickly early on and starts to stabilize once the algorithm is converging. For a more elaborate description, have a look at [this blog](https://waterprogramming.wordpress.com/tag/hypervolume/).
Since hypervolume requires specifying the objective space within which we want to calculate the volume, we need to know this space. Sometimes it is known a priori: for example, in the lake problem, reliability is scaled between 0 and 1. In contrast, the bounds on max_P are not known up front. To help with this, we can introduce a constraint saying that max_P must be below a particular threshold.
```
from ema_workbench.em_framework.optimization import (HyperVolume,
EpsilonProgress)
from ema_workbench import Constraint
#specify outcomes
lake_model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE,
expected_range=(0,5)),
ScalarOutcome('utility', kind=ScalarOutcome.MAXIMIZE,
expected_range=(0,2)),
ScalarOutcome('inertia', kind=ScalarOutcome.MAXIMIZE,
expected_range=(0,1)),
ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE,
expected_range=(0,1))]
convergence_metrics = [HyperVolume.from_outcomes(lake_model.outcomes),
EpsilonProgress()]
constraints = [Constraint("max pollution", outcome_names="max_P",
function=lambda x:max(0, x-5))]
with MultiprocessingEvaluator(lake_model) as evaluator:
results, convergence = evaluator.optimize(nfe=5000, searchover='levers',
epsilons=[0.125, 0.05, 0.01, 0.01],
convergence=convergence_metrics,
constraints=constraints)
fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, figsize=(8,4))
ax1.plot(convergence.nfe, convergence.epsilon_progress)
ax1.set_ylabel('$\epsilon$-progress')
ax2.plot(convergence.nfe, convergence.hypervolume)
ax2.set_ylabel('hypervolume')
ax1.set_xlabel('number of function evaluations')
ax2.set_xlabel('number of function evaluations')
plt.show()
```
If we look at the above plots, we can see that neither hypervolume nor $\epsilon$-progress has stabilized. 5000 function evaluations is clearly not sufficient. Let's go to another extreme: 100,000. What happens in this case?
```
with MultiprocessingEvaluator(lake_model) as evaluator:
results, convergence = evaluator.optimize(nfe=100000, searchover='levers',
epsilons=[0.125, 0.05, 0.01, 0.01],
convergence=convergence_metrics,
                                               constraints=constraints)
fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, figsize=(8,4))
ax1.plot(convergence.nfe, convergence.epsilon_progress)
ax1.set_ylabel('$\epsilon$-progress')
ax2.plot(convergence.nfe, convergence.hypervolume)
ax2.set_ylabel('hypervolume')
ax1.set_xlabel('number of function evaluations')
ax2.set_xlabel('number of function evaluations')
plt.show()
```
The runtime of this analysis has been substantial. Still, looking at the convergence graphs, hypervolume has more or less stabilized, while $\epsilon$-progress is only starting to stabilize. This could be an argument for running the algorithm even longer (say 250,000 nfe). Establishing the number of nfe is generally a form of trial and error.
# The role of stochasticity
MOEAs use stochasticity in crossover and mutation. Thus, the specific set of results will vary from one run of the algorithm to the next. Analogous to how you deal with stochasticity in discrete event models, it is best practice to run an MOEA multiple times using different random seeds. Next, you combine the results from the different runs into a combined Pareto-approximate set.
```
import pandas as pd
import seaborn as sns

all_results = []
with MultiprocessingEvaluator(lake_model) as evaluator:
for rep in range(5):
        # 5000 nfe is clearly way too low, given the convergence
        # analysis above; this is only for demonstration purposes
results = evaluator.optimize(nfe=5000, searchover='levers',
epsilons=[0.125, 0.05, 0.01, 0.01],
constraints=constraints)
all_results.append(results)
limits = pd.DataFrame([[0,0,0,0],[5,2,1,1]], columns=['max_P', 'utility', 'reliability', 'inertia'])
axes = parcoords.ParallelAxes(limits)
for i, (result, color) in enumerate(zip(all_results, sns.color_palette())):
outcomes = result.loc[:, ['max_P', 'utility', 'reliability', 'inertia']]
axes.plot(outcomes, color=color, label='results {}'.format(i))
# we invert this axis so direction of desirability is the same
axes.invert_axis('max_P')
axes.legend()
plt.show()
```
# Using an alternative optimization algorithm
In this exercise, we recommend using Platypus with the $\epsilon$-NSGAII algorithm, since it has been shown to outperform many MOEAs. For other algorithms, see the documentation of Platypus. For a comparison, you can have a look at [Reed et al (2013)](http://dx.doi.org/10.1016/j.advwatres.2012.01.005).
```
from ema_workbench.em_framework.optimization import GenerationalBorg
with MultiprocessingEvaluator(lake_model) as evaluator:
results, convergence = evaluator.optimize(nfe=50000, searchover='levers',
epsilons=[0.125, 0.05, 0.01, 0.01],
convergence=convergence_metrics,
constraints=constraints,
algorithm=GenerationalBorg,
logging_freq=50)
fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=True, figsize=(8,4))
ax1.plot(convergence.nfe, convergence.epsilon_progress)
ax1.set_ylabel('$\epsilon$-progress')
ax2.plot(convergence.nfe, convergence.hypervolume)
ax2.set_ylabel('hypervolume')
ax1.set_xlabel('number of function evaluations')
ax2.set_xlabel('number of function evaluations')
plt.show()
```
---
```
from sklearn.preprocessing import MinMaxScaler
import numpy as np
import pandas as pd
sp500_training_complete = pd.read_csv("GSPC.csv")
sp500_training_processed = sp500_training_complete.iloc[:, 4:5].values
scaler = MinMaxScaler(feature_range = (0, 1))
sp500_training_scaled = scaler.fit_transform(sp500_training_processed)
np.array(sp500_training_scaled)[:50]
def method(x, a, b):
    # rescale x to the interval [a, b] (min-max scaling)
    return (((b - a) * (x - x.min())) / (x.max() - x.min())) + a
my = method(sp500_training_processed,0,1)
np.array(my)
my == sp500_training_scaled
np.array_equal(my, sp500_training_scaled)
import numpy as np
np.append(np.append(my,sp500_training_scaled, axis=1),my-sp500_training_scaled, axis=1)
my.shape
sp500_training_scaled[50-50:50,0].shape
features_set = []
labels = []
for i in range(50, 322):
features_set.append(sp500_training_scaled[i-50:i, 0])
#print(features_set)
labels.append(sp500_training_scaled[i, 0])
#break
features_set, labels = np.array(features_set), np.array(labels)
features_set
features_set.shape
labels
features_set = np.reshape(features_set, (features_set.shape[0], features_set.shape[1], 1))
features_set.shape
x = features_set[0,:,:]
y = np.array(sp500_training_scaled[:50])
x
np.append(x,y,axis=1)
np.reshape(np.array(sp500_training_scaled), (272,-1))
sp500_training_scaled.shape
a = np.arange(10)
print (a)
print (np.reshape(a,(5,2)))
print (np.reshape(a,(5,-1)))
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(features_set.shape[1], 1),unroll=False))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(units = 1))
model.summary()
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
print (features_set.shape)
print (labels.shape)
history = model.fit(features_set, labels, epochs = 100, batch_size = 32)
# note: test_features is built in the cells further below; run those first
predictions = model.predict(test_features)
from math import sqrt
from sklearn.metrics import mean_squared_error
predictions = scaler.inverse_transform(predictions)
sqrt(mean_squared_error(sp500_training_processed[51:322], predictions[51:322]))
test_features
sp500_testing_complete['Close'].max() - sp500_training_complete['Close'].min()
def myrescale(x):
maxval = sp500_training_complete['Close'].max()
    minval = sp500_training_complete['Close'].min() # min/max must come from the original training dataset
#print (maxval-minval)
#print (x*(maxval-minval) + minval)
return (np.array(x*(maxval-minval) + minval))
my_rescale = myrescale(predictions)
print(my_rescale)
predictions
sp500_testing_complete = pd.read_csv("GSPC.csv")
sp500_testing_processed = sp500_testing_complete.iloc[:, 4:5].values
sp500_testing_processed.size
sp500_total = pd.concat((sp500_training_complete['Close'], sp500_testing_complete['Close']), axis=0)
test_inputs = sp500_total[len(sp500_total) - len(sp500_testing_complete) - 50:].values
len(test_inputs)
len(sp500_total) - len(sp500_testing_complete) - 50
644 - (len(sp500_total) - len(sp500_testing_complete) - 50)
test_inputs = test_inputs.reshape(-1,1)
test_inputs = scaler.transform(test_inputs)
test_inputs
test_features = []
for i in range(50, 372):
test_features.append(test_inputs[i-50:i, 0])
test_features = np.array(test_features)
test_features = np.reshape(test_features, (test_features.shape[0], test_features.shape[1], 1))
(test_features[19])
features_set
test_features
```
---
```
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
dataset = load_iris()
X = pd.DataFrame(dataset['data'], columns=dataset['feature_names'])
X
y = pd.DataFrame(dataset['target'])
df = pd.concat([X, y],axis=1)
df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'target']
df
def convert(x):
if x == 0:
x = 'setosa'
elif x == 1:
x = 'versicolor'
else:
x = 'virginica'
return x
df['target'] = df['target'].apply(convert)
df
# Scale the independent variables
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(X)
# use Principal Component Analysis for dimension reduction
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca_X = pca.fit_transform(X)
principalDf = pd.DataFrame(data=pca_X, columns=['PC 1', 'PC 2'])
principalDf
# explained variance ratio of PC 1 and PC 2
pca.explained_variance_ratio_
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize=15)
ax.set_ylabel('Principal Component 2', fontsize=15)
ax.set_title('2 component PCA', fontsize=20)
targets = ['setosa', 'versicolor', 'virginica']
colors = ['r', 'g', 'b']
for target, color in zip(targets,colors):
indicesToKeep = df['target'] == target
ax.scatter(principalDf.loc[indicesToKeep, 'PC 1'],
principalDf.loc[indicesToKeep, 'PC 2'], c=color, s=50)
ax.legend(targets)
ax.grid()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, df['target'],
test_size=0.33, random_state=101)
model = GradientBoostingClassifier(n_estimators=100, max_depth=15, learning_rate=0.1,
verbose=1)
model.fit(X_train, y_train)
prediction = model.predict(X_test)
from sklearn.metrics import confusion_matrix, classification_report
print('Confusion matrix: \n', confusion_matrix(y_test, prediction))
print('\n')
print('Classification report: \n', classification_report(y_test, prediction))
from sklearn.preprocessing import label_binarize
def convert_back(x):
if x == 'setosa':
x = 0
elif x == 'versicolor':
x = 1
else:
x = 2
return x
y_test = y_test.apply(convert_back)
y_test = label_binarize(y_test, classes=[0,1,2])
probs = model.predict_proba(X_test)
print(probs.shape)
print(y_test.shape)
from sklearn.metrics import roc_curve, auc
n_classes = 3
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], probs[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Plot of a ROC curve for a specific class
for i in range(n_classes):
plt.figure()
plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic Curve')
plt.legend(loc="lower right")
plt.show()
```
---
# Introduction to Overfit and Underfit
### Learning objectives
1. Use the Higgs Dataset.
2. Demonstrate overfitting.
3. Strategies to prevent overfitting.
## Introduction
In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model.
As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).
In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.
In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).
The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the train data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.
If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.
To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.
A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/overfit_and_underfit.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Setup
Before getting started, import the necessary packages:
```
!pip install tensorflow==2.7.0
```
**NOTE**: Please ignore any incompatibility warnings and errors.
**NOTE**: Restart your kernel to use updated packages.
```
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
```
## The Higgs Dataset
The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
```
# TODO
# Downloads a file from a URL if it is not already in the cache
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
```
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
```
# A Dataset comprising lines from one or more CSV files.
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
```
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
```
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
```
TensorFlow is most efficient when operating on large batches of data.
So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
```
packed_ds = ds.batch(10000).map(pack_row).unbatch()
```
Have a look at some of the records from this new `packed_ds`.
The features are not perfectly normalized, but this is sufficient for this tutorial.
```
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
```
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
```
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
```
The `Dataset.skip` and `Dataset.take` methods make this easy.
At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
```
# Creates a Dataset with at most count elements from this dataset.
# Creates a Dataset that skips count elements from this dataset.
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
```
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
```
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
```
## Demonstrate overfitting
The simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".
Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.
Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.
To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.
Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them.
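As a quick illustration of counting capacity, the parameter count of a stack of `Dense` layers can be tallied by hand (a minimal sketch; `model.summary()` in the models below reports the same totals):

```python
def dense_param_count(layer_sizes):
    """Parameters in a stack of fully connected layers:
    each layer holds (inputs + 1 bias) * units weights."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# the "Tiny" model below: 28 features -> 16 units -> 1 unit
print(dense_param_count([28, 16, 1]))    # (28+1)*16 + (16+1)*1 = 481
# the "large" model: four hidden layers of 512 units
print(dense_param_count([28, 512, 512, 512, 512, 1]))
```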
### Training procedure
Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
```
# TODO
# A LearningRateSchedule that uses an inverse time decay schedule
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
```
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
```
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
```
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.
The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.
Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.
Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
```
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
```
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
```
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
```
### Tiny model
Start by training a model:
```
# Sequential groups a linear stack of layers into a tf.keras.Model
# TODO
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
```
Now check how the model did:
```
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
```
### Small model
To see if you can beat the performance of the tiny model, progressively train some larger models.
Try two hidden layers with 16 units each:
```
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
```
### Medium model
Now try 3 hidden layers with 64 units each:
```
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
```
And train the model using the same data:
```
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
```
### Large model
As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
```
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
```
And, again, train the model using the same data:
```
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
```
### Plot the training and validation losses
The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model).
While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.
In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.
This is apparent if you plot and compare the validation metrics to the training metrics.
* It's normal for there to be a small difference.
* If both metrics are moving in the same direction, everything is fine.
* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.
* If the validation metric is going in the wrong direction, the model is clearly overfitting.
```
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
```
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress.
## Strategies to prevent overfitting
Before getting into the content of this section, copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
```
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
```
### Add weight regularization
You may be familiar with the principle of Occam's Razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
* [L1 regularization](https://developers.google.com/machine-learning/glossary/#L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
* [L2 regularization](https://developers.google.com/machine-learning/glossary/#L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization penalizes the weight parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common.
In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
```
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
```
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.
That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.
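As a quick numeric sanity check (using a made-up 2×2 weight matrix, not the model's actual weights), the regularization term really is just `0.001 * sum(w**2)`:

```python
import numpy as np

# Hypothetical weight matrix of a Dense layer
w = np.array([[0.5, -0.2],
              [0.1,  0.3]])

# The l2(0.001) regularizer adds this amount to the total loss
penalty = 0.001 * np.sum(w ** 2)  # ~0.00039
```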
So, that same `"Large"` model with an `L2` regularization penalty performs much better:
```
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
```
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters.
#### More info
There are two important things to note about this sort of regularization.
**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
```
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
```
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.
There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`.
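To sketch the distinction (using $\eta$ for the learning rate and $\lambda$ for the decay coefficient): for plain SGD the two approaches coincide, because adding the penalty $\tfrac{\lambda}{2}\lVert w\rVert^2$ to the loss gives the update

$$w \leftarrow w - \eta\,\nabla_w L(w) - \eta\,\lambda\,w,$$

which is exactly a weight-decay step. With adaptive optimizers such as Adam, however, the in-loss penalty gets rescaled by the per-parameter adaptive learning rates, while decoupled weight decay applies the $-\eta\,\lambda\,w$ term to the weights directly.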
### Add dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.
The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.
Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].
The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
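The zero-and-rescale behavior can be sketched in a few lines of NumPy (illustrative only; modern frameworks, including `tf.keras`, apply the equivalent "inverted dropout", which rescales at training time instead of test time):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])

# Training: zero out a random fraction `rate` of the features and
# rescale the survivors by 1/(1 - rate) ("inverted dropout")
mask = rng.random(activations.shape) >= rate
train_out = activations * mask / (1.0 - rate)

# Test time: the layer is the identity, no units are dropped
test_out = activations
```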
In `tf.keras` you can introduce dropout in a network via the `Dropout` layer, which gets applied to the output of the layer right before it.
Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
```
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
```
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.
Next try them both, together, and see if that does better.
### Combined L2 + dropout
```
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
# Plot the combined run against the earlier baselines
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
```
This model with the `"Combined"` regularization is obviously the best one so far.
## Keras-RL neural network models
# 1. Model
## Different models built on keras
```
# 1.1 Model
## DESCRIPTION : 6 layered Neural Network with dropout
from keras.models import Sequential
from keras.layers import Dense, Dropout
def create_model_1():
    model = Sequential()
    model.add(Dense(128, input_shape=(4,), activation='relu'))  # Layer 1: 128 cells with relu activation
    model.add(Dropout(0.6))
    model.add(Dense(256, activation="relu"))                    # Layer 2: 256 cells with relu activation
    model.add(Dropout(0.6))
    model.add(Dense(512, activation="relu"))                    # Layer 3: 512 cells with relu activation
    model.add(Dropout(0.6))
    model.add(Dense(256, activation="relu"))                    # Layer 4: 256 cells with relu activation
    model.add(Dropout(0.6))
    model.add(Dense(128, activation="relu"))                    # Layer 5: 128 cells with relu activation
    model.add(Dropout(0.6))
    model.add(Dense(2, activation="softmax"))                   # Layer 6: 2-cell softmax output layer
    model.compile(                                              # Configure the learning process
        loss="categorical_crossentropy",
        optimizer="adam",
        metrics=["accuracy"])
    model.summary()
    return model
# 1.2 Model
## DESCRIPTION : dqn_atari
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Convolution2D, Permute
def create_model_atari(nb_actions):
    # NOTE: uses the Keras 1 Convolution2D signature (filters, rows, cols, subsample=strides);
    # in Keras 2 this is Conv2D(filters, (rows, cols), strides=...)
    model_atari = Sequential()
    model_atari.add(Convolution2D(32, 8, 8, subsample=(4, 4)))  # Layer 1: 32 filters of shape (8, 8), stride 4
    model_atari.add(Activation('relu'))
    model_atari.add(Convolution2D(64, 4, 4, subsample=(2, 2)))  # Layer 2: 64 filters of shape (4, 4), stride 2
    model_atari.add(Activation('relu'))
    model_atari.add(Convolution2D(64, 3, 3, subsample=(1, 1)))  # Layer 3: 64 filters of shape (3, 3), stride 1
    model_atari.add(Activation('relu'))
    model_atari.add(Flatten())
    model_atari.add(Dense(512))
    model_atari.add(Dense(nb_actions))                          # One output per available action
    model_atari.add(Activation('linear'))
    model_atari.compile(                                        # Configure the learning process
        loss="categorical_crossentropy",
        optimizer="adam",
        metrics=["accuracy"])
    return model_atari
```
# 2. Policies
## Different policies implemented for keras
#### LinearAnnealedPolicy (kudos to matthiasplappert)
Wraps another policy and decreases a given parameter linearly.
(This policy can be used together with EpsGreedyQPolicy to anneal the eps value from 1 to 0.1.)
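A hypothetical standalone version of such an annealing schedule (not the keras-rl implementation) might look like:

```python
def linear_anneal(step, nb_steps, value_max=1.0, value_min=0.1):
    """Linearly decrease a parameter (e.g. eps) from value_max to
    value_min over nb_steps training steps, then hold it constant."""
    frac = min(step / nb_steps, 1.0)
    return value_max + frac * (value_min - value_max)

# e.g. eps at the start, halfway, and after annealing finishes:
# linear_anneal(0, 100), linear_anneal(50, 100), linear_anneal(200, 100)
```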
#### EpsGreedyQPolicy
The epsilon-greedy policy selects a random action, drawn uniformly from the set of
available actions, with probability epsilon; with probability 1-epsilon it selects
the action with the maximum expected reward in the given state.
As parameters we will select
--epsilon (eps-val) : probability of taking a random action, from 0 to 1 (controls the exploration-exploitation trade-off)
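As a rough sketch (not the keras-rl implementation), epsilon-greedy action selection can be written as:

```python
import random

def eps_greedy(q_values, eps):
    """With probability eps pick a uniformly random action,
    otherwise pick the action with the highest Q-value."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# eps=0 is fully greedy; eps=1 is fully random
eps_greedy([0.1, 0.9, 0.3], 0.0)  # -> 1 (index of the largest Q-value)
```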
#### GreedyQPolicy
Equivalent to the epsilon-greedy policy with epsilon value == 0: it always selects the action with the highest Q-value.
#### BoltzmannQPolicy
Selects actions with probabilities given by a softmax (Boltzmann distribution) over the Q-values.
Parameters
--tau (temperature) : higher values make the action selection more uniform, lower values make it greedier
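A minimal sketch of the underlying softmax over Q-values (illustrative only; the keras-rl class also clips the Q-values for numerical stability):

```python
import math

def boltzmann_probs(q_values, tau=1.0):
    # Softmax over Q-values: lower tau -> greedier selection
    exps = [math.exp(q / tau) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]
```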
#### MaxBoltzmannQPolicy https://pure.uva.nl/ws/files/3153478/8461_UBA003000033.pdf
A combination of the eps-greedy and Boltzmann Q-policy.
#### BoltzmannGumbelQPolicy https://arxiv.org/pdf/1705.10257.pdf
BGE is invariant with respect to the mean of the rewards but not their
variance. The parameter C, which defaults to 1, can be used to correct for
this, and should be set to the least upper bound on the standard deviation
of the rewards.
BGE is only available for training, not testing. For testing purposes, you
can achieve approximately the same result as BGE after training for N steps
on K actions with parameter C by using the BoltzmannQPolicy and setting
tau = C/sqrt(N/K).
For this computer lab, we'll be using the IRIS dataset. Initially, we'll only look at a subset of it, and perform linear regression on two features of a given class.
# 1. Loading the data
### 1.1 Import the necessary modules
We'll use these three different modules, and one of the functions from scikit-learn.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
The last line is needed in order to show matplotlib plots in notebooks.
### 1.2 Read the dataset from a .csv file
Load the [IRIS dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set) using Pandas. The method `read_csv()` returns a `DataFrame` object containing the data found in the provided .csv file.
```
dataset = pd.read_csv("iris.csv")
type(dataset)
```
### 1.3 Analyze the dataset
This dataset is composed of morphological data from three different species of Iris flowers: Setosa, Virginica and Versicolor.
<table style="width:100%">
<tr>
<th> <center>Iris Setosa</center> </th>
<th> <center>Iris Virginica</center> </th>
<th> <center>Iris Versicolor</center> </th>
</tr>
<tr>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg" alt="Iris Setosa"></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg" alt="Iris Virginica"></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/2/27/Blue_Flag%2C_Ottawa.jpg" alt="Iris Versicolor"></td>
</tr>
</table>
The length and width of both the petals and the sepals of each flower, together with its corresponding species, were measured and stored in this dataset. Sepals and petals are both parts of a flower: sepals are the outermost part of the whorl and petals are the innermost part.

Let's take a look at what's inside the dataset now. The attribute `shape` of `DataFrame` objects returns the dimensions of the data inside it.
```
dataset.shape
```
So this dataset has 150 rows and 5 columns. It's easy to infer that this means 150 flowers were collected, and 5 different features were registered for each one. But we can also take a closer look at them, using the method `head()`, which returns the first 5 rows by default (you can also pass a parameter to it, which specifies a different amount of rows to be shown).
```
dataset.head()
```
Here we can see the header names for each column, together with the first rows, confirming that the species and morphologic measurements for each flower were collected. We can extract individual columns of this `DataFrame` by indexing using their names, for instance:
```
dataset["sepal_length"]
```
Additionally, we can check which species are present in the dataset using the `unique` method,
```
print(dataset["species"].unique())
```
where we see that only these three species are present in this dataset, as expected.
We can also learn more about the data types of each column with the method `info`.
```
dataset.info()
```
Here we see that the first four columns' elements are floating point numbers, and the last column's elements are objects (in this case, strings).
### 1.4 Extract the desired data
For this initial task, we are only interested in the setosa species. This corresponds to all the rows which have the column 'species' equal to the string 'setosa'. In order to extract these rows, we use [logical indexing in Pandas](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing).
```
# This returns a boolean series, which we can then use to index our DataFrame object
extract_rule = (dataset['species']=='setosa')
extract_rule
# We use the boolean series to index the DataFrame object
setosa_dataset = dataset[extract_rule]
setosa_dataset
```
Furthermore, we want to investigate the relationship between two features of this species, the 'sepal_length' and 'sepal_width'. To extract these, we [index the `DataFrame` using the name of the columns](https://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-label) we want.
```
x = setosa_dataset['sepal_length'].values
y = setosa_dataset['sepal_width'].values
```
Note that the attribute `values` in a `DataFrame` object returns a numpy array.
```
type(x)
```
Now we can use matplotlib to plot all the examples in a 2D plane, where each dimension is one of the features described earlier.
```
fig, ax = plt.subplots()
ax.scatter(x,y)
ax.set_xlabel('sepal length')
ax.set_ylabel('sepal width');
```
It seems like the relation between these features could be approximated using a linear function, such as
$f(x) = w\cdot x + b$. Let's try finding the parameters $w$ and $b$ that would make the best approximation.
### 1.5 Guess the values of w and b
We'll start with some educated guesses. To make this more convenient, we'll first define a function to plot a scatter plot of the provided data, together with a straight line with parameters specified by the user.
```
# Define a function to plot the data and a parameterized line
def plot_data_and_line(w, b, x, y, ax, line_color='r', line_label=''):
    # Create points lying on the line
    xline = np.unique(x)
    yline = w*xline + b
    # Plot both the line and the points from the dataset
    ax.scatter(x, y, color='C0')
    ax.plot(xline, yline, color=line_color, label=line_label)
    ax.set_xlabel('sepal length')
    ax.set_ylabel('sepal width')
fig, ax = plt.subplots()
plot_data_and_line(1, -1, x, y, ax)
```
Additionally, another way of evaluating the quality of our approximation is to compute the MSE (mean squared error) between the true y features in the dataset and our predictions. So that we can use this value as well to guide our guesses, create a function to compute it (first, it might be beneficial to write down the analytical expression for it).
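One possible way to write it down: with predictions $\hat{y}_i = w\,x_i + b$ for $n$ samples, the mean squared error is

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$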
```
# Create a function to compute the MSE
# YOUR CODE HERE
raise NotImplementedError()
```
Now we can try different values of $w$ and $b$ and see what the resulting linear approximation looks like, compared to the scatter plot of our data. Using both the plot and the MSE, try searching for values of $w$ and $b$ that yield a good approximation.
```
# Guess the values for w and b
# YOUR CODE HERE
raise NotImplementedError()
# Plot your guess
plt.close('all')
fig, ax = plt.subplots()
plot_data_and_line(w, b, x, y, ax);
# Compute MSE of the guess
y_guess = w*x+b
print("MSE of your guess:", MSE(y,y_guess))
```
---
# 2. Training a model for linear regression
Now, instead of trying to find the parameters that give the best approximation by trial and error, we'll use the Keras framework to build and optimize a linear-regressor neural network. More precisely, we'll be using Keras together with TensorFlow.
Tensorflow is what is called the 'backend' of Keras in this case, taking care of the matrix computations and parallelization necessary to speed up the code. Keras, on the other hand, is the Python package we'll use to interface with Tensorflow, using higher-level abstractions. That is, instead of thinking about the low-level details of tensors and the actual computations involved, we'll be thinking about neurons and network architectures, optimizers, etc.
### 2.1 Importing the necessary modules
We start by loading the necessary modules from Keras.
```
from keras.models import Sequential
from keras.layers import Dense
```
### 2.2 Create the Keras model
The problem of linear regression that we've been tackling so far can be seen as training a neural network consisting of only one neuron, with one weight and one bias. This model can be easily specified and trained in Keras, as we'll show ahead.
In Keras, [models can be of two different types](https://keras.io/models/about-keras-models/#about-keras-models). For this task, it's enough to use the simpler one, the [`sequential`](https://keras.io/models/sequential/) model. The following code initializes such a model and then calls its `add` method in order to add the first (and only) layer. Finally, this calls the [`compile`](https://keras.io/models/sequential/#compile) method, in which we configure the learning process by specifying the loss and optimizer to be used.
```
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='zeros', bias_initializer='zeros'))
model.compile(loss='mean_squared_error', optimizer='SGD')
```
### 2.3 Optimize the model
Now that we have specified the model and how we want to train it, calling the [`fit`](https://keras.io/models/sequential/#fit) method starts the training process.
```
model.fit(x,y,epochs=50);
```
Note the final MSE obtained (~0.066). Compare it to the one obtained using the guessed parameters.
### 2.4 Extract optimal parameters
Although the `fit` method does show us the final obtained MSE, it does not display the optimized parameters. To obtain those, we use the `get_weights` method of the layer we're interested in (in this case, the only layer), which returns the weights of the layer as a numpy array.
```
optimal_w_np, optimal_b_np = model.layers[0].get_weights()
type(optimal_w_np)
```
Right now it's more convenient to have these as floating point numbers instead, so we'll go ahead and convert them.
```
optimal_w = float(optimal_w_np)
optimal_b = float(optimal_b_np)
print("w: %.3f" % optimal_w)
print("b: %.3f" % optimal_b)
```
Compare these optimized parameters with the ones you guessed before.
### 2.5 Compare optimal and guessed values
Finally, it's also beneficial to compare the guessed parameters with the optimized ones graphically, by showing both of the predicted lines in the same plot.
```
plt.close('all')
fig, ax = plt.subplots()
plot_data_and_line(w, b, x, y, ax, 'r', 'guess')
plot_data_and_line(optimal_w, optimal_b, x, y, ax, 'b', 'optimal')
ax.legend();
```
---
# Bonus: Visualizing the optimization
To clearly understand what's going on when we call the Keras `fit` function, it's helpful to illustrate the optimization trajectory. That is, we would like to plot the level curves of the loss function in the parameter space, together with the values of `w` and `b` at each time step of the iteration.
First, we import the [`Callback`](https://github.com/keras-team/keras/blob/master/keras/callbacks.py) class, along with Adam and SGD optimizers.
```
from keras.callbacks import Callback
from keras.optimizers import Adam, SGD
```
Create a callback to save the weights at each iteration.
```
# Create a callback to save the weights at each iteration
class saveWeightsCallback(Callback):
    def on_train_begin(self, logs=None):
        self.w = [float(self.model.layers[0].get_weights()[0])]
        self.b = [float(self.model.layers[0].get_weights()[1])]
    def on_epoch_end(self, epoch, logs=None):
        self.w.append(float(self.model.layers[0].get_weights()[0]))
        self.b.append(float(self.model.layers[0].get_weights()[1]))
```
Now we train the model, passing the newly created callback as argument to the `fit` method.
```
# Create a model to be optimized with SGD
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='zeros', bias_initializer='zeros'))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01, momentum=0.1))
# Train it
print('Training...')
history = saveWeightsCallback()
model.fit(x,y,epochs=100, callbacks=[history], verbose=0, batch_size=32);
print('Done!')
plt.plot(history.w, history.b, '-ro');
```
We also need to compute the MSE in a grid of points, so that we can plot level curves with matplotlib's `contour` method.
```
wmin = min(history.w)
wmax = max(history.w)
bmin = min(history.b)
bmax = max(history.b)
# Create grid of values
n = 400
w_range = np.linspace(min(0,wmin),max(2,wmax),n)
b_range = np.linspace(min(-1,bmin), max(1,bmax),n)
w_grid, b_grid = np.meshgrid(w_range, b_range)
# Compute MSE at each one
MSE_grid = np.ndarray((n,n))
for i in range(n):
    for j in range(n):
        w = w_grid[i,j]
        b = b_grid[i,j]
        y_ = w*x + b
        MSE_grid[i,j] = MSE(y_,y)
```
Now we're ready to plot.
However, before doing that, can you guess what the level curves for this loss will look like? Will they be circles, ellipses, something else? Will they be aligned with the axes? Do we have one global optimum or many?
What about the optimization trajectory? What should it look like? What do you expect to see?
```
# Plot the MSE level curves
plt.close('all')
fig, ax = plt.subplots(figsize=(15,10))
p = ax.contour(w_grid, b_grid, MSE_grid, np.linspace(0, 40, 100), cmap='hot')
# Plot the optimization trajectory
ax.plot(history.w, history.b, '-ro')
# Plot the global optimum
optimal = np.polyfit(x,y,1)
ax.plot(optimal[0], optimal[1], 'bx')
# Axis' labels, colorbar
fig.colorbar(p, ax=ax, ticks=np.linspace(0,40,9))
ax.set_xlabel('w')
ax.set_ylabel('b');
```
Try different batch sizes, learning rates, number of epochs, or even a different optimizer, like Adam.
```
from imports import *
from datasets.idd import *
from datasets.bdd import *
from detection.unet import *
from collections import OrderedDict
from torch_cluster import nearest
from fastprogress import master_bar, progress_bar
batch_size=8
num_epochs=1
path = '/home/jupyter/autonue/data'
root_img_path = os.path.join(path,'bdd100k','images','100k')
root_anno_path = os.path.join(path,'bdd100k','labels')
train_img_path = root_img_path+'/train/'
val_img_path = root_img_path+'/val/'
train_anno_json_path = root_anno_path+'/bdd100k_labels_images_train.json'
val_anno_json_path = root_anno_path+'/bdd100k_labels_images_val.json'
print("Loading files")
with open("datalists/bdd100k_train_images_path.txt", "rb") as fp:
    train_img_path_list = pickle.load(fp)
with open("datalists/bdd100k_val_images_path.txt", "rb") as fp:
    val_img_path_list = pickle.load(fp)
src_dataset = dset = BDD(train_img_path_list, train_anno_json_path, get_transform(train=True))
src_dl = torch.utils.data.DataLoader(src_dataset, batch_size=batch_size, shuffle=True, num_workers=4, collate_fn=utils.collate_fn)
with open("datalists/idd_images_path_list.txt", "rb") as fp:
    non_hq_img_paths = pickle.load(fp)
with open("datalists/idd_anno_path_list.txt", "rb") as fp:
    non_hq_anno_paths = pickle.load(fp)
with open("datalists/idd_hq_images_path_list.txt", "rb") as fp:
    hq_img_paths = pickle.load(fp)
with open("datalists/idd_hq_anno_path_list.txt", "rb") as fp:
    hq_anno_paths = pickle.load(fp)
trgt_images = hq_img_paths #non_hq_img_paths #
trgt_annos = hq_anno_paths #non_hq_anno_paths #hq_anno_paths +
trgt_dataset = IDD(trgt_images,trgt_annos,get_transform(train=True))
trgt_dl = torch.utils.data.DataLoader(trgt_dataset, batch_size=batch_size, shuffle=True, num_workers=4,collate_fn=utils.collate_fn)
#src_dataset[0][0].shape,trgt_dataset[0][0].shape
class TransportBlock(nn.Module):
    def __init__(self, backbone, n_channels=256, batch_size=2):
        super(TransportBlock, self).__init__()
        self.backbone = backbone.cuda()
        self.stats = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
        self.batch_size = batch_size
        self.unet = Unet(n_channels).cuda()
        # Freeze the detector backbone; only the U-Net is trained
        for name, p in self.backbone.named_parameters():
            p.requires_grad = False

    def unet_forward(self, x):
        return self.unet(x)

    def transport_loss(self, S_embeddings, T_embeddings, N_cluster=5):
        Loss = 0.
        for batch in range(self.batch_size):
            # Use local names so the batched input tensors are not
            # overwritten between loop iterations
            S = S_embeddings[batch].view(256, -1)
            T = T_embeddings[batch].view(256, -1)
            N_random_vec = S[np.random.choice(S.shape[0], N_cluster)]
            cluster_labels = nearest(S, N_random_vec)
            cluster_centroids = torch.cat([torch.mean(S[cluster_labels == label], dim=0).unsqueeze(0) for label in cluster_labels])
            Target_labels = nearest(T, cluster_centroids)
            target_centroids = []
            for label in cluster_labels:
                if label in Target_labels:
                    target_centroids.append(torch.mean(T[Target_labels == label], dim=0))
                else:
                    target_centroids.append(cluster_centroids[label])
            target_centroids = torch.cat(target_centroids)
            dist = lambda x, y: torch.mean((x - y)**2)
            intra_class_variance = torch.cat([dist(T[Target_labels[label]], target_centroids[label]).unsqueeze(0) for label in cluster_labels])
            centroid_distance = torch.cat([dist(target_centroids[label], cluster_centroids[label]).unsqueeze(0) for label in cluster_labels])
            Loss += torch.mean(centroid_distance * intra_class_variance)  # similar to an earth mover's distance
        return Loss
def get_model(num_classes):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).cpu()
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the pre-trained box-predictor head with a new one for num_classes
    model.roi_heads.box_predictor = torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, num_classes).cpu()
    return model.cpu()
ckpt = torch.load('saved_models/bdd100k_24.pth')
model = get_model(12)
model.load_state_dict(ckpt['model'])
ot = TransportBlock(model.backbone)
params = [p for p in ot.unet.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=1e-3,momentum=0.9, weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer,base_lr=1e-3,max_lr=6e-3)
from detection import transform
transform = transform.GeneralizedRCNNTransform(min_size=800, max_size=1333, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225])
transform.eval()
mb = master_bar(range(num_epochs))
for i in mb:
for trgt_img, _ in progress_bar(trgt_dl,parent=mb):
src_img, _ = next(iter(src_dl))
src_images = list(image.cuda() for image in src_img)
trgt_images = list(image.cuda() for image in trgt_img)
src_images, _ = transform(src_images, None)
src_features = ot.backbone(src_images.tensors)[0]
trgt_images, _ = transform(trgt_images, None)
trgt_features = ot.backbone(trgt_images.tensors)[0]
torch.save(src_features,'src_features.pth')
torch.save(trgt_features,'trgt_features.pth')
modified_trgt_features = ot.unet_forward(trgt_features)
torch.save(modified_trgt_features,'modified_trgt_features.pth')
break
#print(src_features.shape,modified_trgt_features.shape)
# pad if dim of feature maps are not same
if src_features.shape!=modified_trgt_features.shape:
print("Earlier", src_features.shape,modified_trgt_features.shape)
print("Fixing")
if src_features.size(3)<336:
src_features = F.pad(src_features,(336-src_features.size(3),0,0,0)).contiguous()
if modified_trgt_features.size(3)>192:
modified_trgt_features = F.pad(modified_trgt_features,(0,0,192-modified_trgt_features.size(2),0)).contiguous()
if modified_trgt_features.size(3)<336:
modified_trgt_features = F.pad(modified_trgt_features,(336-modified_trgt_features.size(3),0,0,0)).contiguous()
############################################################
#print("Now", src_features.shape,modified_trgt_features.shape)
assert src_features.shape==modified_trgt_features.shape
loss = ot.transport_loss(src_features,modified_trgt_features)
print ("transport_loss: ",loss.item(),"lr: ", optimizer.param_groups[0]["lr"])
optimizer.zero_grad()
loss.backward()
optimizer.step()
lr_scheduler.step()
del src_images,trgt_images,src_features,trgt_features,_
break
torch.save({
'model_state_dict': ot.unet.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
}, 'saved_models/unet.pth')
```
```
!echo "Last updated:" `date`
```
Resources for learning TFP
- https://www.tensorflow.org/probability/api_docs/python/tfp/mcmc/NoUTurnSampler
- https://www.tensorflow.org/probability/overview
- https://www.tensorflow.org/probability/api_docs/python/tfp/mcmc
- https://www.tensorflow.org/probability/examples/Modeling_with_JointDistribution
- https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model
To better understand event_shape, batch_shape, sample_shape:
- https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Distribution
- https://www.youtube.com/watch?v=zWXTpZX4PPo
```
import json
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
from tensorflow_probability import bijectors as tfb
# Default data type for tensorflow tensors.
dtype = tf.float64
# Set random seeds for reproducibility
np.random.seed(1)
tf.random.set_seed(1)
# Thanks to Dave Moore for extending this to work with batch dimensions!
# This turns out to be necessary for ADVI to work properly.
def stickbreak(v):
    batch_ndims = len(v.shape) - 1
    cumprod_one_minus_v = tf.math.cumprod(1 - v, axis=-1)
    one_v = tf.pad(v, [[0, 0]] * batch_ndims + [[0, 1]], "CONSTANT", constant_values=1)
    c_one = tf.pad(cumprod_one_minus_v, [[0, 0]] * batch_ndims + [[1, 0]], "CONSTANT", constant_values=1)
    return one_v * c_one
# Example:
# stickbreak(np.random.rand(2, 3))  # Last dimension is the number of stick breaks.
# Returns a tensor of shape (2, 4).
#
# stickbreak(np.random.rand(2, 3, 4))  # Last dimension is the number of stick breaks.
# Returns a tensor of shape (2, 3, 5).
#
# The last dimensions sum to 1.
# stickbreak(np.random.randn(2,3,4)).numpy().sum(-1)
# See: https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/MixtureSameFamily
# See: https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model
def create_dp_sb_gmm(nobs, K, dtype=np.float64):
return tfd.JointDistributionNamed(dict(
# Mixture means
mu = tfd.Independent(
tfd.Normal(np.zeros(K, dtype), 3),
reinterpreted_batch_ndims=1
),
# Mixture scales
sigma = tfd.Independent(
tfd.LogNormal(loc=np.full(K, - 2, dtype), scale=0.5),
reinterpreted_batch_ndims=1
),
# Mixture weights (stick-breaking construction)
alpha = tfd.Gamma(concentration=np.float64(1.0), rate=10.0),
v = lambda alpha: tfd.Independent(
# NOTE: Dave Moore suggests doing this instead, to ensure
# that a batch dimension in alpha doesn't conflict with
# the other parameters.
tfd.Beta(np.ones(K - 1, dtype), alpha[..., tf.newaxis]),
reinterpreted_batch_ndims=1
),
# Observations (likelihood)
obs = lambda mu, sigma, v: tfd.Sample(tfd.MixtureSameFamily(
# This will be marginalized over.
mixture_distribution=tfd.Categorical(probs=stickbreak(v)),
components_distribution=tfd.Normal(mu, sigma)),
sample_shape=nobs)
))
# Example usages:
# dp_sb_gmm = create_dp_sb_gmm(13, 5)
# sample = dp_sb_gmm.sample()
# dp_sb_gmm.log_prob(**sample)
# print(dp_sb_gmm.resolve_graph())
# print(sample)
#
# dp_sb_gmm.log_prob(mu=tfd.Normal(np.float64(0), 1).sample(5),
# sigma=tfd.Uniform(np.float64(0), 1).sample(5),
# alpha=tf.cast(1, dtype),
# v=tfd.Beta(np.float64(1), 1).sample(5 - 1),
# obs=np.random.randn(1000))
# Make sure that `model.sample()` AND `model.sample(N)` works for N > 1!
# create_dp_sb_gmm(13, 5).sample()
# create_dp_sb_gmm(13, 5).sample((1,2))
# Read simulated data.
path_to_data = '../data/gmm-data-n200.json'
with open(path_to_data) as f:
simdata = json.load(f)
# Give data the correct type.
y = np.array(simdata['y'], dtype=np.float64)
# Plot histogram of data.
plt.hist(y, density=True, bins=30)
plt.xlabel('data (y)')
plt.ylabel('density')
plt.title('Histogram of data');
# Helper for plotting posterior distribution of a given parameter.
def plot_param_post(param, param_name, param_full_name, level=95, figsize=(12, 4), truth=None):
plt.figure(figsize=figsize)
ci_lower = (100 - level) / 2
ci_upper = (100 + level) / 2
plt.subplot(1, 2, 1)
plt.boxplot(param, whis=[ci_lower, ci_upper], showmeans=True, showfliers=False)
plt.xlabel('mixture components')
plt.ylabel(param_full_name)
plt.title('{}% Credible Intervals for {}'.format(level, param_full_name))
if truth is not None:
for line in truth:
plt.axhline(line, ls=':')
plt.subplot(1, 2, 2)
plt.plot(param);
plt.xlabel('iterations')
plt.ylabel(param_full_name)
plt.title('Trace plot of {}'.format(param_full_name));
# Helper for plotting posterior distribution of all model parameters.
def plot_all_params(output, target_logprob_fn):
mu = output['mu'].numpy()
sigma = output['sigma'].numpy()
v = output['v']
alpha = output['alpha'].numpy()
eta = stickbreak(v).numpy()
plot_param_post(eta, 'eta', 'mixture weights', truth=simdata['w'])
plot_param_post(mu, 'mu', 'mixture locations', truth=simdata['mu'])
plot_param_post(sigma, 'sigma', 'mixture scales', truth=simdata['sig'])
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.hist(alpha, bins=30, density=True);
plt.xlabel("alpha")
plt.ylabel("density")
plt.title("Posterior distribution of alpha");
plt.subplot(1, 2, 2)
# Plot log joint posterior (unnormalized)
lp = [target_logprob_fn(mu=mu[i], sigma=sigma[i], alpha=alpha[i], v=v[i]) for i in range(len(mu))]
lp = np.vstack(lp).ravel()
plt.plot(lp)
plt.xlabel("iteration (post-burn)")
plt.ylabel("log joint posterior density (unnormalized)");
```
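The stick-breaking transform above is easy to sanity-check outside TensorFlow. The following is a pure-NumPy analogue (`stickbreak_np` is a hypothetical name used only here): given `K - 1` stick fractions in `(0, 1)`, it should always return `K` weights that sum to one.

```python
import numpy as np

def stickbreak_np(v):
    # Pure-NumPy analogue of the TF `stickbreak` above: turns K-1
    # stick fractions in (0, 1) into K mixture weights summing to 1.
    v = np.asarray(v, dtype=float)
    cumprod_one_minus_v = np.cumprod(1 - v, axis=-1)
    one_v = np.concatenate([v, np.ones(v.shape[:-1] + (1,))], axis=-1)
    c_one = np.concatenate([np.ones(v.shape[:-1] + (1,)), cumprod_one_minus_v], axis=-1)
    return one_v * c_one

stickbreak_np([0.5, 0.5, 0.5])  # array([0.5, 0.25, 0.125, 0.125])
```

With each stick fraction at 0.5, half of the remaining stick is broken off at every step, and the final weight picks up the leftover length, so the weights sum to exactly one.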
# Model Creation
```
# Number of mixture components.
ncomponents = 10
print('Create model ...')
model = create_dp_sb_gmm(nobs=len(simdata['y']), K=ncomponents)
# This allows the model to figure out dimensions of prob vector.
_ = model.sample()
print('Define log unnormalized joint posterior density ...')
def target_log_prob_fn(mu, sigma, alpha, v):
return model.log_prob(obs=y, mu=mu, sigma=sigma, alpha=alpha, v=v)
```
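Inside `target_log_prob_fn`, the `obs` term is a `MixtureSameFamily` distribution, which marginalizes the discrete component label out of the likelihood. As a hedged pure-NumPy sketch (hypothetical helper `gmm_loglik`, for intuition only, not used by the notebook), each datapoint contributes a log-sum-exp over components:

```python
import numpy as np

def gmm_loglik(y, w, mu, sigma):
    # Marginal log-likelihood of a K-component Gaussian mixture -- the
    # quantity MixtureSameFamily computes by summing out the label:
    # log p(y_i) = logsumexp_k [ log w_k + log N(y_i | mu_k, sigma_k) ].
    y = np.asarray(y, dtype=float)[:, None]            # (N, 1)
    w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
    log_comp = (np.log(w)
                - 0.5 * np.log(2 * np.pi * sigma ** 2)
                - 0.5 * ((y - mu) / sigma) ** 2)       # (N, K)
    m = log_comp.max(axis=1, keepdims=True)            # stabilize logsumexp
    return float((m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))).sum())
```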
***
# ADVI
Credits: Thanks to Dave Moore at bayesflow for helping with the implementation!
```
# This cell contains everything for initializing the
# variational distribution, which approximates the true posterior.
# ADVI is quite sensitive to the initial distribution.
tf.random.set_seed(7) # 7
# Create variational parameters.
qmu_loc = tf.Variable(tf.random.normal([ncomponents], dtype=np.float64) * 3, name='qmu_loc')
qmu_rho = tf.Variable(tf.random.normal([ncomponents], dtype=np.float64) * 2, name='qmu_rho')
qsigma_loc = tf.Variable(tf.random.normal([ncomponents], dtype=np.float64) - 2, name='qsigma_loc')
qsigma_rho = tf.Variable(tf.random.normal([ncomponents], dtype=np.float64) - 2, name='qsigma_rho')
qv_loc = tf.Variable(tf.random.normal([ncomponents - 1], dtype=np.float64) - 2, name='qv_loc')
qv_rho = tf.Variable(tf.random.normal([ncomponents - 1], dtype=np.float64) - 1, name='qv_rho')
qalpha_loc = tf.Variable(tf.random.normal([], dtype=np.float64), name='qalpha_loc')
qalpha_rho = tf.Variable(tf.random.normal([], dtype=np.float64), name='qalpha_rho')
# Create variational distribution.
surrogate_posterior = tfd.JointDistributionNamed(dict(
# qmu
mu=tfd.Independent(tfd.Normal(qmu_loc, tf.nn.softplus(qmu_rho)), reinterpreted_batch_ndims=1),
# qsigma
sigma=tfd.Independent(tfd.LogNormal(qsigma_loc, tf.nn.softplus(qsigma_rho)), reinterpreted_batch_ndims=1),
# qv
v=tfd.Independent(tfd.LogitNormal(qv_loc, tf.nn.softplus(qv_rho)), reinterpreted_batch_ndims=1),
# qalpha
alpha=tfd.LogNormal(qalpha_loc, tf.nn.softplus(qalpha_rho))))
# Run optimizer
# @tf.function(autograph=False) , experimental_compile=True) # Makes slower?
def run_advi(optimizer, sample_size=1, num_steps=2000, seed=1):
return tfp.vi.fit_surrogate_posterior(
target_log_prob_fn=target_log_prob_fn,
surrogate_posterior=surrogate_posterior,
optimizer=optimizer,
sample_size=sample_size,
seed=seed, num_steps=num_steps) # 200, 2000
opt = tf.optimizers.Adam(learning_rate=1e-2)
%time losses = run_advi(opt, sample_size=1)
plt.plot(losses.numpy())
plt.xlabel('Optimizer Iteration')
plt.ylabel('ELBO');
# Extract posterior samples from VI
advi_output = surrogate_posterior.sample(500)
plot_all_params(advi_output, target_log_prob_fn)
```
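Each `*_rho` variable above is unconstrained; `tf.nn.softplus` maps it to a strictly positive scale for the variational distribution. A quick NumPy sketch of that map, assuming the standard numerically stable formulation (this `softplus` is an illustrative stand-in, not the TF op itself):

```python
import numpy as np

def softplus(rho):
    # Numerically stable log(1 + exp(rho)): behaves like the identity
    # for large positive rho and decays to a small positive value for
    # large negative rho, so the resulting scale is always > 0.
    rho = np.asarray(rho, dtype=float)
    return np.maximum(rho, 0.0) + np.log1p(np.exp(-np.abs(rho)))

softplus(0.0)  # log(2) ~= 0.6931
```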
***
# MCMC
```
# Create initial values.
# For HMC, NUTS. Not necessary for ADVI, as ADVI has surrogate_posterior.
def generate_initial_state():
return [
tf.zeros(ncomponents, dtype, name='mu'),
tf.ones(ncomponents, dtype, name='sigma') * 0.1,
tf.ones([], dtype, name='alpha') * 0.5,
tf.fill(ncomponents - 1, value=np.float64(0.5), name='v')
]
# Create bijectors to transform unconstrained to and from constrained parameters-space.
# For example, if X ~ Exponential(theta), then X is constrained to be positive. A transformation
# that puts X onto an unconstrained space is Y = log(X). In that case, the bijector used
# should be the **inverse-transform**, which is exp(.) (i.e. so that X = exp(Y)).
#
# NOTE: Define the inverse-transforms for each parameter in sequence.
bijectors = [
tfb.Identity(), # mu
tfb.Exp(), # sigma
tfb.Exp(), # alpha
tfb.Sigmoid() # v
]
```
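To make the bijector direction concrete: `tfb.Sigmoid` is the forward map from the unconstrained space the sampler explores to the constrained space `(0, 1)` where each stick fraction `v` lives; its inverse is the logit. A minimal NumPy sketch of this pair (illustrative helpers, not the TFP bijector API):

```python
import numpy as np

def sigmoid(x):
    # Forward map: unconstrained real -> (0, 1), like tfb.Sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    # Inverse map: (0, 1) -> unconstrained real.
    return np.log(p) - np.log1p(-p)
```

The round trip `logit(sigmoid(x)) == x` is exactly what lets the sampler work in an unconstrained space while `target_log_prob_fn` sees valid constrained values.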
## HMC
```
# Define HMC sampler.
@tf.function(autograph=False, experimental_compile=True)
def hmc_sample(num_results, num_burnin_steps, current_state, step_size=0.01, num_leapfrog_steps=100):
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=current_state,
kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=step_size, num_leapfrog_steps=num_leapfrog_steps, seed=1),
bijector=bijectors),
trace_fn = lambda _, pkr: pkr.inner_results.is_accepted)
tf.random.set_seed(7)
# Compile time.
current_state = generate_initial_state() # generate initial values.
%time [mu, sigma, alpha, v], is_accepted = hmc_sample(1, 1, current_state=current_state)
# Run time.
current_state = generate_initial_state() # generate initial values.
%time [mu, sigma, alpha, v], is_accepted = hmc_sample(500, 500, current_state=current_state) # 14.2 seconds.
# Store posterior samples.
hmc_output = dict(mu=mu, sigma=sigma, alpha=alpha, v=v, acceptance_rate=is_accepted.numpy().mean())
# HMC posterior inference
plot_all_params(hmc_output, target_log_prob_fn)
```
## NUTS
```
# Define Nuts sampler.
# Improve performance by tracing the sampler using `tf.function`
# and compiling it using XLA.
@tf.function(autograph=False, experimental_compile=True)
def nuts_sample(num_results, num_burnin_steps, current_state, max_tree_depth=10):
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=current_state,
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=target_log_prob_fn,
max_tree_depth=max_tree_depth, step_size=0.01, seed=1),
bijector=bijectors),
num_adaptation_steps=num_burnin_steps, # should be smaller than burn-in.
target_accept_prob=0.8),
trace_fn = lambda _, pkr: pkr.inner_results.inner_results.is_accepted)
tf.random.set_seed(7)
# Compile time.
current_state = generate_initial_state()
%time [mu, sigma, alpha, v], is_accepted = nuts_sample(1, 1, current_state=current_state, max_tree_depth=1) # 15 seconds.
# Run time.
current_state = generate_initial_state()
%time [mu, sigma, alpha, v], is_accepted = nuts_sample(500, 500, current_state=current_state) # 36 seconds.
# Store posterior samples.
nuts_output = dict(mu=mu, sigma=sigma, alpha=alpha, v=v, acceptance_rate=is_accepted.numpy().mean())
# NUTS posterior inference
plot_all_params(nuts_output, target_log_prob_fn)
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Quantum Convolutional Neural Network
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/qcnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial implements a simplified <a href="https://www.nature.com/articles/s41567-019-0648-8" class="external">Quantum Convolutional Neural Network</a> (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also *translationally invariant*.
This example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> that may or may not have an excitation, which is what the QCNN will learn to detect. (The dataset used in the paper was SPT phase classification.)
## Setup
```
!pip install tensorflow==2.4.1
```
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
```
Now import TensorFlow and the module dependencies:
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
## 1. Build a QCNN
### 1.1 Assemble circuits in a TensorFlow graph
TensorFlow Quantum (TFQ) provides layer classes designed for in-graph circuit construction. One example is the `tfq.layers.AddCircuit` layer that inherits from `tf.keras.Layer`. This layer can either prepend or append to the input batch of circuits, as shown in the following figure.
<img src="./images/qcnn_1.png" width="700">
The following snippet uses this layer:
```
qubit = cirq.GridQubit(0, 0)
# Define some circuits.
circuit1 = cirq.Circuit(cirq.X(qubit))
circuit2 = cirq.Circuit(cirq.H(qubit))
# Convert to a tensor.
input_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])
# Define a circuit that we want to append
y_circuit = cirq.Circuit(cirq.Y(qubit))
# Instantiate our layer
y_appender = tfq.layers.AddCircuit()
# Run our circuit tensor through the layer and save the output.
output_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)
```
Examine the input tensor:
```
print(tfq.from_tensor(input_circuit_tensor))
```
And examine the output tensor:
```
print(tfq.from_tensor(output_circuit_tensor))
```
While it is possible to run the examples below without using `tfq.layers.AddCircuit`, it's a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.
### 1.2 Problem overview
You will prepare a *cluster state* and train a quantum classifier to detect if it is "excited" or not. The cluster state is highly entangled but not necessarily difficult for a classical computer. For clarity, this is a simpler dataset than the one used in the paper.
For this classification task you will implement a deep <a href="https://arxiv.org/pdf/quant-ph/0610099.pdf" class="external">MERA</a>-like QCNN architecture since:
1. Like the QCNN, the cluster state on a ring is translationally invariant.
2. The cluster state is highly entangled.
This architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.
<img src="./images/qcnn_2.png" width="1000">
An "excited" cluster state is defined as a cluster state that had a `cirq.rx` gate applied to any of its qubits. Qconv and QPool are discussed later in this tutorial.
### 1.3 Building blocks for TensorFlow
<img src="./images/qcnn_3.png" width="1000">
One way to solve this problem with TensorFlow Quantum is to implement the following:
1. The input to the model is a circuit tensor—either an empty circuit or an X gate on a particular qubit indicating an excitation.
2. The rest of the model's quantum components are constructed with `tfq.layers.AddCircuit` layers.
3. For inference a `tfq.layers.PQC` layer is used. This reads $\langle \hat{Z} \rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.
### 1.4 Data
Before building your model, you can generate your data. In this case it's going to be excitations to the cluster state (the original paper uses a more complicated dataset). Excitations are represented with `cirq.rx` gates applied at a random angle. In the code below, a rotation of magnitude at most $\pi/2$ is labeled `1` and a larger rotation is labeled `-1`.
```
def generate_data(qubits):
"""Generate training and testing data."""
n_rounds = 20 # Produces n_rounds * n_qubits datapoints.
excitations = []
labels = []
for n in range(n_rounds):
for bit in qubits:
rng = np.random.uniform(-np.pi, np.pi)
excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))
labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)
split_ind = int(len(excitations) * 0.7)
train_excitations = excitations[:split_ind]
test_excitations = excitations[split_ind:]
train_labels = labels[:split_ind]
test_labels = labels[split_ind:]
return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \
tfq.convert_to_tensor(test_excitations), np.array(test_labels)
```
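The labeling rule inside `generate_data` can be isolated as a one-liner for clarity (`excitation_label` is a hypothetical name used only in this sketch):

```python
import numpy as np

def excitation_label(angle):
    # Mirrors the rule in generate_data above: rotations of magnitude
    # at most pi/2 get label 1, larger rotations get -1.
    return 1 if -np.pi / 2 <= angle <= np.pi / 2 else -1
```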
You can see that just like with regular machine learning you create a training and testing set to use to benchmark the model. You can quickly look at some datapoints with:
```
sample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))
print('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])
print('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])
```
### 1.5 Define layers
Now define the layers shown in the figure above in TensorFlow.
#### 1.5.1 Cluster state
The first step is to define the <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> using <a href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Google-provided framework for programming quantum circuits. Since this is a static part of the model, embed it using the `tfq.layers.AddCircuit` functionality.
```
def cluster_state_circuit(bits):
"""Return a cluster state on the qubits in `bits`."""
circuit = cirq.Circuit()
circuit.append(cirq.H.on_each(bits))
for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
circuit.append(cirq.CZ(this_bit, next_bit))
return circuit
```
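The `zip(bits, bits[1:] + [bits[0]])` idiom above pairs each qubit with its neighbor and wraps the last qubit back to the first, which is what makes the cluster state live on a ring. Isolated as a small sketch (`ring_pairs` is an illustrative helper, not part of the tutorial):

```python
def ring_pairs(bits):
    # Neighbor pairing used by cluster_state_circuit above: each qubit
    # is CZ-coupled to the next, wrapping around to close the ring.
    return list(zip(bits, bits[1:] + bits[:1]))

ring_pairs([0, 1, 2, 3])  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```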
Display a cluster state circuit for a rectangle of <a href="https://cirq.readthedocs.io/en/stable/generated/cirq.GridQubit.html" class="external"><code>cirq.GridQubit</code></a>s:
```
SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))
```
#### 1.5.2 QCNN layers
Define the layers that make up the model using the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin QCNN paper</a>. There are a few prerequisites:
* The one- and two-qubit parameterized unitary matrices from the <a href="https://arxiv.org/abs/quant-ph/0507171" class="external">Tucci paper</a>.
* A general parameterized two-qubit pooling operation.
```
def one_qubit_unitary(bit, symbols):
"""Make a Cirq circuit enacting a rotation of the Bloch sphere about the X,
Y and Z axes, depending on the values in `symbols`.
"""
return cirq.Circuit(
cirq.X(bit)**symbols[0],
cirq.Y(bit)**symbols[1],
cirq.Z(bit)**symbols[2])
def two_qubit_unitary(bits, symbols):
"""Make a Cirq circuit that creates an arbitrary two qubit unitary."""
circuit = cirq.Circuit()
circuit += one_qubit_unitary(bits[0], symbols[0:3])
circuit += one_qubit_unitary(bits[1], symbols[3:6])
circuit += [cirq.ZZ(*bits)**symbols[6]]
circuit += [cirq.YY(*bits)**symbols[7]]
circuit += [cirq.XX(*bits)**symbols[8]]
circuit += one_qubit_unitary(bits[0], symbols[9:12])
circuit += one_qubit_unitary(bits[1], symbols[12:])
return circuit
def two_qubit_pool(source_qubit, sink_qubit, symbols):
"""Make a Cirq circuit to do a parameterized 'pooling' operation, which
attempts to reduce entanglement down from two qubits to just one."""
pool_circuit = cirq.Circuit()
sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])
source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])
pool_circuit.append(sink_basis_selector)
pool_circuit.append(source_basis_selector)
pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))
pool_circuit.append(sink_basis_selector**-1)
return pool_circuit
```
To see what you created, print out the one-qubit unitary circuit:
```
SVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))
```
And the two-qubit unitary circuit:
```
SVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))
```
And the two-qubit pooling circuit:
```
SVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), sympy.symbols('x0:6')))
```
##### 1.5.2.1 Quantum convolution
As in the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin</a> paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.
```
def quantum_conv_circuit(bits, symbols):
"""Quantum Convolution Layer following the above diagram.
Return a Cirq circuit with the cascade of `two_qubit_unitary` applied
to all pairs of qubits in `bits` as in the diagram above.
"""
circuit = cirq.Circuit()
for first, second in zip(bits[0::2], bits[1::2]):
circuit += two_qubit_unitary([first, second], symbols)
for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):
circuit += two_qubit_unitary([first, second], symbols)
return circuit
```
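The two `zip` loops above apply the unitary to even/odd pairs first, then to odd/even pairs with wrap-around, giving a stride of one over the ring. The pairing alone can be checked without Cirq (`conv_pairs` is a hypothetical helper for illustration):

```python
def conv_pairs(bits):
    # Pairings used by quantum_conv_circuit above: even/odd pairs,
    # then odd/even pairs wrapping back to the first qubit.
    first = list(zip(bits[0::2], bits[1::2]))
    second = list(zip(bits[1::2], bits[2::2] + [bits[0]]))
    return first + second

conv_pairs([0, 1, 2, 3])  # [(0, 1), (2, 3), (1, 2), (3, 0)]
```

Note that every adjacent pair on the ring appears exactly once, so for `n` qubits there are `n` two-qubit unitaries per convolution layer.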
Display the (very horizontal) circuit:
```
SVGCircuit(
quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))
```
##### 1.5.2.2 Quantum pooling
A quantum pooling layer pools from $N$ qubits to $\frac{N}{2}$ qubits using the two-qubit pool defined above.
```
def quantum_pool_circuit(source_bits, sink_bits, symbols):
"""A layer that specifies a quantum pooling operation.
A Quantum pool tries to learn to pool the relevant information from two
qubits onto 1.
"""
circuit = cirq.Circuit()
for source, sink in zip(source_bits, sink_bits):
circuit += two_qubit_pool(source, sink, symbols)
return circuit
```
Examine a pooling component circuit:
```
test_bits = cirq.GridQubit.rect(1, 8)
SVGCircuit(
quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))
```
### 1.6 Model definition
Now use the defined layers to construct a purely quantum CNN. Start with eight qubits, pool down to one, then measure $\langle \hat{Z} \rangle$.
```
def create_model_circuit(qubits):
"""Create sequence of alternating convolution and pooling operators
which gradually shrink over time."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:63')
# Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum
# scans incoming circuits and replaces these with TensorFlow variables.
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],
symbols[36:42])
model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])
model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],
symbols[57:63])
return model_circuit
# Create our qubits and readout operators in Cirq.
cluster_state_bits = cirq.GridQubit.rect(1, 8)
readout_operators = cirq.Z(cluster_state_bits[-1])
# Build a sequential model enacting the logic in 1.3 of this notebook.
# Here the static cluster state prep is made part of the AddCircuit layer, and the
# "quantum datapoints" come in the form of excitation circuits.
excitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state = tfq.layers.AddCircuit()(
excitation_input, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),
readout_operators)(cluster_state)
qcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
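A quick arithmetic check on the symbol budget in `create_model_circuit`: each convolution layer reuses the 15 symbols of the two-qubit unitary and each pooling layer the 6 symbols of the pool, over three conv/pool rounds. This sketch just verifies the count matches `sympy.symbols('qconv0:63')`:

```python
# Parameter counts taken from the layer definitions above.
conv_params, pool_params = 15, 6
total_params = 3 * (conv_params + pool_params)  # three conv/pool rounds
total_params  # 63
```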
### 1.7 Train the model
Train the model over the full batch to simplify this example.
```
# Generate some training data.
train_excitations, train_labels, test_excitations, test_labels = generate_data(
cluster_state_bits)
# Custom accuracy metric.
@tf.function
def custom_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true)
y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)
return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
qcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
history = qcnn_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations, test_labels))
plt.plot(history.history['loss'][1:], label='Training')
plt.plot(history.history['val_loss'][1:], label='Validation')
plt.title('Training a Quantum CNN to Detect Excited Cluster States')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
## 2. Hybrid models
You don't have to go from eight qubits to one qubit using quantum convolution—you could have done one or two rounds of quantum convolution and fed the results into a classical neural network. This section explores quantum-classical hybrid models.
### 2.1 Hybrid model with a single quantum filter
Apply one layer of quantum convolution, reading out $\langle \hat{Z}_n \rangle$ on all bits, followed by a densely-connected neural network.
<img src="./images/qcnn_5.png" width="1000">
#### 2.1.1 Model definition
```
# 1-local operators to read out
readouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]
def multi_readout_model_circuit(qubits):
"""Make a model circuit with fewer quantum pool and conv operations."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:21')
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
return model_circuit
# Build a model enacting the logic in 2.1 of this notebook.
excitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_dual = tfq.layers.AddCircuit()(
excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model_dual = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_dual)
d1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)
d2_dual = tf.keras.layers.Dense(1)(d1_dual)
hybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])
# Display the model architecture
tf.keras.utils.plot_model(hybrid_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
#### 2.1.2 Train the model
```
hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
hybrid_history = hybrid_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
```
As you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.
### 2.2 Hybrid convolution with multiple quantum filters
Now let's try an architecture that uses multiple quantum convolutions and a classical neural network to combine them.
<img src="./images/qcnn_6.png" width="1000">
#### 2.2.1 Model definition
```
excitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_multi = tfq.layers.AddCircuit()(
excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))
# apply 3 different filters and measure expectation values
quantum_model_multi1 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi2 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi3 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
# concatenate outputs and feed into a small classical NN
concat_out = tf.keras.layers.concatenate(
[quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])
dense_1 = tf.keras.layers.Dense(8)(concat_out)
dense_2 = tf.keras.layers.Dense(1)(dense_1)
multi_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],
outputs=[dense_2])
# Display the model architecture
tf.keras.utils.plot_model(multi_qconv_model,
show_shapes=True,
show_layer_names=True,
dpi=70)
```
#### 2.2.2 Train the model
```
multi_qconv_model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
multi_qconv_history = multi_qconv_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')
plt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],
label='Hybrid CNN \n Multiple Quantum Filters')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
```
# Travel.State.Gov Visa Issuances
**Data Source:** [Monthly Immigrant Visa Issuance Statistics](https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics/immigrant-visa-statistics/monthly-immigrant-visa-issuances.html) <br>
**Download the Output:** [here](../data/extracted_data/state-dept)
## Overview
This notebook provides functionality to "scrape" or extract all data from the PDF files found on the State Department Monthly Immigrant Visa Issuance Statistics page. The State Department releases monthly data on visa issuances, for both immigrant visas and non-immigrant visas.
The PDFs come in two forms.
* Posts --> Provides the counts of visas by post and class.
* FSC (Foreign State of Chargeability, or Place of Birth) --> Provides the counts of visas granted by FSC and by visa class.
<img src="../misc/images/monthly_visa_stats_pdf.png" width=500/>
In this notebook we will download these PDFs, extract structured data from them, and then create different data exports.
## Technical Approach
Using Python we will programmatically download the PDFs, extract the information from them using [tabula](https://github.com/chezou/tabula-py), and finally combine the data sources to create a more comprehensive dataset.
## Skills Learned
1. How to download all PDF files to a local directory
2. How to extract structured data from all PDFs and recode the visa types to narrower categories.
3. How to summarize this data.
## The Code
**PLEASE NOTE**: We have made this notebook READ only to ensure you receive all updates we make to it. Do not edit this notebook directly, create a copy instead.
To customize and experiment with this notebook:
1. Create a copy: `Select File -> Make a Copy` at the top-left of the notebook
2. Unlock cells in your copy: Press `CMD + A` on your keyboard to select all cells, then click the small unlocked padlock button near the mid-top right of the notebook.
```
import logging
import logging.config
from pathlib import Path
import requests
from bs4 import BeautifulSoup
import pandas as pd
from PyPDF2 import PdfFileReader
import tabula
import time
from urllib.parse import urljoin, urlparse
pd.set_option("display.max_rows", 400)
today_date = time.strftime("%Y-%m-%d")
```
## 1. Download PDFs
**Functions**
```
def download_pdf(url: str, name: str, output_directory: str):
"""
Function to download a single PDF file from a provided link.
Parameters:
url: URL of the file you want to download
name: Label you want to apply to the file
output_folder: Folder path to save file
Returns:
Saves the file to the output directory, function itself returns nothing.
Example:
download_pdf(
'https://travel.state.gov/content/dam/visas/Statistics/Immigrant-Statistics/MonthlyIVIssuances/JULY%202021%20-%20IV%20Issuances%20by%20Post%20and%20Visa%20Class.pdf',
'July 2021 - IV Issuances by Post and Visa Class',
'state-dept/'
)
"""
output_directory = Path(output_directory)
response = requests.get(url)
if response.status_code == 200:
# Write content in pdf file
outpath = output_directory / f"{name}.pdf"
pdf = open(str(outpath), "wb")
pdf.write(response.content)
pdf.close()
print("File ", f"{name}.pdf", " downloaded")
else:
print("File ", f"{name}.pdf", " not found.")
def download_all_pdf_links(url: str, output_directory: str):
"""
Download all PDFs on a webpage where the PDFs
are presented as links. Uses the download_pdf function
defined above.
Parameters:
url (str): URL for website with links to many PDF documents, each PDF link
must be a direct download URL and not a URL to another website with PDF links.
output_directory: Folder path to save file
Returns:
None, but saves many files to the output directory.
Examples:
download_all_pdf_links(
'https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics/immigrant-visa-statistics/monthly-immigrant-visa-issuances.html',
'state-dept')
"""
output_directory = Path(output_directory)
output_directory.mkdir(exist_ok=True, parents=True)
parse_url = urlparse(url)
base_url = f"{parse_url.scheme}://{parse_url.netloc}"
# Request URL and get response object
response = requests.get(url)
# Parse text obtained
soup = BeautifulSoup(response.text, "html.parser")
# Find all hyperlinks present on webpage
links = soup.find_all("a")
# Iterate through links we found,
# if it's a PDF link, download the PDF and save in output_directory
for link in links:
if ".pdf" in link.get("href", []):
name = link.text
url = f"{base_url}/{link.get('href')}"
download_pdf(url, name, output_directory)
print("All PDF files downloaded")
```
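As an aside, the functions above rebuild absolute URLs with an f-string (`f"{base_url}/{link.get('href')}"`). The `urljoin` helper, imported at the top but otherwise unused, handles the same cases more robustly; a minimal sketch (the page URL and hrefs below are hypothetical, for illustration only):

```python
from urllib.parse import urljoin

# Hypothetical page URL, for illustration only
page = "https://travel.state.gov/content/travel/en/legal/visa-statistics/index.html"

# Root-relative href (the common case for the PDF links on the page)
print(urljoin(page, "/content/dam/visas/Statistics/example.pdf"))
# -> https://travel.state.gov/content/dam/visas/Statistics/example.pdf

# An already-absolute href passes through untouched, with no double base prefix
print(urljoin(page, "https://example.com/other.pdf"))
# -> https://example.com/other.pdf
```

Swapping `urljoin(base_url, link.get('href'))` into `download_all_pdf_links` would avoid accidental double slashes and handle absolute links without changing the rest of the function.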
### Download Single Example File
Here we have the url for a single pdf and then pass that url (`example_pdf`) to the `download_pdf` function.
```
# July 2021 Post file https://travel.state.gov/content/dam/visas/Statistics/Immigrant-Statistics/MonthlyIVIssuances/JULY%202021%20-%20IV%20Issuances%20by%20Post%20and%20Visa%20Class.pdf
example_pdf = (
"https://travel.state.gov/content/dam/visas/Statistics/"
"Immigrant-Statistics/MonthlyIVIssuances/"
"JULY%202021%20-%20IV%20Issuances%20by%20Post%20and%20Visa%20Class.pdf"
)
download_pdf(
example_pdf,
"July 2021 - IV Issuances by Post and Visa Class",
"../data/raw_source_files/state-dept/",
)
```
### Download all files
Now let's download all PDFs on the State Department Visa Statistics page. We will pass the base url for that page to the `download_all_pdf_links` function, and then save them out to our `"../data/raw_source_files/state-dept"` folder.
```
url = "https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics/immigrant-visa-statistics/monthly-immigrant-visa-issuances.html"
download_all_pdf_links(url, "../data/raw_source_files/state-dept")
```
----------------
## 2. Extract Data from PDFs
To extract structured data (in tabular format) from the PDFs we use a Python package called [tabula-py](https://tabula-py.readthedocs.io/en/latest/). This package is a wrapper around Tabula, a library written in the Java programming language that extracts tables from PDFs. We also use the ``PdfFileReader`` class from the PyPDF2 library to count the number of pages we need to process.
```
# Note: the function below is not fully generalizable, as it has hard-coded default column names
def get_table_data(path: str, data_cols: list = ["Post", "Visa Class", "Issuances"]):
"""
Parameters:
path: path to specific PDF file to extract data from
data_cols: what the output data columns should be.
if processing the Post tables it is most likely:
["Post", "Visa Class", "Issuances"],
if processing the FSC tables it is most likely
["FSC", "Visa Class", "Issuances"]
Returns:
Pandas dataframe of structured (tabular) data extracted from the PDF
path provided.
Example:
get_table_data(
'data-repo-mvp/state-dept/April 2018 - IV Issuances by FSC or Place of Birth and Visa Class.pdf',
data_cols = ["FSC", "Visa Class", "Issuances"]
)
"""
# Read the PDF to get basic info on it
pdf = PdfFileReader(path)
# Data Holders
full_table = pd.DataFrame(columns=data_cols) # Will hold the combined data
# Processing PDF - we start with the first page (start)
# and go to the last page (stop)
start = 1
stop = pdf.getNumPages() + 1
for i in range(start, stop):
# Extract data from the specific PDF page using Tabula
df = tabula.read_pdf(
path,
pages=f"{i}",
lattice=True,
pandas_options={
"header": None
}, # none because some have headers and some dont
)[0]
# Edge case error correction - sometimes fully null extra columns
# are produced by Tabula
if df.shape[1] > 3:
full_null = df.isnull().all()
full_null_index = full_null[full_null].index[0]
if full_null_index:
df = df.drop(full_null_index, axis=1)
else:
print(f"ERROR on portion of table: {path}")
df.columns = data_cols
# Check if we have headers, if so drop 2 top rows
if not str(df.iloc[1][data_cols][2]).replace(",", "").isdigit():
df = df.loc[2:, :]
# Append this page of data to the full table
full_table = pd.concat([full_table, df])
# Clean up and validate the full table
# We validate by comparing the grand total column in the PDF
# to the sum of visas in the extracted table
full_table = full_table.reset_index(drop=True)
grand_total = full_table[
full_table[data_cols[0]].str.upper().str.contains("GRAND TOTAL")
]
full_table = full_table.drop(grand_total.index, axis=0)
full_table.loc[:, "Issuances"] = (
full_table.Issuances.astype(str).str.replace(",", "").astype(int)
)
table_grand_total = full_table.Issuances.sum()
row_grand_total = int(grand_total.Issuances.sum().replace(",", ""))
assert (
table_grand_total == row_grand_total
), f"Warning - Grand Total Row Does Not Equal Sum of Rows {row_grand_total} vs {table_grand_total}"
print("Data successfully extracted.")
return full_table
def extract_data_for_specific_year_month(
pdf_folder_path: str, year: int, month: str, report: str
):
"""
Helper function that allows you to extract data from a SINGLE PDF by passing
a folder path where PDF files are located and then retrieve a specific PDF based on a
year, named month (for example April or May) and report type of either fsc or post being present in the
PDF file name.
Parameters:
pdf_folder: path to folder holding PDFs
year: year of data to extract
month: month of data to extract
report: (options) --> posts | fsc
Returns:
Pandas dataframe of structured (tabular) data extracted from the PDF
path provided.
Example:
extract_data_for_specific_year_month('state-dept', 2019, 'August', 'fsc')
"""
pdf_folder = Path(pdf_folder_path)
report = report.lower()
target_filepath = None
data_cols = (
["Post", "Visa Class", "Issuances"]
if report == "post"
else ["FSC", "Visa Class", "Issuances"]
)
for file in pdf_folder.iterdir():
fn = file.name.lower()
if str(year).lower() in fn and str(month).lower() in fn and report in fn:
target_filepath = file
break
if target_filepath and target_filepath.exists():
return get_table_data(str(target_filepath), data_cols=data_cols)
def extract_data_from_many_pdfs(pdf_folder_path, start_year, stop_year, report):
"""
Helper function that allows you to extract data from MANY PDFs of a single
report type (FSC, POST) by passing a folder path where PDF files are located
and then retrieve data on all PDFs within a time range (start year to stop year)
and the report type
Parameters:
pdf_folder (str): path to folder holding PDFs
start_year (int | str): start year of data to extract
stop_year (int | str): stop year of data to extract
report (str): (options) --> posts | fsc
Returns:
Pandas dataframe of structured (tabular) data extracted from the PDF
path provided.
Example:
extract_data_for_specific_year_month('state-dept', 2019, 'August', 'fsc')
"""
months = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
]
visa_raw_data = []
for year in range(start_year, stop_year + 1):
for month in months:
data = extract_data_for_specific_year_month(
pdf_folder_path, year, month, report
)
if data is not None:
data["source"] = f"{year}-{month}"
visa_raw_data.append(data)
print(year, month, "- Processed")
else:
print(year, month, "- Not Available")
out_df = pd.concat(visa_raw_data, axis=0).reset_index(drop=True)
out_df["year_month"] = pd.to_datetime(out_df.source)
return out_df
```
### Extract data for years
Below we assign our paths to variables instead of writing them out in the function call, just to make the code more readable. We also wrap the paths in ``Path()``, which provides convenient functionality for handling paths and folders.
```
downloaded_data_path = Path("../data/raw_source_files/state-dept/")
extracted_data_path = Path("../data/extracted_data/state-dept")
```
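As a quick aside, here is a minimal, self-contained sketch of the `Path` behaviour these cells rely on, run against a throwaway temporary directory rather than the real data folders:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "extracted_data" / "state-dept"
    out.mkdir(parents=True, exist_ok=True)      # create nested folders safely
    target = out / "raw_posts_extract.csv"      # '/' joins path components
    target.write_text("Post,Visa Class,Issuances\n")
    print(target.suffix)    # .csv
    print(target.exists())  # True
```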
Below we call a function that was written a few cells above. This function leverages some additional functions to process each pdf and pull out the table data, then combine them together.
We will first extract all the data from the PDFs from 2019-2021, for the "Post and Visa Class" PDFs.
### Getting Posts
**Note this may take about 20 minutes to run**
Also, if processing 2017 -> 2021, then it may take even longer.
```
posts_data_2019_2021 = extract_data_from_many_pdfs(
    downloaded_data_path,  # folder of downloaded PDFs
    2019,  # start year
    2021,  # end year
    "post",  # pdf type
)
```
**Now let's take a look at the data output**
We end up with a large table that has every row (Post, Visa Class, Issuances) from the PDFs aggregated together. We have also tagged each row with a `source` column indicating the year and month of the data, and created a date field from that info called `year_month`, which we can use to summarize the data.
```
posts_data_2019_2021.head()
```
### Getting FSC
**Note this may take about 20 minutes to run**
```
fsc_data_2019_2021 = extract_data_from_many_pdfs(
    downloaded_data_path, 2019, 2021, "fsc"
)
```
**Now take a look at the output data**
This looks very much like the post data above, but instead of having a customs post as the first column we have the foreign state of chargeability.
```
fsc_data_2019_2021.head()
```
### Export this data to csv
We can now call `to_csv` on each dataframe to save it out.
```
posts_data_2019_2021.to_csv(extracted_data_path / f"raw_posts_extract-{today_date}.csv")
fsc_data_2019_2021.to_csv(extracted_data_path / f"raw_fsc_extract-{today_date}.csv")
```
------------
## 3. Analyze / Summarize Data
Now that we have this data in a structured format, we will provide some examples of reformatting and summarizing it to make it more useful.
### Example 1: Get total visas by visa class per month for the Post data
```
summed_by_yearmonth_and_class_post = (
posts_data_2019_2021.groupby(["year_month", "Visa Class"]).sum().reset_index()
)
summed_by_yearmonth_and_class_post.pivot(
index="Visa Class", columns="year_month", values="Issuances"
).fillna(0)
```
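To make the groupby → pivot pattern concrete without running the full extract, here is a minimal sketch on a tiny synthetic frame with the same columns (the values are made up):

```python
import pandas as pd

# Tiny synthetic frame mirroring the extracted columns (illustrative values only)
toy = pd.DataFrame({
    "year_month": pd.to_datetime(["2021-07", "2021-07", "2021-08"]),
    "Visa Class": ["IR1", "IR1", "IR5"],
    "Issuances": [10, 5, 7],
})

# Sum issuances per (month, class), then pivot months into columns
summed = toy.groupby(["year_month", "Visa Class"]).sum().reset_index()
wide = summed.pivot(index="Visa Class", columns="year_month", values="Issuances").fillna(0)
print(wide)
```

The `fillna(0)` fills the month/class combinations that never appeared in the data, exactly as in the cells above.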
### Example 2: Get total visas by visa class per month for the FSC data
```
summed_by_yearmonth_and_class_fsc = (
fsc_data_2019_2021.groupby(["year_month", "Visa Class"]).sum().reset_index()
)
summed_by_yearmonth_and_class_fsc.pivot(
index="Visa Class", columns="year_month", values="Issuances"
).fillna(0)
```
### Example 3: Get total visas by visa class per month with simplified coding
The State Department uses many different visa class codes. From talking to experts in the field we understand that codes often change: new ones are added and old ones are removed. That said, many of these codes can be combined into general families of visas, which is helpful for analysis.
Below we have created an initial recoding of visas into a smaller number of classes, using a Python dictionary to recode the different classes.
An example of the recoding is:
```
"IR": {
"1a": ["IR1", "CR1", "IB1", "IW1", "VI5", "IW"],
"1b": ["IR2", "CR2", "IB2", "IB3", "IW2"],
"1c": ["IR5"],
"1d": ["IR3", "IR4", "IH3", "IH4"],
},
```
Here we are saying that `["IR1", "CR1", "IB1", "IW1", "VI5", "IW"]` can all be recoded to a higher class of `1a` or an even higher level of `IR`.
We created this recode dictionary with some help from experts in the field, but we may have made mistakes or assumptions, so treat this recode as an example only.
```
recodes = {
"IR": {
"1a": ["IR1", "CR1", "IB1", "IW1", "VI5", "IW"],
"1b": ["IR2", "CR2", "IB2", "IB3", "IW2"],
"1c": ["IR5"],
"1d": ["IR3", "IR4", "IH3", "IH4"],
},
"FSP": {
"2a": ["F11", "F12", "B11", "B12", "F1"],
"2b": [
"F21",
"F22",
"F23",
"F24",
"F25",
"C21",
"C22",
"C23",
"C24",
"C25",
"B21",
"B22",
"B23",
"B24",
"B25",
"FX",
"FX1",
"FX2",
"FX3",
"CX",
"CX1",
"CX2",
"CX3",
"BX1",
"BX2",
"BX3",
],
"2c": ["F31", "F32", "F33", "C31", "C32", "C33", "B31", "B32", "B33", "F3"],
"2d": ["F41", "F42", "F43", "F4"],
},
"EB": {
"3a": ["E11", "E12", "E13", "E14", "E15", "E1"],
"3b": ["E21", "E22", "E23", "E2"],
"3c": ["E31", "E32", "E34", "E35", "EW3", "EW4", "EW5", "E3", "EW"],
"3d": [
"BC1",
"BC2",
"BC3",
"SD1",
"SD2",
"SD3",
"SE1",
"SE2",
"SE3",
"SF1",
"SF2",
"SG1",
"SG2",
"SH1",
"SH2",
"SJ1",
"SJ2",
"SK1",
"SK2",
"SK3",
"SK4",
"SL1",
"SN1",
"SN2",
"SN3",
"SN4",
"SR1",
"SR2",
"SR3",
"BC",
"E4",
"SD",
"SE",
"SF",
"SG",
"SH",
"SJ",
"SK",
"SN",
"SR",
],
"3e": [
"C51",
"C52",
"C53",
"T51",
"T52",
"T53",
"R51",
"R52",
"R53",
"I51",
"I52",
"I53",
"C5",
"T5",
"R5",
"I5",
],
},
"DI": ["DV1", "DV2", "DV3", "DV"],
"Other": [
"AM",
"AM1",
"AM2",
"AM3",
"SC2",
"SI1",
"SI2",
"SI3",
"SM1",
"SM2",
"SM3",
"SQ1",
"SQ2",
"SQ3",
"SU2",
"SU3",
"SU5",
"SB1",
"SC",
"SI",
"SM",
"SQ",
"SU",
],
}
```
**Create a coding lookup based on the `recodes` dictionary above**
Now let's use some code to unpack these different recodings into a table format
```
unpack_codes = []
# iterate over the keys in the recode dictionary
for k in recodes:
next_level = recodes[k]
# if the value (next_level) is a dictionary then iterate over that as well
# this means that there is a sub level code such as `1a`
if isinstance(next_level, dict):
for sub_k in next_level:
unpack_codes += [[k, sub_k, val] for val in next_level[sub_k]]
else:
# if there are just detail values then we assign the `base_code`
# as the `sublevel code` as well
unpack_codes += [[k, k, val] for val in next_level]
coding_map = pd.DataFrame(
unpack_codes, columns=["base_code", "base_2_code", "detail_code"]
)
```
Below we see we have unpacked that information into a table with a row for each recode
The highest level is called the `base_code`, the sub code is called `base_2_code`, and the original code is called `detail_code`.
```
coding_map
```
**Assign simplified codes to the dataframe**
We can merge the visa issuance data to the coding map to create different summaries
**Using the FSC data**
```
summary_data = coding_map.merge(
fsc_data_2019_2021, left_on="detail_code", right_on="Visa Class", how="right"
)
summary_data.base_code = summary_data.base_code.fillna("NA")
summary_data.detail_code = summary_data.detail_code.fillna("NA")
fsc_data_2019_2021.shape
summary_data
```
**Create a pivot table of simplified visa classes over time - using least granular coding**
We'll first summarize with the base code. After running the cell below you can see the most general visa class coding along with sums by year and month.
```
base_code_summary_long = (
summary_data.groupby(["base_code", "year_month"]).Issuances.sum().reset_index()
)
print(base_code_summary_long.head())
base_code_summary_long.pivot(
index="base_code", columns="year_month", values="Issuances"
)
```
**Same as above but using the second level of coding as well**
```
base_code_summary_long = (
summary_data.groupby(["base_code", "base_2_code", "year_month"])
.Issuances.sum()
.reset_index()
)
print(base_code_summary_long.head())
base_code_summary_long_pivot = base_code_summary_long.pivot(
index=["base_code", "base_2_code"], columns="year_month", values="Issuances"
)
```
These summaries could then be exported to csv or excel using the `to_csv()` or `to_excel()` methods of the dataframe and used in additional analysis
```
base_code_summary_long_pivot.to_csv("../data/misc/state_dept_base_code_long_pivot.csv")
```
---------------------
## Appendix
# End
# Using the GrainSizeTools script through JupyterLab or the notebook: first steps
> IMPORTANT NOTE: This Jupyter notebook example only applies to GrainSizeTools v3.0+. Please check your script version before using this notebook. You will be able to reproduce all the results shown in this tutorial using the dataset provided with the script, the file ```data_set.txt```
## Running the script in Jupyter lab/notebooks
The first step is to execute the code to get all the functionalities. JupyterLab (or Jupyter notebooks) allows you to run any code using the following code snippet: ``%run + the Python file to run``. In this case you must set the full filepath that indicates where the file ``GrainSizeTools_script.py`` is located in your system. If the script was executed correctly you will see that all GrainSizeTools (GST) modules have been loaded correctly and a welcome message as follows:
```
%run C:/Users/marco/Documents/GitHub/GrainSizeTools/grain_size_tools/GrainSizeTools_script.py
```
---
## Get information on the GrainSizeTools methods
First, to get a list of the main methods type
```
get.functions_list()
```
The script is implemented around several modules. To access a method within a module you have to type the name of the module and then, separated by a dot, the name of the method. For example, to access the method ``qq_plot`` of the plot module you would write
```python
plot.qq_plot()
```
and then provide the required parameters within the parenthesis.
To access the methods within a module, type the module name plus the dot and hit the tab key and a complete list of methods will pop up.
### Get detailed information on methods
You can get detailed information about any method or function of the script in different ways. The first is through the console, using the character ``?`` before the method
```
?conf_interval
```
Another option in JupyterLab is to get the information interactively without having to call it from the console. To do this, right-click with the mouse and select "Show Contextual Help" from the menu. Now, every time you write a method in the interactive console, all the information will automatically appear in the "Contextual Help" window. In this case, you may prefer to rearrange the windows using drag and drop so that you can see the notebook and the contextual help in parallel.
---
# Importing tabular data
For this we use [Pandas](https://pandas.pydata.org/about/index.html), the de facto standard Python library for data analysis and manipulation of table-like datasets (CSV, Excel, or text files, among others). The library includes several tools for reading files and handling missing data, and when the GrainSizeTools script runs, pandas is imported as ```pd``` for general use.
Pandas methods to read data are all named ```pd.read_*``` where * is the file type. For example:
```python
pd.read_csv() # read csv or txt files, default delimiter is ','
pd.read_table() # read general delimited file, default delimiter is '\t' (TAB)
pd.read_excel() # read excel files
pd.read_html() # read HTML tables
# etc.
```
For other supported file types see https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html
The only mandatory argument for the reading methods is to define the path (local or URL) with the location of the file to be imported as follows.
```
# set the filepath, note that is enclosed in quotation marks
filepath = 'C:/Users/marco/Documents/GitHub/GrainSizeTools/grain_size_tools/DATA/data_set.txt'
# import the data
dataset = pd.read_table(filepath)
#display the data
dataset
```
Some important things to note about the code snippet used above are:
- We used the ``pd.read_table()`` method to import the file. By default, this method assumes that the data to import is stored in a text file separated by tabs. Alternatively you can use the ``pd.read_csv()`` method (note that csv means comma-separated values) and set the delimiter to ``'\t'`` as follows: ``pd.read_csv(filepath, sep='\t')``.
- When calling the variable ``dataset`` it returns a visualization of the imported dataset, which is a tabular-like dataset with 2661 entries and 11 columns with different grain properties.
In Python, these tabular-like objects are called (Pandas) *DataFrames* and allow flexible and easy-to-use data analysis. Just for checking:
```
# show the variable type
type(dataset)
```
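The delimiter point made above can be checked with an in-memory tab-separated table, so this sketch runs without the ```data_set.txt``` file; the two calls below are equivalent:

```python
import pandas as pd
from io import StringIO

# A tiny tab-separated table as a string, standing in for the real file
raw = "Area\tCirc.\n1.5\t0.9\n2.0\t0.8\n"

a = pd.read_table(StringIO(raw))          # default sep is '\t'
b = pd.read_csv(StringIO(raw), sep="\t")  # equivalent call
print(a.equals(b))  # True
```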
> 👉 Pandas' reading methods give you a lot of control over how a file is read. To keep things simple, I list the most commonly used arguments:
```python
sep # Delimiter/separator to use.
header # Row number(s) to use as the column names. By default it takes the first row as the column names (header=0). If there is no columns names in the file you must set header=None
skiprows # Number of lines to skip at the start of the file (an integer).
na_filter # Detect missing value markers. False by default.
sheet_name # Only excel files, the excel sheet name either a number or the full name.
```
> more details on Pandas csv read method: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
The GrainSizeTools script includes a method named ```get_filepath()``` to get the path of a file through a file selection dialog instead of directly writing it. This can be used in two ways:
```python
# store the path in a variable (here named filepath for convenience) and then use it when calling the read method
filepath = get_filepath()
dataset = pd.read_csv(filepath, sep='\t')
# use get_filepath() directly within the read method
dataset = pd.read_csv(get_filepath(), sep='\t')
```
Lastly, Pandas also allows you to import tabular data directly from the clipboard (i.e. data copied using copy-paste commands), for example after copying a table from a text file, an Excel spreadsheet, or a website:
```python
dataset = pd.read_clipboard()
```
---
## Basic data manipulation (using Pandas)
Let's first see what the data set looks like. Instead of calling the variable (as in the example before), we now use the ``head()`` and ``tail()`` methods so that it only shows us the first (or last) rows of the data set
```
dataset.head() # returns 5 rows by default, you can define any number within the parenthesis
```
The example dataset has 11 different columns (one without a name). To interact with one of the columns we must call its name in square brackets with the name in quotes as follows:
```
# get the column 'Area' and multiplied by two
dataset['Area'] * 2
```
If you want to remove one or more columns, you can do it with the ``drop()`` method. For example, let's remove the column without a name.
```
# Remove the column without a name from the DataFrame
dataset.drop(columns=' ', inplace=True)
dataset.head(3)
# If you want to remove more than one column pass a list of columns
dataset.drop(columns=['FeretX', 'FeretY'], inplace=True)
dataset.head(3)
```
### Create new columns
The example dataset does not contain any column with the grain diameters and therefore we have to estimate them. For example, assuming the data comes from a thin section, you can estimate the apparent diameters from the section areas using the equivalent circular diameter (ECD) formula which is
$ECD = 2 \cdot \sqrt{areas / \pi}$
we will call the new column ``'diameters'``
```
dataset['diameters'] = 2 * np.sqrt(dataset['Area'] / np.pi)
dataset.head()
```
You can see a new column named diameters.
> 👉 In the examples above we define the square root as ``np.sqrt``, the arithmetic mean as ``np.mean``, and pi as ``np.pi``. In this case, ``np.`` stands for NumPy, or numerical Python, a core package for scientific computing with Python, and the keyword after the dot is the method or the scientific value to be applied. If you write ``np.`` in the console and then press the TAB key, you will see a large list of available methods. In general, the method names are equivalent to those used in MATLAB or R, but always with the ``np.`` prefix.
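As a sanity check of the ECD formula: a circle of diameter $d$ has area $\pi (d/2)^2$, so converting areas back through the formula should recover $d$ exactly. A minimal sketch with made-up diameters:

```python
import numpy as np

# Made-up diameters; compute the circle areas, then invert with the ECD formula
d = np.array([2.0, 10.0, 36.0])
area = np.pi * (d / 2) ** 2
ecd = 2 * np.sqrt(area / np.pi)
print(np.allclose(ecd, d))  # True
```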
### A list of useful Pandas methods
Some things you might want to try (just copy-paste in interactive cells below and explore):
```python
# Reduction
dataset.mean() # estimate the mean for all columns
dataset['Area'].mean() # estimate the mean only for the column Area
dataset.std() # estimate the (Bessel corrected) standard deviation
dataset.median() # estimate the median
dataset.mad() # estimate the mean absolute deviation
dataset.var() # estimate the unbiased variace
dataset.sem() # estimate the standard error of the mean
dataset.skew() # estimate the sample skewness
dataset.kurt() # estimate the sample kurtosis
dataset.quantile() # estimate the sample quantile
# Information
dataset.describe() # generate descriptive statistics
dataset.info() # display info of the DataFrame
dataset.shape # (rows, columns); note this is an attribute, not a method
dataset.count() # number of non-null values
dataset.dropna() # remove missing values from the data
# writing to disk
dataset.to_csv(filename) # save as csv file, the filename must be within quotes
dataset.to_excel(filename) # save as excel file
```
```
# estimate the mean of all columns
dataset.mean()
# Generate descriptive statistics
dataset.describe()
dataset[['Circ.', 'Round', 'Solidity']].boxplot()
```
# 2. Bayes Rule
The main goal of this post is to dig a bit further into Bayes rule, from a purely probabilistic perspective! Before we begin I do want to make one note: a great deal of the power of Bayes Rule comes in the form of bayesian inference and bayesian statistics, which can be found in the statistics section. I would recommend reading both of those posts as well if you are interested, since they demonstrate the application of Bayes rule to real-world problems. If you have caught the bayesian bug at that point then I recommend reading my posts on Bayesian AB testing, found in the Machine Learning section.
One more thing to note: I am going to hold off on explaining the importance of Bayes Rule until the end, and its many use cases will in reality be spread throughout the aforementioned posts. Just another reason to go through them all. With that out of the way, let's begin!
## 1.1 Mathematical Definition
We worked with Bayes Rule briefly in the probability introduction, but just to recap, it can be derived as follows:
We know that the below statement represents the conditional probability of $A$ given $B$:
$$p(A \mid B)=\frac{p(A,B)}{p(B)}$$
And we also know that the opposite is also true:
$$p(B \mid A)=\frac{p(B,A)}{p(A)}$$
And since:
$$p(A,B)=p(B,A)$$
We can write:
$$p(A \mid B)=\frac{p(B \mid A)*p(A)}{p(B)}$$
Now, often times we may not have $p(B)$ directly, but this is just the marginal distribution of the joint probability $p(A,B)$, summed over all $p(A)$. It looks like:
$$p(B)=\sum_ip(A_i,B) = \sum_ip(B \mid A_i)*p(A_i)$$
If we are working with continuous distributions, sum turns into an integral.
Another way to think of this, is that the term on the bottom is just a normalization constant (Z) to ensure that the distribution sums to one.
$$p(A \mid B)=\frac{p(B \mid A)*p(A)}{Z}$$
Another way of saying this, is that they are proportional:
$$p(A \mid B)\propto p(B \mid A)*p(A)$$
Now this is a very powerful fact! Because the denominator ($p(B)$) does not depend on $A$, if we are simply trying to find the value of $A$ that maximizes the conditional probability of $p(A \mid B)$, we can ignore the denominator! In other words, this is used when we are trying to find the argmax of a distribution:
$$argmax_Ap(A \mid B)$$
So, we don't need to know the actual value of the probability, just the particular A that gives us the maximum probability. Because Z is independent of A:
$$argmax_Ap(A \mid B) = argmax_Ap(B \mid A)p(A)$$
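A tiny discrete sketch of this point, with made-up prior and likelihood values: the argmax is the same whether or not we divide by the normalizing constant.

```python
# Made-up prior p(A) and likelihood p(B|A) over three values of A
p_A = {"a1": 0.5, "a2": 0.3, "a3": 0.2}
p_B_given_A = {"a1": 0.1, "a2": 0.6, "a3": 0.4}

# Unnormalized posterior: p(B|A) * p(A)
unnorm = {a: p_B_given_A[a] * p_A[a] for a in p_A}

# Normalized posterior: divide by Z = p(B) = sum over A
Z = sum(unnorm.values())
posterior = {a: v / Z for a, v in unnorm.items()}

print(max(unnorm, key=unnorm.get))     # a2
print(max(posterior, key=posterior.get))  # a2 -- same argmax
```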
This leads us into one of the main uses for Bayes Rule.
## 1.2 Bayes Rule for Classification
In the context of the Bayes Classifier, $y$ represents the class, and $x$ represents the data.
$$p(y \mid x)=\frac{p(x \mid y)*p(y)}{p(x)}$$
We refer to $p(x \mid y)$ as the **generative distribution**, because it tells us what the features look like for a specific class y, which we are already given.
Note that while the Bayes classifier does make use of Bayes rule, it does NOT necessarily make use of bayesian statistics. For more information on exactly what that means, please see the posts on Bayesian Statistics. Again, the purpose of this post is really just to demonstrate its role when purely confined to basic probability problems.
## 2. Examples
### 2.1 The Monty Hall Problem
We are now going to go over a few brief examples where Bayes Rule can be applied in a simple probabilistic setting. First we can start with a very famous problem in probability known as **The Monty Hall Problem**. Imagine you are on a game show and you have to pick a door. There are 3 doors, and behind 1 of the doors there is a car, and behind the other two doors there are goats. Here is how the game works:
1. You pick a door (you do not get to see what is behind it) (door 1)
2. Monty Hall opens a door you didn't pick, always reveals a goat (door 2)
3. You are given a choice: stay with door 1, or switch to (door 3)
The big question is, which door should you choose?
#### 2.1.1 Which door should you choose?
So, remember, you choose door 1, and each probability is conditioned on this. We then define the following:
$$ C = \text{where the car really is}$$
$$ p(C=1) = p(C=2) = p(C=3) = 1/3$$
For example, $p(C=1)$ represents the probability that a car is behind door 1. We can then define the random variable $H$:
$$ H = \text{random variable to represent the door that Monty Hall opens}$$
We can assume he opens door 2 without loss of generality, since the problem is symmetric.
$$p(H=2 \mid C=1) = 0.5$$
Remember that you chose door 1. So if the car is behind door 1, he can choose either door 2 or 3 since they will each be a goat. If the car is behind door 2, he cannot open door 2, so the probability is 0:
$$ p(H=2 \mid C=2) = 0$$
Similarly, if the car is behind door 3, then monty hall has to open door 2, since that is the only door left with a goat:
$$p(H=2 \mid C=3) = 1$$
Now, What probability do we actually want? We want to know if we should stick with door 1 or switch to door 3. In other words we want to compare:
$$p(C=1 \mid H=2) \text{ vs. } p(C=3 \mid H=2)$$
Now, we can do that using bayes rule!
$$p(A \mid B)=\frac{p(B \mid A)*p(A)}{p(B)}$$
$$p(A \mid B)=\frac{p(B \mid A)*p(A)}{\sum_ip(B \mid A_i)*p(A_i)}$$
Where in our case:
$$A: C=3 \;, B: H=2$$
$$p(C=3 \mid H=2) = \frac{p(H=2 \mid C=3)p(C=3)}{p(H=2)}$$
$$p(C=3 \mid H=2) = \frac{p(H=2 \mid C=3)p(C=3)}{p(H=2 \mid C=1)p(C=1)+p(H=2 \mid C=2)p(C=2)+p(H=2 \mid C=3)p(C=3)}$$
$$p(C=3 \mid H=2) = \frac{\frac{1}{3}}{\frac{1}{2}*\frac{1}{3}+0*\frac{1}{3}+1*\frac{1}{3}} = \frac{2}{3}$$
And we can similarly show:
$$p(C=1 \mid H=2) = \frac{1}{3}$$
Hence, by the above application of Bayes Rule it is clear that we should always switch doors!
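We can also verify this result empirically with a quick Monte Carlo simulation of the game; the switching strategy should win about 2/3 of the time:

```python
import random

random.seed(0)
trials = 100_000
switch_wins = 0
for _ in range(trials):
    car = random.randint(1, 3)
    pick = 1  # we always pick door 1, without loss of generality
    # Monty opens a door that is neither our pick nor the car
    montys_options = [d for d in (1, 2, 3) if d != pick and d != car]
    opened = random.choice(montys_options)
    # Switching means taking the one remaining closed door
    switched = next(d for d in (1, 2, 3) if d != pick and d != opened)
    switch_wins += (switched == car)

print(switch_wins / trials)  # ~0.667
```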
#### 2.1.2 Mathematical Intuition
We can also think about the problem like so:
$$ p(C=1) = 1/3 $$
$$ p(C=2) = 1/3$$
$$ p(C=3) = 1/3$$
$$ p(C=2 \text{ or } C=3) = 2/3$$
Now let's say that we pick door 1, and Monty Hall opens door 2, showing us there is a goat behind it. We now know that $p(C=2) = 0$. In other words, Monty has **revealed certain information to us** that we did not have originally. Hence, our equation $p(C=2 \text{ or } C=3) = 2/3$ still remains true, which means that $p(C=3) = 2/3$ and $p(C=1) = 1/3$. So we want to pick door 3! Note the reason this happens is that once door 2 is opened, it is known and is no longer a random variable.
#### 2.1.3 Advanced Intuition
Now, this problem is often referred to as a **paradox**. The reason it is viewed as a paradox is because it violates general human intuition and common sense. Now, this section will touch on some more advanced topics such as **causal analysis** (which will be covered in later posts), but I would feel remiss if I did not add a few sentences on the topic.
In general, human intuition operates under the logic of **causation**, while data conform to the logic of probabilities and proportions. Paradoxes often arise when we misapply the rules we have learned in one realm to the other. In the case of the Monty Hall problem, the main thing needed to resolve this apparent paradox is that we must take into account _not only the data_, but also the _data generating process_ (the rules of the game). The main idea is as follows:
> The way that we obtain information is no less important than the information itself.
Based on the rules of the game, we can deduce the following: if we pick door 1, Monty cannot open door 1. However, he could have opened door 2. If, instead, he chooses to open door 3, it is more likely that he opened door 3 because he was forced to. This gives us more evidence than before that the car is behind door 2.
If we start wading into the waters of causation, we learn that our minds rebel at the possibility of a correlation without a causation, because we have been taught to associate the two since birth. Causeless correlation violates our common sense.
### 2.2 Imbalanced Classes
Let's look at another example of where Bayes' rule comes into play. Suppose we are doing disease testing: we take a blood sample, extract some features from it, and output whether or not that person has the disease. So we would have:
* Input: blood sample
* Output: Has disease, yes/no
Let's look at a realistic scenario. Most people are healthy most of the time. So, suppose that only 1% of the population has the disease. We can build a classifier that just predicts "no" every time. In other words, it doesn't learn anything, yet it is already correct in 99% of cases! Hence, accuracy is not always the best metric to use, because overall accuracy may not be what we actually care about.
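A tiny simulated example (hypothetical numbers, purely for illustration) makes the point concrete: a classifier that never predicts the disease still scores roughly 99% accuracy while finding no sick patients at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated population: roughly 1% have the disease.
y = (rng.random(10_000) < 0.01).astype(int)

# A "classifier" that always predicts healthy.
y_pred = np.zeros_like(y)

accuracy = (y_pred == y).mean()
tp = int(np.sum((y_pred == 1) & (y == 1)))  # it never finds a sick patient

print(f"accuracy = {accuracy:.3f}, true positives = {tp}")
```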
#### 2.2.1 So what should we measure?
What we actually want to measure is $p(predict=1 | disease=1)$. This is called the **true positive rate**. In medical terminology it is referred to as **sensitivity**. In information retrieval it is known as **hit rate** or **recall**.
We can solve for the above using Bayes' rule:
$$p(prediction=1 | disease=1) = \frac{p(prediction=1, disease=1)}{p(disease=1)}$$
Typically, we count 4 things:
1. **true positives** (you have the disease, and we predict you have the disease)
2. **true negatives** (you don't have the disease, and we predict you don't have the disease)
3. **false positives** (you don't have the disease, and we predict you have the disease)
4. **false negatives** (you have the disease, and we predict you don't have the disease)
<br>
||Prediction = 1 |Prediction = 0|
|-|--------------|--------------|
|**Disease = 1**|True Positive |False Negative |
|**Disease = 0**|False Positive |True Negative |
#### 2.2.2 Sensitivity
With that said, we can calculate sensitivity as follows:
$$p(prediction=1 | disease=1) = \frac{p(prediction=1, disease=1)}{p(disease=1)}$$
$$sensitivity = recall = \frac{TP}{TP+FN}$$
#### 2.2.3 Specificity
And we can then calculate the **specificity** (the true negative rate):
$$p(prediction=0 | disease=0) = \frac{p(prediction=0, disease=0)}{p(disease=0)}$$
$$specificity = \frac{TN}{TN+FP}$$
#### 2.2.4 Precision
Now, in information retrieval, rather than specificity, we are interested in **precision**.
$$precision = \frac{TP}{TP+FP}$$
What is this the probability of? Well, $TP$ can be defined as:
$$TP = p(prediction=1, disease=1)$$
And $TP + FP$:
$$TP+FP = p(prediction=1)$$
Which then looks like:
$$precision = \frac{TP}{TP+FP} = \frac{p(prediction=1, disease=1)}{p(prediction=1)}$$
Which equals:
$$p(disease=1|prediction=1) = \frac{p(prediction=1, disease=1)}{p(prediction=1)}$$
This is a useful measure! Just because your test results come back positive does not mean that you have the disease; generally, more testing is required. This will be explored further in the Bayesian statistics section.
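Putting the four counts together, a small sketch with made-up predictions (hypothetical labels, just for illustration) shows how sensitivity, specificity, and precision fall out of the confusion-matrix entries:

```python
import numpy as np

# Hypothetical labels and predictions for illustration.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

sensitivity = tp / (tp + fn)  # recall, p(pred=1 | disease=1)
specificity = tn / (tn + fp)  # p(pred=0 | disease=0)
precision   = tp / (tp + fp)  # p(disease=1 | pred=1)

print(f"{sensitivity:.3f} {specificity:.3f} {precision:.3f}")  # 0.667 0.857 0.667
```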
# Walmart data EDA
#### March 23, 2019
#### Luis Da Silva
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, LassoCV, ElasticNetCV
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score
from joblib import dump, load
import sys
sys.path.insert(0, 'D:\\OneDrive\\Git\\scikit-learn-helpers')
import sklearnHelpers as skh
```
# Read in all data
```
features = pd.read_csv('../data/features.csv')
stores = pd.read_csv('../data/stores.csv')
train = pd.read_csv('../data/train.csv')
test = pd.read_csv('../data/test.csv')
```
# "features" EDA
```
features.tail()
features['Date'] = pd.to_datetime(features['Date'], dayfirst=True)
features['Date'].describe()
features.isnull().sum()
# Find out where missing values are
missings = features[['Promotion1', 'Promotion2', 'Promotion3', 'Promotion4', 'Promotion5',
'CPI', 'Unemployment']].isnull()
missings['Date'] = features['Date']
n = 0
plt.figure(figsize=(15, 4))
for v in missings.drop('Date', axis=1):
    missings[v] += n
    missings[v].replace(n, np.nan, inplace=True)
    missings[v] += np.random.normal(0, 0.2, missings.shape[0])
    sns.scatterplot(data=missings, x='Date', y=v, label=v)
    n += 1
plt.axvline(x='11-02-2012', color='black')
plt.title('Points show where missing values are in time')
plt.xlim(('2010-02-05', '2013-07-26'))
plt.legend(loc='upper left')
plt.ylabel('')
cur_axes = plt.gca()
cur_axes.axes.get_yaxis().set_visible(False)
plt.savefig('../graphs/missingData.png')
plt.show()
# Average of missing values
features[['Promotion1', 'Promotion2', 'Promotion3', 'Promotion4', 'Promotion5',
'CPI', 'Unemployment']].isnull().mean()[:5].mean()
features.describe()
```
#### Adding holidays defined as important
```
def append_holiday(df, dates, name, lag=0):
    holy = {'Date': dates}
    holy = pd.DataFrame(holy)
    holy['Date'] = pd.to_datetime(holy['Date'])
    holy[name] = 1
    if lag != 0:
        holy['Date'] = holy['Date'].apply(lambda x: x - dt.timedelta(days=lag))
    df = df.merge(holy, on='Date', how='left')
    df[name].fillna(0, inplace=True)
    return df
dates = {'Superbowl': ('12/02/2010', '11/02/2011', '10/02/2012', '08/02/2013'),
'Labor': ('10/09/2010', '09/09/2011', '07/09/2012', '06/09/2013'),
'ThanksGiving': ('26/11/2010', '25/11/2011', '23/11/2012', '29/11/2013'),
'Christmas': ('31/12/2010', '30/12/2011', "28/12/2012", '27/12/2013')}
for event, date in dates.items():
    features = append_holiday(features, date, event)
    features = append_holiday(features, date, event + '_l7', 7)
    features = append_holiday(features, date, event + '_l14', 14)
    features = append_holiday(features, date, event + '_l-7', -7)
```
#### Promotions EDA
```
plt.figure(figsize=(20, 10))
for i in range(1,6):
    var = 'Promotion' + str(i)
    plt.subplot(5, 3, (i-1)*3+1)
    sns.distplot(features[var].dropna(), rug=True)
    plt.subplot(5, 3, (i-1)*3+2)
    sns.distplot(np.log(features[var].replace(0, 0.01)).dropna())
    plt.xlabel("log({})".format(var))
    plt.subplot(5, 3, i*3)
    sns.lineplot(features.index, features[var], ci=None)
    plt.xlim(['2011-10-01', '2014-01-01'])
plt.tight_layout()
plt.show()
features['Day'] = features['Date'].dt.day
features['Month'] = features['Date'].dt.month
features['Year'] = features['Date'].dt.year
features['Week'] = features['Date'].apply(lambda x: x.isocalendar()[1])
features['Date-1'] = features['Date'].apply(lambda x: x.replace(year= x.year-1))
features['Date-2'] = features['Date'].apply(lambda x: x.replace(year= x.year-2))
# Current year vs next year
plt.figure(figsize=(15,4))
for k in ['Date', 'Date-1']:
    sns.lineplot(data=features, y='Promotion1', x=k, label=k)
plt.xlim(('2011-11-01', '2012-11-01'))
plt.legend()
plt.show()
# Correlation heatmap
plt.figure(figsize=(12,10))
sns.heatmap(features.corr(), cmap='bwr', center=0)
plt.savefig('../graphs/featuresHeatmap.png')
plt.show()
for i in range(1,6):
    mean_promo = features.groupby('Store')['Promotion{}'.format(i)].mean()
    for p, name in ((75, 'High'), (25, 'Low')):
        p = np.percentile(mean_promo, p)
        p = pd.DataFrame(mean_promo >= p)
        p.columns = ['{}Promoter{}'.format(name, i)]
        p.reset_index(inplace=True)
        features = features.merge(p, on='Store', how='left')
```
#### Temperature
```
plt.figure(figsize=(15, 3))
sns.lineplot(features['Date'], features['Temperature'], ci=None)
plt.title('Temperature')
plt.xlim(['2010-01-01', '2014-01-01'])
plt.savefig('../graphs/temperature.png')
plt.show()
```
#### CPI
```
plt.figure(figsize=(15, 3))
sns.lineplot(features['Date'], features['CPI'], ci=None)
plt.xlim(['2010-01-01', '2014-01-01'])
plt.title('Consumer Price Index')
plt.savefig('../graphs/cpi.png')
plt.show()
features[features['CPI'].isnull()]['Date'].unique()
```
#### Unemployment
```
plt.figure(figsize=(15, 3))
sns.lineplot(features['Date'], features['Unemployment'], ci=None)
plt.xlim(['2010-01-01', '2014-01-01'])
plt.title('Unemployment')
plt.savefig('../graphs/unemployment.png')
plt.show()
```
# Model to fill missing values
### Now, add some predictive features
```
targets = ['Promotion{}'.format(i) for i in range(1,6)]
predictors =['Temperature', 'Fuel_Price', 'Promotion1', 'CPI', 'Unemployment', 'IsHoliday',
'Year', 'Month', 'ImportantHoliday'] + ['Store_{}'.format(i) for i in range(1, 46)] + \
['Week_{}'.format(i) for i in range(1, 53)]
def append_lag_price(df, promo, lag=-7):
    ndf = df.loc[:,['Date', 'Store', promo]]
    ndf2 = df[['Date', 'Store']]
    ndf['Date'] = ndf.loc[:,'Date'].apply(lambda x: x - dt.timedelta(days=lag))
    name = promo + str(lag)
    ndf.columns = ['Date', 'Store', name]
    return ndf2.merge(ndf, on=['Date', 'Store'], how='left')[name]

for i in (7, 14, 21):
    features['Promotion1'+str(i)] = append_lag_price(features, 'Promotion1', i)
features = pd.get_dummies(features, prefix=['Store', 'Week', 'Month'],
columns=['Store', 'Week', 'Month'])
features['IsHoliday'] = features['IsHoliday'].astype(int)
features_train = features[features['Date'] >= '11-05-2011'].fillna(0)
features_predict = features[features['Date'] < '11-05-2011']
```
## Train Random Forest Model, predict promotions and fill missing data
```
for i in range(3,6):
    target = 'Promotion{}'.format(i)

    # Prepare datasets
    drop_columns = ['Promotion{}'.format(j) for j in range(1,6) if j!=i] + \
                   ['HighPromoter{}'.format(j) for j in range(1,6) if j!=i] + \
                   ['LowPromoter{}'.format(j) for j in range(1,6) if j!=i] + \
                   ['Promotion17','Promotion114','Promotion121']
    temp = features_train.drop(drop_columns, axis=1)
    temp.dropna(inplace=True)
    y = temp[target]
    X = temp.drop([target, 'Date', 'Day', 'Year', 'Date-1', 'Date-2'], axis=1)

    # Train model
    rf = RandomForestRegressor(n_estimators=100, max_depth=None)
    param_grid = {'n_estimators':[50], 'max_depth':[50]}
    results = skh.tune_fit_model(X, y, rf, forward_selection=True, cv=3, verbose=True,
                                 scoring='neg_mean_squared_error', stopping=5, param_grid=param_grid)

    # Save results for later use
    tsubset = results['subset']
    dump(results, '../models/RandomForests_{}.joblib'.format(target))

    # Append predictions
    features_predict.loc[:,target] = results['model'].predict(features_predict[tsubset])
    print(target, 'finished.')
    print('-'*20)
    print()
# Consolidate new dataset
features = pd.concat([features_predict, features_train], sort=False)
features.to_csv('../data/Filled_features.csv')
# If re-running the notebook without fitting random forests
i=4
target = 'Promotion{}'.format(i)
drop_columns = ['Promotion{}'.format(j) for j in range(1,6) if j!=i] + \
['HighPromoter{}'.format(j) for j in range(1,6) if j!=i] + \
['LowPromoter{}'.format(j) for j in range(1,6) if j!=i] + \
['Promotion17','Promotion114','Promotion121']
temp = features_train.drop(drop_columns, axis=1)
temp.dropna(inplace=True)
y = temp[target]
X = temp.drop([target, 'Date', 'Day', 'Year', 'Date-1', 'Date-2'], axis=1)
results = load('../models/RandomForests_Promotion{}.joblib'.format(i))
tsubset = results['subset']
rf = results['model']
y_test = rf.predict(X[tsubset])
plt.figure(figsize=(15,4))
sns.barplot(x=tsubset, y=rf.feature_importances_)
plt.tight_layout()
plt.show()
filled_features = pd.read_csv('../data/Filled_features.csv').iloc[:,1:]
filled_features['Date'] = pd.to_datetime(filled_features['Date'], dayfirst=True)
filled_features = filled_features[filled_features['Date'] < '11-05-2011']
filled_features.head()
np.concatenate([filled_features['Promotion3'], y_test[mask]])
plt.figure(figsize=(15,3))
n = '21'
i = 4
mask = temp['Store_'+n] == 1
mask2 = features_train['Store_'+n] == 1
mask3 = filled_features['Store_'+n] == 1
y_to_plot = np.concatenate([filled_features[mask3]['Promotion'+str(i)], y_test[mask]])
date_to_plot = np.concatenate([filled_features[mask3]['Date'], temp['Date'][mask]])
sns.lineplot(date_to_plot, y_to_plot, label='Predicted', color='red')
sns.lineplot(features_train['Date'][mask2], features_train['Promotion'+str(i)][mask2], label='Real')
#plt.xlim(('2011-12-01', '2013-04-01'))
plt.title('Promotions {} for store {}'.format(i, n))
plt.legend()
plt.savefig('../graphs/promotions{}Store{}'.format(i,n))
plt.show()
mask = features_predict['Store'] == 'Store_25'
plt.plot(features_predict.loc[:,'Date'][mask], features_predict.loc[:,'Promotion1'][mask])
plt.show()
features['Store'] = features['Store_1']
for i in range(2,46):
    features.loc[features['Store_{}'.format(i)]==1,'Store'] = i
features.to_csv('../data/Filled_features.csv')
np.percentile(train.Weekly_Sales, (1, 50, 75, 90, 95, 99))
```
# "stores" EDA
```
stores.head()
stores.isnull().sum()
stores.shape
fig, ax = plt.subplots(figsize = (7, 7))
size = 0.3
counts = stores['Type'].value_counts()
sizes = stores.groupby('Type').sum()['Size (sq ft)']
count_labels = ["No. of {} stores".format(k) for k in stores['Type'].unique()]
sizes_labels = ["Total size of {} stores".format(k) for k in stores['Type'].unique()]
cmap = plt.get_cmap("tab20c")
outer_colors = cmap([0, 4, 8])
inner_colors = cmap([2, 6, 10])
ax.pie(counts, radius=1, colors=outer_colors,
wedgeprops=dict(width=size, edgecolor='w'),
autopct='%1.1f%%', pctdistance=0.85)
ax.pie(sizes, radius=1-size, colors=inner_colors,
wedgeprops=dict(width=size, edgecolor='w'),
autopct='%1.1f%%', pctdistance=0.75)
ax.set(aspect="equal", title='Distribution of Stores')
plt.legend(count_labels + sizes_labels)
plt.tight_layout()
plt.savefig('../graphs/distributionStores.png')
plt.show()
mask = stores['Size (sq ft)']<50000
stores[mask].sort_values('Size (sq ft)')
sns.pairplot(stores.loc[:,['Type', 'Size (sq ft)']], hue='Type', height=5)
plt.legend()
plt.show()
```
# "train" EDA
```
train.head()
train['Date'] = pd.to_datetime(train['Date'], dayfirst = True)
train.shape
train.groupby(['Store', 'Dept']).IsHoliday.sum().max()
train.isnull().sum()
train['IsHoliday'].mean()
train_per_week = train.groupby('Date').mean()
sns.distplot(train_per_week[~train_per_week['IsHoliday']]['Weekly_Sales'], label='Regular week')
sns.distplot(train_per_week[train_per_week['IsHoliday']]['Weekly_Sales'], rug=True, label='Holiday')
plt.legend()
plt.savefig('../graphs/holidayDist.png')
plt.show()
train['ImportantHoliday'] = np.logical_and(train['Weekly_Sales'] > 19000, train['IsHoliday'])
train[train['ImportantHoliday']].Date.unique()
train['Year'] = train.Date.dt.year
train['Month'] = train.Date.dt.month
train['Week'] = train.Date.dt.week
train['Year'].unique()
nyear = train['Year'].nunique()
plt.figure(figsize=(10,6))
for i, year in enumerate(train['Year'].unique()):
    if i == nyear-1:
        break
    plt.subplot(nyear - 1, 1, i+1)
    mask = np.logical_and(train['Week'] >= 50, train['Year'] == year)
    sns.violinplot(data=train[mask], y='Weekly_Sales', x='Week')
    plt.ylabel('Sales')
    plt.title('Year {}'.format(year))
plt.tight_layout()
plt.savefig('../graphs/christmasPerYear.png')
plt.show()
bydept = pd.pivot_table(data=train, index='Date', columns='Dept', values='Weekly_Sales', aggfunc='mean')
for i in (1, 14, 96):
    mask_a = np.logical_and(train['Store'] == 10, train['Dept'] == i)
    mask_b = np.logical_and(train['Store'] == 30, train['Dept'] == i)
    plt.figure(figsize=(15,3))
    sns.lineplot(data=bydept, x=bydept.index, y=i, label='Average')
    sns.lineplot(data=train[mask_a], x='Date', y='Weekly_Sales', label='Store 10')
    sns.lineplot(data=train[mask_b], x='Date', y='Weekly_Sales', label='Store 30')
    plt.ylabel('Sales')
    plt.title('Department '+str(i))
    plt.legend()
    plt.savefig('../graphs/avgDept{}.png'.format(i))
    plt.show()
for i in (1, 96):
    mask = np.logical_and(train['Store'] == 10, train['Dept'] == i)
    pd.plotting.autocorrelation_plot(train[mask]['Weekly_Sales'])
    plt.show()
sales_size = train.merge(stores, on='Store')
sales_size['SalesSize'] = sales_size['Weekly_Sales']/sales_size['Size (sq ft)']*52
sales_sqft = sales_size.groupby('Type')['SalesSize'].median()*52
sales_store = sales_size.groupby('Type')['Weekly_Sales'].median()*52
sns.violinplot(data=sales_size, x='Type', y='Weekly_Sales')
plt.ylabel('Weekly Sales')
plt.savefig('../graphs/weeklySales.png')
sns.violinplot(data=sales_size, x='Type', y='SalesSize')
plt.ylabel('Yearly Sales per Squared Feet')
plt.savefig('../graphs/yearlySalesSqf.png')
```
# "Test"
```
test['Date'] = pd.to_datetime(test['Date'], dayfirst = True)
test.Date.describe()
test.shape
test.isnull().sum()
test.Date.nunique()
```
# Merging
```
data = {}
for n, df in (('train', train), ('test', test)):
    df = df.merge(features.drop('IsHoliday', axis=1), on=['Date', 'Store'])
    df = df.merge(stores, on='Store')
    df.to_csv('../data/merged_{}_data.csv'.format(n))
    print(df.shape)
    data[n] = df
data['train'].head()
plt.figure(figsize=(7, 6))
sns.heatmap(data['train'].loc[:,'Weekly_Sales':].corr(), cmap='bwr', center=0)
plt.title("Full Train Dataset correlation heatmap")
plt.show()
```
# Testing a new contribution
```
import numpy as np
import pandas as pd
from deep_nilmtk.utils.templates import ExperimentTemplate
from deep_nilmtk.models.pytorch import Seq2Point
from deep_nilmtk.models.pytorch.layers import *
from deep_nilmtk.disaggregator import NILMExperiment
from deep_nilmtk.data.loader import GeneralDataLoader
import torch.nn as nn
DATA_PATH = '../../data/ukdale.h5'
EXPERIMENT_NAME = 'residual_seq2point'
RESULTS_PATH = '../../residual_seq2point'
```
## Defining the model
We will extend the Seq2Point model with a residual connection.
``` python
class seq2point_residual(Seq2Point):
    def __init__(self, params):
        # Define the layers of the model here.
        pass

    def forward(self, x):
        # Compute the model's output for a batch of inputs.
        y_pred = x
        return y_pred

    def step(self, batch):
        x, y = batch
        # Compute the loss and metrics for a training batch.
        return loss, mae

    def predict(self, model, test_dataloader):
        # Run inference over the test data loader.
        return results

    @staticmethod
    def get_template():
        params = {}
        return params
```
```
class residual_block(nn.Module):
    def __init__(self, in_size, hidden_size, out_size, filter_size=5):
        super(residual_block, self).__init__()
        self.conv = nn.Sequential(
            create_conv1(in_size, hidden_size, filter_size, bias=True,
                         stride=1, padding=(filter_size-1)//2),
            create_conv1(hidden_size, out_size, filter_size, bias=True,
                         stride=1, padding=(filter_size-1)//2),
            nn.ReLU())

    def forward(self, x):
        out = x + self.conv(x)
        return out


class ResidualSeq2Point(Seq2Point):
    def __init__(self, params):
        super(ResidualSeq2Point, self).__init__(params)
        self.enc_net = nn.Sequential(
            residual_block(self.in_size, hidden_size=32, out_size=50, filter_size=7),
            residual_block(50, hidden_size=16, out_size=50, filter_size=7),
            nn.AdaptiveAvgPool1d(self.pool_filter),
            nn.Flatten())
```
## Defining the data loader
The interface for defining a custom data loader is as follows:
```python
class new_nilm_loader(torch.utils.data.Dataset):
    def __init__(self, params):
        pass

    def __len__(self):
        pass

    def __getitem__(self, index):
        pass

    def __copy__(self):
        return self
```
Nonetheless, for the model considered here we can directly use the pre-defined data loader, as the model follows a seq2seq learning approach.
## Benchmarking with existing baselines
```
max_epochs = 5
template = ExperimentTemplate( data_path=DATA_PATH,
template_name='ukdale',
list_appliances=['washing machine'],
list_baselines_backends=[('Seq2Pointbaseline', 'pytorch')],
in_sequence=121,
out_sequence=1,
max_epochs=max_epochs)
res_seq2point_nilm = NILMExperiment({
"model_class": ResidualSeq2Point,
"loader_class": GeneralDataLoader,
"model_name": 'res_seq2point',
'backend':'pytorch',
'in_size': 121,
'out_size':1,
'custom_preprocess':None,
'feature_type':'mains',
'input_norm':'z-norm',
'target_norm':'z-norm',
'seq_type':'seq2point',
'point_position':'mid_position',
'learning_rate':10e-5,
'max_nb_epochs': max_epochs
})
template.extend_experiment({
'res_seq2point':res_seq2point_nilm
})
template.__print__()
template.run_template(EXPERIMENT_NAME,
RESULTS_PATH,
f'{RESULTS_PATH}/mlflow')
```
## Checking the results
```
template.extend_template()
```
## Coding Exercise #0702
### 1. Linear regression:
```
import numpy as np
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```
#### 1.1. Data:
```
# Training data.
# hours of study (X) vs test score (y).
study = np.array([ 3, 4.5, 6, 1.2, 2, 6.9, 6.7, 5.5]) # Explanatory variable: X
score = np.array([ 88, 85, 90, 80, 81, 92, 95, 90]) # Response variable: y
```
#### 1.2. Define Variables and Placeholders:
```
b1 = tf.Variable(1.0) # A constant initial value.
b0 = tf.Variable(1.0) # A constant initial value.
X_ph = tf.placeholder(tf.float32, shape=(None)) # We don't need to fix the number of observations.
y_ph = tf.placeholder(tf.float32, shape=(None)) # We can just leave the size = None.
```
#### 1.3. Define the model:
```
y_model = b0 + b1*X_ph # Simple linear regression model.
```
#### 1.4. Define the loss function and the optimization method:
```
loss = tf.reduce_sum(tf.square(y_ph - y_model)) # L2 loss.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
# optimizer = tf.train.MomentumOptimizer(learning_rate = 0.001, momentum=0.9) # Momentum optimizer.
train = optimizer.minimize(loss) # Define training.
init = tf.global_variables_initializer() # Define Variable initialization.
```
#### 1.5. Training and Testing:
```
n_epochs = 5000  # Number of epochs (gradient descent steps).
with tf.Session() as sess:
    # Variables initialization.
    sess.run(init)
    # Training.
    my_feed = {X_ph:study, y_ph:score}  # Prepare feed data as a dictionary.
    for i in range(n_epochs):
        sess.run(train, feed_dict = my_feed)
    b0_model, b1_model = sess.run([b0, b1])  # Get the final values of the Variables.
    # Testing.
    mse = tf.reduce_mean(tf.square(y_ph - y_model))  # Define the test metric.
    mse_value = sess.run(mse, feed_dict = my_feed)  # Calculate the in-sample MSE.
```
#### 1.6. Display the result:
```
print("Parameters b0 = {:5.3f} , b1 = {:5.3f}".format(b0_model, b1_model))
print("MSE = {:5.3f}".format(mse_value))
print("RMSE = {:5.3f}".format(np.sqrt(mse_value)))
```
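As a sanity check (my addition, not part of the original exercise), the gradient-descent estimates can be compared against the closed-form least-squares solution computed directly with NumPy on the same `study`/`score` data:

```python
import numpy as np

study = np.array([3, 4.5, 6, 1.2, 2, 6.9, 6.7, 5.5])
score = np.array([88, 85, 90, 80, 81, 92, 95, 90])

# Closed-form simple linear regression: b1 = cov(X, y) / var(X).
sx = study - study.mean()
sy = score - score.mean()
b1 = np.sum(sx * sy) / np.sum(sx**2)
b0 = score.mean() - b1 * study.mean()

rmse = np.sqrt(np.mean((score - (b0 + b1 * study))**2))
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}, RMSE = {rmse:.3f}")
```

After 5000 gradient steps, the TensorFlow estimates should land close to these closed-form values.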
#### 1.7. Prediction:
```
# Define the testing data.
study_new = np.array([2.5, 3.3, 4.2]).reshape(-1,1)
X_ph = tf.placeholder(tf.float32, shape=(study_new.size,1))
y_model = b0_model + b1_model*X_ph # Define the prediction model.
with tf.Session() as sess:
    my_feed = {X_ph:study_new}
    y_pred_value = sess.run(y_model, feed_dict = my_feed)
# Predicted y values.
print(y_pred_value)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Neural Machine Translation with Luong Attention - Tensorflow 2.0
_Notebook originally contributed by: [chunml](https://github.com/chunml)_
<table class="tfo-notebook-buttons" align="left">
<td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/nn_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/tree/master/community/en/nn_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
In this notebook, we will create a neural machine translation model using Sequence-To-Sequence learning and Luong attention mechanism. We will walkthrough the following steps:
- Create a vanilla Seq2Seq model that can overfit a tiny dataset of 20 English-French sentence pairs
- Train the vanilla Seq2Seq model on the full dataset
- Introduce Luong attention mechanism, compare results and visualize attention heatmaps
For a more detailed explanation on Luong attention mechanism and implementation, refer to my blog post at: [Neural Machine Translation With Attention Mechanism](https://machinetalk.org/2019/03/29/neural-machine-translation-with-attention-mechanism/)
## Setup
First, we need to install Tensorflow 2.0.
```
import tensorflow as tf
assert tf.__version__.startswith('2')
```
## Import and prepare data
Next, let's import the necessary packages and define the tiny dataset for overfitting. The tiny dataset contains only 20 pairs of English - French sentences, which were extracted from the original dataset we will use later on.
```
import numpy as np
import unicodedata
import re
raw_data = (
('What a ridiculous concept!', 'Quel concept ridicule !'),
('Your idea is not entirely crazy.', "Votre idée n'est pas complètement folle."),
("A man's worth lies in what he is.", "La valeur d'un homme réside dans ce qu'il est."),
('What he did is very wrong.', "Ce qu'il a fait est très mal."),
("All three of you need to do that.", "Vous avez besoin de faire cela, tous les trois."),
("Are you giving me another chance?", "Me donnez-vous une autre chance ?"),
("Both Tom and Mary work as models.", "Tom et Mary travaillent tous les deux comme mannequins."),
("Can I have a few minutes, please?", "Puis-je avoir quelques minutes, je vous prie ?"),
("Could you close the door, please?", "Pourriez-vous fermer la porte, s'il vous plaît ?"),
("Did you plant pumpkins this year?", "Cette année, avez-vous planté des citrouilles ?"),
("Do you ever study in the library?", "Est-ce que vous étudiez à la bibliothèque des fois ?"),
("Don't be deceived by appearances.", "Ne vous laissez pas abuser par les apparences."),
("Excuse me. Can you speak English?", "Je vous prie de m'excuser ! Savez-vous parler anglais ?"),
("Few people know the true meaning.", "Peu de gens savent ce que cela veut réellement dire."),
("Germany produced many scientists.", "L'Allemagne a produit beaucoup de scientifiques."),
("Guess whose birthday it is today.", "Devine de qui c'est l'anniversaire, aujourd'hui !"),
("He acted like he owned the place.", "Il s'est comporté comme s'il possédait l'endroit."),
("Honesty will pay in the long run.", "L'honnêteté paye à la longue."),
("How do we know this isn't a trap?", "Comment savez-vous qu'il ne s'agit pas d'un piège ?"),
("I can't believe you're giving up.", "Je n'arrive pas à croire que vous abandonniez."),
)
```
## Preprocess the data
The raw data can't be used yet. We need to apply some preprocessing steps such as:
- Convert strings from unicode to ascii
- Add space before punctuation
- Filter out unwanted tokens
- Add *start* and *end* tokens to target sentences
```
def unicode_to_ascii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn')

def normalize_string(s):
    s = unicode_to_ascii(s)
    s = re.sub(r'([!.?])', r' \1', s)
    s = re.sub(r'[^a-zA-Z.!?]+', r' ', s)
    s = re.sub(r'\s+', r' ', s)
    return s
raw_data_en, raw_data_fr = list(zip(*raw_data))
raw_data_en, raw_data_fr = list(raw_data_en), list(raw_data_fr)
raw_data_en = [normalize_string(data) for data in raw_data_en]
raw_data_fr_in = ['<start> ' + normalize_string(data) for data in raw_data_fr]
raw_data_fr_out = [normalize_string(data) + ' <end>' for data in raw_data_fr]
```
## Tokenize the raw data
Since neural networks only accept numeric arrays as input, we need to tokenize our data, i.e. convert the sentences to integer sequences. This task can be done easily with **Keras** preprocessing utility class.
```
en_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
en_tokenizer.fit_on_texts(raw_data_en)
data_en = en_tokenizer.texts_to_sequences(raw_data_en)
data_en = tf.keras.preprocessing.sequence.pad_sequences(data_en,
padding='post')
fr_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
fr_tokenizer.fit_on_texts(raw_data_fr_in)
fr_tokenizer.fit_on_texts(raw_data_fr_out)
data_fr_in = fr_tokenizer.texts_to_sequences(raw_data_fr_in)
data_fr_in = tf.keras.preprocessing.sequence.pad_sequences(data_fr_in,
padding='post')
data_fr_out = fr_tokenizer.texts_to_sequences(raw_data_fr_out)
data_fr_out = tf.keras.preprocessing.sequence.pad_sequences(data_fr_out,
padding='post')
```
## Create tf.data.Dataset
Next, we need to create an instance of tf.data.Dataset, which will help us create batches and iterate over the entire dataset.
```
BATCH_SIZE = 5
dataset = tf.data.Dataset.from_tensor_slices(
(data_en, data_fr_in, data_fr_out))
dataset = dataset.shuffle(20).batch(BATCH_SIZE)
```
## Create the Encoder
Now that we have done with the data preparation step, let's move on to creating the model. The vanilla Seq2Seq model consists of an Encoder and a Decoder.
The Encoder only has an embedding layer and an RNN layer (can be either vanilla RNN, LSTM or GRU). We also need a method to initialize the state (zero-state).
```
class Encoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_size, lstm_size):
        super(Encoder, self).__init__()
        self.lstm_size = lstm_size
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_size)
        self.lstm = tf.keras.layers.LSTM(
            lstm_size, return_sequences=True, return_state=True)

    def call(self, sequence, states):
        embed = self.embedding(sequence)
        output, state_h, state_c = self.lstm(embed, initial_state=states)
        return output, state_h, state_c

    def init_states(self, batch_size):
        return (tf.zeros([batch_size, self.lstm_size]),
                tf.zeros([batch_size, self.lstm_size]))
```
## Create the Decoder
Next, we will create the Decoder. Basically, the Decoder is similar to the Encoder, with an additional Dense layer to map the RNN output to vocabulary space.
```
class Decoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_size, lstm_size):
        super(Decoder, self).__init__()
        self.lstm_size = lstm_size
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_size)
        self.lstm = tf.keras.layers.LSTM(
            lstm_size, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def call(self, sequence, state):
        embed = self.embedding(sequence)
        lstm_out, state_h, state_c = self.lstm(embed, state)
        logits = self.dense(lstm_out)
        return logits, state_h, state_c
```
## Build the model
We will not use Keras' *compile* and *fit* methods this time, so we have to manually build the model. The easiest way to achieve that is to feed in some dummy data. We can also print out the outputs' shapes for debugging purpose.
```
en_vocab_size = len(en_tokenizer.word_index) + 1
fr_vocab_size = len(fr_tokenizer.word_index) + 1
EMBEDDING_SIZE = 32
LSTM_SIZE = 64
encoder = Encoder(en_vocab_size, EMBEDDING_SIZE, LSTM_SIZE)
decoder = Decoder(fr_vocab_size, EMBEDDING_SIZE, LSTM_SIZE)
initial_states = encoder.init_states(1)
encoder_outputs = encoder(tf.constant([[1, 2, 3]]), initial_states)
decoder_outputs = decoder(tf.constant([[1, 2, 3]]), encoder_outputs[1:])
```
## Create the loss function and the optimizer
Next, let's create the loss function. Since we apply zero padding to the source and target sequences, we need to mask those zeros out when computing the loss.
As for the optimizer, we will use Adam with default setting.
```
def loss_func(targets, logits):
    crossentropy = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True)
    mask = tf.math.logical_not(tf.math.equal(targets, 0))
    mask = tf.cast(mask, dtype=tf.int64)
    loss = crossentropy(targets, logits, sample_weight=mask)
    return loss
optimizer = tf.keras.optimizers.Adam()
```
## The train_step function
Technically, that's what we call a function which runs a full iteration, i.e. a forward pass followed by a backward pass. Since Tensorflow 2.0, we can use the *@tf.function* decorator to explicitly put a particular piece of code in static graph execution. If you want to debug, don't forget to remove it.
```
@tf.function
def train_step(source_seq, target_seq_in, target_seq_out, en_initial_states):
with tf.GradientTape() as tape:
en_outputs = encoder(source_seq, en_initial_states)
en_states = en_outputs[1:]
de_states = en_states
de_outputs = decoder(target_seq_in, de_states)
logits = de_outputs[0]
loss = loss_func(target_seq_out, logits)
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
```
## The predict function
It's always a good idea to see how well the model does during training, since monitoring only the loss values doesn't tell us much. Basically, the predict function does just a forward pass. However, on the Decoder's side, we start with the *start* token; at every subsequent time step, the output of the previous step is used as the input of the current step.
```
def predict(test_source_text=None):
if test_source_text is None:
test_source_text = raw_data_en[np.random.choice(len(raw_data_en))]
print(test_source_text)
test_source_seq = en_tokenizer.texts_to_sequences([test_source_text])
print(test_source_seq)
en_initial_states = encoder.init_states(1)
en_outputs = encoder(tf.constant(test_source_seq), en_initial_states)
de_input = tf.constant([[fr_tokenizer.word_index['<start>']]])
de_state_h, de_state_c = en_outputs[1:]
out_words = []
while True:
de_output, de_state_h, de_state_c = decoder(
de_input, (de_state_h, de_state_c))
de_input = tf.argmax(de_output, -1)
out_words.append(fr_tokenizer.index_word[de_input.numpy()[0][0]])
if out_words[-1] == '<end>' or len(out_words) >= 20:
break
print(' '.join(out_words))
```
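The feed-the-previous-output-back-in loop is easier to see in isolation. Below is a toy sketch with a hypothetical vocabulary and a hand-wired step function standing in for the decoder (all token ids and transitions here are made up for illustration):

```python
import numpy as np

# Hypothetical vocabulary and transitions: <start> -> bonjour -> monde -> <end>
VOCAB = {1: '<start>', 2: 'bonjour', 3: 'monde', 4: '<end>'}
TRANSITIONS = {1: 2, 2: 3, 3: 4}

def step(prev_id):
    """Stand-in for one decoder step: scores over a 5-token vocabulary."""
    scores = np.zeros(5)
    scores[TRANSITIONS.get(prev_id, 4)] = 1.0
    return scores

def greedy_decode(max_len=20):
    token = 1  # begin with the <start> token
    out = []
    while True:
        # The previous output becomes the next input (greedy decoding).
        token = int(np.argmax(step(token)))
        out.append(VOCAB[token])
        if out[-1] == '<end>' or len(out) >= max_len:
            break
    return out
```

Running `greedy_decode()` yields `['bonjour', 'monde', '<end>']`, mirroring the structure of the `while True` loop in the predict function above.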
## The training loop
Here comes the training loop. We will train for 300 epochs and print out the loss value together with the translation of a random English sentence (from the training data). Doing so will help us notice if something goes wrong along the way.
At first, the model makes only nonsensical translations, but we can see that it keeps getting better over time.
```
NUM_EPOCHS = 300
for e in range(NUM_EPOCHS):
en_initial_states = encoder.init_states(BATCH_SIZE)
predict()
for batch, (source_seq, target_seq_in, target_seq_out) in enumerate(dataset.take(-1)):
loss = train_step(source_seq, target_seq_in,
target_seq_out, en_initial_states)
print('Epoch {} Loss {:.4f}'.format(e + 1, loss.numpy()))
```
## Let's do a full inference
Finally, let's have the model translate all 20 training pairs. We can see that at this point, the model has completely learned by heart the training data, which is what we wanted at the beginning.
```
test_sents = (
'What a ridiculous concept!',
'Your idea is not entirely crazy.',
"A man's worth lies in what he is.",
'What he did is very wrong.',
"All three of you need to do that.",
"Are you giving me another chance?",
"Both Tom and Mary work as models.",
"Can I have a few minutes, please?",
"Could you close the door, please?",
"Did you plant pumpkins this year?",
"Do you ever study in the library?",
"Don't be deceived by appearances.",
"Excuse me. Can you speak English?",
"Few people know the true meaning.",
"Germany produced many scientists.",
"Guess whose birthday it is today.",
"He acted like he owned the place.",
"Honesty will pay in the long run.",
"How do we know this isn't a trap?",
"I can't believe you're giving up.",
)
for test_sent in test_sents:
test_sequence = normalize_string(test_sent)
predict(test_sequence)
```
## Let's train Seq2Seq on the full dataset
Now, we have created a fully functional Seq2Seq model. Let's use it to train on the full dataset, which contains approximately 160K English-French pairs. That's huge!
But don't worry: we don't have to make any changes to the current workflow. We only need to download the full dataset, tweak the networks' hyperparameters, reinitialize everything, and we are ready to train.
The training is gonna take a (long) while, so be patient!
```
import requests
import os
from zipfile import ZipFile
MODE = 'train'
URL = 'http://www.manythings.org/anki/fra-eng.zip'
FILENAME = 'fra-eng.zip'
BATCH_SIZE = 64
EMBEDDING_SIZE = 256
LSTM_SIZE = 512
NUM_EPOCHS = 15
# ================= DOWNLOAD AND READ THE DATA ======================
def maybe_download_and_read_file(url, filename):
if not os.path.exists(filename):
session = requests.Session()
response = session.get(url, stream=True)
CHUNK_SIZE = 32768
with open(filename, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk:
f.write(chunk)
zipf = ZipFile(filename)
with zipf.open('fra.txt') as f:
lines = f.read()
return lines
lines = maybe_download_and_read_file(URL, FILENAME)
lines = lines.decode('utf-8')
raw_data = []
for line in lines.split('\n'):
raw_data.append(line.split('\t'))
raw_data = raw_data[:-1]
# ================= TOKENIZATION AND ZERO PADDING ===================
raw_data_en, raw_data_fr = zip(*raw_data)
raw_data_en = [normalize_string(data) for data in raw_data_en]
raw_data_fr_in = ['<start> ' + normalize_string(data) for data in raw_data_fr]
raw_data_fr_out = [normalize_string(data) + ' <end>' for data in raw_data_fr]
en_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
en_tokenizer.fit_on_texts(raw_data_en)
data_en = en_tokenizer.texts_to_sequences(raw_data_en)
data_en = tf.keras.preprocessing.sequence.pad_sequences(data_en,
padding='post')
fr_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
fr_tokenizer.fit_on_texts(raw_data_fr_in)
fr_tokenizer.fit_on_texts(raw_data_fr_out)
data_fr_in = fr_tokenizer.texts_to_sequences(raw_data_fr_in)
data_fr_in = tf.keras.preprocessing.sequence.pad_sequences(data_fr_in,
padding='post')
data_fr_out = fr_tokenizer.texts_to_sequences(raw_data_fr_out)
data_fr_out = tf.keras.preprocessing.sequence.pad_sequences(data_fr_out,
padding='post')
dataset = tf.data.Dataset.from_tensor_slices(
(data_en, data_fr_in, data_fr_out))
dataset = dataset.shuffle(len(raw_data_en)).batch(
BATCH_SIZE, drop_remainder=True)
# ======================== BUILD THE MODEL ==========================
en_vocab_size = len(en_tokenizer.word_index) + 1
encoder = Encoder(en_vocab_size, EMBEDDING_SIZE, LSTM_SIZE)
initial_state = encoder.init_states(1)
test_encoder_output = encoder(tf.constant(
[[1, 23, 4, 5, 0, 0]]), initial_state)
fr_vocab_size = len(fr_tokenizer.word_index) + 1
decoder = Decoder(fr_vocab_size, EMBEDDING_SIZE, LSTM_SIZE)
de_initial_state = test_encoder_output[1:]
test_decoder_output = decoder(tf.constant(
[[1, 3, 5, 7, 9, 0, 0, 0]]), de_initial_state)
# ================ ADD GRADIENT CLIPPING OPTION =====================
optimizer = tf.keras.optimizers.Adam(clipnorm=5.0)
# ================== THIS NEEDS TO BE RE-RUN ========================
@tf.function
def train_step(source_seq, target_seq_in, target_seq_out, en_initial_states):
with tf.GradientTape() as tape:
en_outputs = encoder(source_seq, en_initial_states)
en_states = en_outputs[1:]
de_states = en_states
de_outputs = decoder(target_seq_in, de_states)
logits = de_outputs[0]
loss = loss_func(target_seq_out, logits)
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
# ===================== THE TRAINING LOOP ===========================
for e in range(NUM_EPOCHS):
en_initial_states = encoder.init_states(BATCH_SIZE)
predict()
for batch, (source_seq, target_seq_in, target_seq_out) in enumerate(dataset.take(-1)):
loss = train_step(source_seq, target_seq_in,
target_seq_out, en_initial_states)
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(
e + 1, batch, loss.numpy()))
```
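For reference, the `pad_sequences(..., padding='post')` calls above simply right-pad each tokenized sentence with zeros up to the longest sequence in the batch. A minimal pure-Python equivalent:

```python
def pad_post(sequences, value=0):
    """Right-pad token-id lists to equal length, like pad_sequences(padding='post')."""
    max_len = max(len(s) for s in sequences)
    return [s + [value] * (max_len - len(s)) for s in sequences]

pad_post([[5, 3], [7, 2, 9, 1]])  # [[5, 3, 0, 0], [7, 2, 9, 1]]
```

Those trailing zeros are exactly what the mask in the loss function filters out.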
## It's time for a test
So after a while, our vanilla Seq2Seq has completed its training. Let's see how well it can translate. For the sake of simplicity, we will use the same 20 English-French pairs that we defined above.
```
test_sents = (
'What a ridiculous concept!',
'Your idea is not entirely crazy.',
"A man's worth lies in what he is.",
'What he did is very wrong.',
"All three of you need to do that.",
"Are you giving me another chance?",
"Both Tom and Mary work as models.",
"Can I have a few minutes, please?",
"Could you close the door, please?",
"Did you plant pumpkins this year?",
"Do you ever study in the library?",
"Don't be deceived by appearances.",
"Excuse me. Can you speak English?",
"Few people know the true meaning.",
"Germany produced many scientists.",
"Guess whose birthday it is today.",
"He acted like he owned the place.",
"Honesty will pay in the long run.",
"How do we know this isn't a trap?",
"I can't believe you're giving up.",
)
for test_sent in test_sents:
test_sequence = normalize_string(test_sent)
predict(test_sequence)
```
Hmm, somewhat acceptable, I think. But obviously, there is a lot of room for improvement. We're gonna look at the vanilla Seq2Seq's problems and see what we can do.
## Let's talk about Attention Mechanisms
As we saw from the results above, while the vanilla Seq2Seq can overfit a small dataset, it struggles to learn from a much larger one, especially on long sequences.
A solution for that? Well, what if every time step within the Decoder gained access to the Encoder's output and could decide which part to focus on? That's the idea behind attention mechanisms.
## What is an attention layer made of?
So, we understand the idea behind having an attention mechanism. Let's see how it works from the inside. Technically, we need to compute the following:
- The alignment vector
- The context vector
The alignment vector has the same length as the source sequence. Each of its values is the score (or the probability) of the corresponding word within the source sequence.
The context vector, on the other hand, is simply the weighted average of the Encoder's output. It will then be used by the Decoder to compute the final output.
If you want a fancier explanation, please refer to my blog post (link above). I have a bunch of images there.
Below is how we implement Luong attention in Python.
```
class LuongAttention(tf.keras.Model):
def __init__(self, rnn_size, attention_func):
super(LuongAttention, self).__init__()
self.attention_func = attention_func
if attention_func not in ['dot', 'general', 'concat']:
raise ValueError(
'Unknown attention score function! Must be either dot, general or concat.')
if attention_func == 'general':
# General score function
self.wa = tf.keras.layers.Dense(rnn_size)
elif attention_func == 'concat':
# Concat score function
self.wa = tf.keras.layers.Dense(rnn_size, activation='tanh')
self.va = tf.keras.layers.Dense(1)
def call(self, decoder_output, encoder_output):
if self.attention_func == 'dot':
# Dot score function: decoder_output (dot) encoder_output
# decoder_output has shape: (batch_size, 1, rnn_size)
# encoder_output has shape: (batch_size, max_len, rnn_size)
# => score has shape: (batch_size, 1, max_len)
score = tf.matmul(decoder_output, encoder_output, transpose_b=True)
elif self.attention_func == 'general':
# General score function: decoder_output (dot) (Wa (dot) encoder_output)
# decoder_output has shape: (batch_size, 1, rnn_size)
# encoder_output has shape: (batch_size, max_len, rnn_size)
# => score has shape: (batch_size, 1, max_len)
score = tf.matmul(decoder_output, self.wa(
encoder_output), transpose_b=True)
elif self.attention_func == 'concat':
# Concat score function: va (dot) tanh(Wa (dot) concat(decoder_output + encoder_output))
# Decoder output must be broadcasted to encoder output's shape first
decoder_output = tf.tile(
decoder_output, [1, encoder_output.shape[1], 1])
# Concat => Wa => va
# (batch_size, max_len, 2 * rnn_size) => (batch_size, max_len, rnn_size) => (batch_size, max_len, 1)
score = self.va(
self.wa(tf.concat((decoder_output, encoder_output), axis=-1)))
# Transpose score vector to have the same shape as other two above
# (batch_size, max_len, 1) => (batch_size, 1, max_len)
score = tf.transpose(score, [0, 2, 1])
# alignment a_t = softmax(score)
alignment = tf.nn.softmax(score, axis=2)
        # context vector c_t is the weighted average of the encoder output
context = tf.matmul(alignment, encoder_output)
return context, alignment
```
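As a sanity check, the dot-score path of the layer above — score, softmax alignment, weighted-average context — can be reproduced in a few lines of NumPy (the shapes here, batch of 1, source length 4, rnn_size 3, are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
encoder_output = rng.normal(size=(1, 4, 3))  # (batch, max_len, rnn_size)
decoder_output = rng.normal(size=(1, 1, 3))  # (batch, 1, rnn_size)

# Dot score: (batch, 1, max_len)
score = decoder_output @ encoder_output.transpose(0, 2, 1)

# Alignment: softmax over the source positions, so it sums to 1.
alignment = np.exp(score) / np.exp(score).sum(axis=2, keepdims=True)

# Context: weighted average of the encoder output, back to (batch, 1, rnn_size).
context = alignment @ encoder_output
```

The alignment vector tells us how much weight each source position receives; the context vector is what the Decoder actually consumes.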
## Update the Decoder to use Luong Attention
In order to use the attention above, we need to make a couple of small changes. We will start off with the Decoder.
```
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_size, rnn_size, attention_func):
super(Decoder, self).__init__()
self.attention = LuongAttention(rnn_size, attention_func)
self.rnn_size = rnn_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_size)
self.lstm = tf.keras.layers.LSTM(
rnn_size, return_sequences=True, return_state=True)
self.wc = tf.keras.layers.Dense(rnn_size, activation='tanh')
self.ws = tf.keras.layers.Dense(vocab_size)
def call(self, sequence, state, encoder_output):
# Remember that the input to the decoder
# is now a batch of one-word sequences,
# which means that its shape is (batch_size, 1)
embed = self.embedding(sequence)
# Therefore, the lstm_out has shape (batch_size, 1, rnn_size)
lstm_out, state_h, state_c = self.lstm(embed, initial_state=state)
# Use self.attention to compute the context and alignment vectors
# context vector's shape: (batch_size, 1, rnn_size)
# alignment vector's shape: (batch_size, 1, source_length)
context, alignment = self.attention(lstm_out, encoder_output)
# Combine the context vector and the LSTM output
# Before combined, both have shape of (batch_size, 1, rnn_size),
# so let's squeeze the axis 1 first
# After combined, it will have shape of (batch_size, 2 * rnn_size)
lstm_out = tf.concat(
[tf.squeeze(context, 1), tf.squeeze(lstm_out, 1)], 1)
# lstm_out now has shape (batch_size, rnn_size)
lstm_out = self.wc(lstm_out)
# Finally, it is converted back to vocabulary space: (batch_size, vocab_size)
logits = self.ws(lstm_out)
return logits, state_h, state_c, alignment
```
## Rebuild the model
Next, let's rebuild the model. Notice that we must now pass the Encoder's output to the Decoder.
```
# Set the score function to compute alignment vectors
# Can choose between 'dot', 'general' or 'concat'
ATTENTION_FUNC = 'concat'
encoder = Encoder(en_vocab_size, EMBEDDING_SIZE, LSTM_SIZE)
decoder = Decoder(fr_vocab_size, EMBEDDING_SIZE, LSTM_SIZE, ATTENTION_FUNC)
# These lines can be used for debugging purpose
# Or can be seen as a way to build the models
initial_state = encoder.init_states(1)
encoder_outputs = encoder(tf.constant([[1]]), initial_state)
decoder_outputs = decoder(tf.constant(
[[1]]), encoder_outputs[1:], encoder_outputs[0])
```
## Modify the train_step function
Next, we need to modify the training function to compute the Decoder's output one step at a time, which means we need an explicit loop.
```
@tf.function
def train_step(source_seq, target_seq_in, target_seq_out, en_initial_states):
loss = 0
with tf.GradientTape() as tape:
en_outputs = encoder(source_seq, en_initial_states)
en_states = en_outputs[1:]
de_state_h, de_state_c = en_states
# We need to create a loop to iterate through the target sequences
for i in range(target_seq_out.shape[1]):
# Input to the decoder must have shape of (batch_size, length)
# so we need to expand one dimension
decoder_in = tf.expand_dims(target_seq_in[:, i], 1)
logit, de_state_h, de_state_c, _ = decoder(
decoder_in, (de_state_h, de_state_c), en_outputs[0])
# The loss is now accumulated through the whole batch
loss += loss_func(target_seq_out[:, i], logit)
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss / target_seq_out.shape[1]
```
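Note that the loop above uses teacher forcing: at step *i* the decoder is fed the ground-truth token `target_seq_in[:, i]`, not its own previous prediction. A minimal illustration of the input/output pairing:

```python
# Teacher forcing in miniature: the decoder input at each step is the
# ground-truth previous token, even if the model's last prediction was wrong.
target_seq_in = [['<start>', 'bonjour', 'monde']]
target_seq_out = [['bonjour', 'monde', '<end>']]

steps = []
for i in range(len(target_seq_out[0])):
    decoder_in = target_seq_in[0][i]  # ground truth fed in
    expected = target_seq_out[0][i]   # token the loss compares against
    steps.append((decoder_in, expected))
# steps == [('<start>', 'bonjour'), ('bonjour', 'monde'), ('monde', '<end>')]
```

Teacher forcing keeps training stable and parallel-friendly; only at inference time (in the predict function) does the model consume its own outputs.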
## Modify the predict function
The change to make in the predict function is simple: we need to collect and return the alignment vectors. They will later be used to generate attention heatmaps for visualization.
```
def predict(test_source_text=None):
if test_source_text is None:
test_source_text = raw_data_en[np.random.choice(len(raw_data_en))]
print(test_source_text)
test_source_seq = en_tokenizer.texts_to_sequences([test_source_text])
print(test_source_seq)
en_initial_states = encoder.init_states(1)
en_outputs = encoder(tf.constant(test_source_seq), en_initial_states)
de_input = tf.constant([[fr_tokenizer.word_index['<start>']]])
de_state_h, de_state_c = en_outputs[1:]
out_words = []
alignments = []
while True:
de_output, de_state_h, de_state_c, alignment = decoder(
de_input, (de_state_h, de_state_c), en_outputs[0])
de_input = tf.expand_dims(tf.argmax(de_output, -1), 0)
out_words.append(fr_tokenizer.index_word[de_input.numpy()[0][0]])
alignments.append(alignment.numpy())
if out_words[-1] == '<end>' or len(out_words) >= 20:
break
print(' '.join(out_words))
return np.array(alignments), test_source_text.split(' '), out_words
```
## Let's train Seq2Seq with Luong attention
The training loop is basically the same, although I recommend wrapping the calls to the predict function in a *try-except* block to prevent a crash when the model predicts the padding index 0, which has no corresponding word.
```
for e in range(NUM_EPOCHS):
en_initial_states = encoder.init_states(BATCH_SIZE)
for batch, (source_seq, target_seq_in, target_seq_out) in enumerate(dataset.take(-1)):
loss = train_step(source_seq, target_seq_in,
target_seq_out, en_initial_states)
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(
e + 1, batch, loss.numpy()))
try:
predict()
predict("How are you today ?")
except Exception:
continue
```
## Let's make some translations
Similar to what we did with the vanilla Seq2Seq, it's time to see how the model with Luong attention translates the same 20 pairs of English-French sentences.
Furthermore, we can also visualize where the model focused when making each translation.
```
import matplotlib.pyplot as plt
import imageio
if not os.path.exists('heatmap'):
os.makedirs('heatmap')
test_sents = (
'What a ridiculous concept!',
'Your idea is not entirely crazy.',
"A man's worth lies in what he is.",
'What he did is very wrong.',
"All three of you need to do that.",
"Are you giving me another chance?",
"Both Tom and Mary work as models.",
"Can I have a few minutes, please?",
"Could you close the door, please?",
"Did you plant pumpkins this year?",
"Do you ever study in the library?",
"Don't be deceived by appearances.",
"Excuse me. Can you speak English?",
"Few people know the true meaning.",
"Germany produced many scientists.",
"Guess whose birthday it is today.",
"He acted like he owned the place.",
"Honesty will pay in the long run.",
"How do we know this isn't a trap?",
"I can't believe you're giving up.",
)
filenames = []
for i, test_sent in enumerate(test_sents):
test_sequence = normalize_string(test_sent)
alignments, source, prediction = predict(test_sequence)
attention = np.squeeze(alignments, (1, 2))
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='jet')
ax.set_xticklabels([''] + source, rotation=90)
ax.set_yticklabels([''] + prediction)
filenames.append('heatmap/test_{}.png'.format(i))
plt.savefig('heatmap/test_{}.png'.format(i))
plt.close()
with imageio.get_writer('translation_heatmaps.gif', mode='I', duration=2) as writer:
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
```
Subjectively, the model with Luong attention does make better translations than the vanilla model. For a more rigorous comparison, you can compute the BLEU scores of the two models.
As for the GIF image, we can download it to our local machine and open it with any program of our choice.
```
from google.colab import files
files.download('translation_heatmaps.gif')
```
## Final words
And that is that. We have finished creating a neural machine translation model with the renowned Seq2Seq architecture and a Luong-style attention mechanism.
For a more detailed explanation, feel free to jump to my blog post at [Neural Machine Translation With Attention Mechanism](https://machinetalk.org/2019/03/29/neural-machine-translation-with-attention-mechanism/).
If you have any problems, don't hesitate to let me know. Thank you for reading such a long post.
## Reference
- Neural Machine Translation With Attention Mechanism: [link](https://machinetalk.org/2019/03/29/neural-machine-translation-with-attention-mechanism/)
- Effective Approaches to Attention-based Neural Machine Translation paper (Luong attention): [link](https://arxiv.org/abs/1508.04025)
- Tensorflow Neural Machine Translation with (Bahdanau) Attention tutorial: [link](https://www.tensorflow.org/alpha/tutorials/sequences/nmt_with_attention)
- Luong’s Neural Machine Translation repository: [link](https://github.com/tensorflow/nmt)
SAM008 - Spark using azdata
===========================
Description
-----------
### Parameters
```
spark_statement = "2+2"
max_tries_for_ready_state = 50
```
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportabilty, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
        if exit_code_workaround != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
try:
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
j = load_json("sam008-spark-using-azdata.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
# rules that have 9 elements are the injected (output) rules (the ones we want). Rules
# with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029,
# not ../repair/tsg029-nb-name.ipynb)
if len(rule) == 9:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
                    print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{3}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['no such host', 'TSG011 - Restart sparkhistory server', '../repair/tsg011-restart-sparkhistory-server.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']], 'azdata': [['azdata login', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', 
'../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb'], 'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']}
```
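The `retry_hints` / `error_hints` tables above are consulted when a command's stderr is scanned. As a minimal, self-contained sketch (hypothetical helper name, trimmed copy of the data), the lookup amounts to a substring scan over the entries for a tool:

```python
# Hypothetical, trimmed copy of the error_hints table defined above.
error_hints_demo = {
    'kubectl': [
        ['no such host', 'TSG010 - Get configuration contexts',
         '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'],
    ],
}

def find_hint(tool, stderr_line, hints):
    """Return (title, notebook) for the first hint whose trigger substring
    occurs in the stderr line, or None if nothing matches."""
    for trigger, title, notebook in hints.get(tool, []):
        if trigger in stderr_line:
            return title, notebook
    return None

print(find_hint('kubectl', 'dial tcp: lookup foo: no such host', error_hints_demo))
```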
### Get the Kubernetes namespace for the big data cluster
Get the namespace of the Big Data Cluster using the kubectl command line
interface.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
    namespace = os.environ["AZDATA_NAMESPACE"]
else:
    try:
        namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
    except:
        from IPython.display import Markdown
        print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
        display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
        raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```
### Get the controller username and password
Get the controller username and password from the Kubernetes Secret
Store and place in the required AZDATA\_USERNAME and AZDATA\_PASSWORD
environment variables.
```
# Place controller secret in AZDATA_USERNAME/AZDATA_PASSWORD environment variables
import os, base64
os.environ["AZDATA_USERNAME"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.username}}', return_output=True)
os.environ["AZDATA_USERNAME"] = base64.b64decode(os.environ["AZDATA_USERNAME"]).decode('utf-8')
os.environ["AZDATA_PASSWORD"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.password}}', return_output=True)
os.environ["AZDATA_PASSWORD"] = base64.b64decode(os.environ["AZDATA_PASSWORD"]).decode('utf-8')
print(f"Controller username '{os.environ['AZDATA_USERNAME']}' and password stored in environment variables")
```
### Create a Spark Session
```
import os
import secrets
import json
session_name = secrets.token_urlsafe(16).replace("-", "_") # session name can't start with a '-' (when passed in with azdata)
print(session_name)
session_create = run(f'azdata bdc spark session create --name "{session_name}" --session-kind pyspark', return_output=True)
print(session_create)
session_create_json = json.loads(session_create)
print(session_create_json)
```
### Wait for Spark Session to finish starting
```
import json
session_id = session_create_json["id"]
state = "starting"
counter = 0
while state == "starting":
    session_state = run(f'azdata bdc spark session state --session-id {session_id}', return_output=True)
    print(session_state)
    session_state_json = json.loads(session_state)
    print(session_state_json)
    state = session_state_json["state"]
    counter = counter + 1
    if counter == max_tries_for_ready_state:
        raise SystemExit(f'Session has not moved out of starting state (after {max_tries_for_ready_state} attempts)')
if state == "dead" or state == "killed":
    display(Markdown(f'HINT: Use [TSG034 - Livy logs](../log-analyzers/tsg034-get-livy-logs.ipynb) to resolve this issue.'))
    raise SystemExit(f"Session moved from 'starting' to '{state}' state")
print(f"Session successfully moved out of 'starting' state to '{state}'")
```
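The wait loop above is an instance of a generic poll-until-state pattern. A self-contained sketch of the same logic (all names hypothetical, with a fake check standing in for the `azdata` call):

```python
import time

def poll_until(check, done_states, max_tries, delay_s=0.0):
    """Call `check()` up to `max_tries` times until it returns a state in
    `done_states`; raise SystemExit otherwise (mirroring the cell above)."""
    for _ in range(max_tries):
        state = check()
        if state in done_states:
            return state
        time.sleep(delay_s)
    raise SystemExit(f'state did not settle after {max_tries} attempts')

# Hypothetical session that reports 'idle' on the third poll
states = iter(['starting', 'starting', 'idle'])
print(poll_until(lambda: next(states), {'idle', 'dead', 'killed'}, max_tries=5))  # prints: idle
```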
### Create a Spark Statement
```
import json
statement_create = run(f'azdata bdc spark statement create --code "{spark_statement}" --session-id {session_id}', return_output=True)
statement_create_json = json.loads(statement_create)
print (statement_create_json)
statement_id = statement_create_json["id"]
```
### Wait for Spark Statement to complete
```
import json
statement_state = "waiting"
counter = 0
while statement_state == "waiting":
    statement_info = run(f'azdata bdc spark statement info --session-id {session_id} --statement-id {statement_id}', return_output=True)
    print(statement_info)
    statement_info_json = json.loads(statement_info)
    print(statement_info_json)
    statement_state = statement_info_json["state"]
    counter = counter + 1
    if counter == 25:
        raise SystemExit('Statement has not moved out of waiting state')
print(f'Statement completed successfully. Output: {statement_info_json["output"]["data"]["text/plain"]}')
```
### Get the Spark log for the session
```
run(f"azdata bdc spark session log --session-id {session_id}")
```
### Delete the Spark session
```
run(f"azdata bdc spark session delete --session-id {session_id}")
print('Notebook execution complete.')
```
```
# Install default libraries
import pathlib
import sys
# Import installed modules
import pandas as pd
import numpy as np
import imageio
from tqdm import tqdm
# Import the Python script from the auxiliary folder
sys.path.insert(1, "../auxiliary")
import data_fetch
# Set a local download path and the URL to the 67P shape model data set
dl_path = "../kernels/dsk/"
# Set dictionary with 2 different resolutions
comet_models = {"low": "ROS_CG_M001_OSPCLPS_N_V1.OBJ",
"high": "ROS_CG_M004_OSPGDLR_N_V1.OBJ"}
# Which model?
model_type = "high"
# Shape model DL
dl_url = f"https://naif.jpl.nasa.gov/pub/naif/ROSETTA/kernels/dsk/{comet_models[model_type]}"
# If file not present: download it!
if not pathlib.Path(f"../kernels/dsk/{comet_models[model_type]}").is_file():
    # Download the shape model, create (if needed) the download path and store the data set
    data_fetch.download_file(dl_path, dl_url)
# Load the shape model. The first column lists whether the row is a vertex or face. The second,
# third and fourth column list the coordinates (vertex) and vertex indices (faces)
comet_67p_shape_obj = pd.read_csv(f"../kernels/dsk/{comet_models[model_type]}", \
delim_whitespace=True, \
names=["TYPE", "X1", "X2", "X3"])
# Print some shape model information
print("Some statistics and parameters of the shape model")
print(f"Rows and columns of the data set: {comet_67p_shape_obj.shape}")
print(f"Number of vertices: {comet_67p_shape_obj.loc[comet_67p_shape_obj['TYPE']=='v'].shape[0]}")
print(f"Number of faces: {comet_67p_shape_obj.loc[comet_67p_shape_obj['TYPE']=='f'].shape[0]}")
# Print some example extractions from the vertex and face subsets
print("Vertices (Sample)")
print(f"{comet_67p_shape_obj.loc[comet_67p_shape_obj['TYPE']=='v'].head()}")
print("\n")
print("Faces (Sample)")
print(f"{comet_67p_shape_obj.loc[comet_67p_shape_obj['TYPE']=='f'].head()}")
# Assign the vertices and faces
vertices = comet_67p_shape_obj.loc[comet_67p_shape_obj["TYPE"] == "v"][["X1", "X2", "X3"]].values \
.tolist()
faces = comet_67p_shape_obj.loc[comet_67p_shape_obj["TYPE"] == "f"][["X1", "X2", "X3"]].values
# Print the minimum and maximum vertex indices in the face sub set
print(f"Minimum vertex index in faces: {np.min(faces)}")
print(f"Maximum vertex index in faces: {np.max(faces)}")
# The index in the faces sub set starts at 1. For Python, it needs to start at 0.
faces = faces - 1
# Convert the indices to integer
faces = faces.astype(int)
# Convert the numpy array to a Python list
faces = faces.tolist()
# Now we need to define a main window class that is needed to set a window size / resolution.
# Based on the QT4 example:
# https://github.com/almarklein/visvis/blob/master/examples/embeddingInQt4.py
from PyQt5.QtWidgets import QWidget, QHBoxLayout
# Import visvis
import visvis as vv
# Define the class
class MainWindow(QWidget):
    def __init__(self, *args):
        QWidget.__init__(self, *args)
        self.fig = vv.backends.backend_pyqt5.Figure(self)
        self.sizer = QHBoxLayout(self)
        self.sizer.addWidget(self.fig._widget)
        self.setLayout(self.sizer)
        self.setWindowTitle("Rosetta")
        self.show()
# Create visvis application
app = vv.use()
app.Create()
# Create main window frame and set a resolution.
main_w = MainWindow()
main_w.resize(1200, 800)
# Create the 3D shape model as a mesh. verticesPerFace equals 3 since triangles define the
# mesh's surface in this case
vv.mesh(vertices=vertices, faces=faces, verticesPerFace=3)
# Get axes objects
axes = vv.gca()
# Set a black background
axes.bgcolor = "black"
# Deactivate the grid and make the x, y, z axes invisible
axes.axis.showGrid = False
axes.axis.visible = False
# Set some camera settings
# Please note: if you want to "fly" around the comet with w, a, s, d (translation) and i, j, k, l
# (tilt), replace "3d" with "fly"
axes.camera = "3d"
# Field of view in degrees
axes.camera.fov = 60
# Set default azimuth and elevation angle in degrees
axes.camera.azimuth = 120
axes.camera.elevation = 25
# ... and run the application!
app.Run()
# Now let's create an animation
# Create visvis application
app = vv.use()
app.Create()
# Create main window frame and set a resolution.
main_w = MainWindow()
main_w.resize(500, 400)
# Create the 3D shape model as a mesh. verticesPerFace equals 3 since triangles define the
# mesh's surface in this case
shape_obj = vv.mesh(vertices=vertices, faces=faces, verticesPerFace=3)
shape_obj.specular = 0.0
shape_obj.diffuse = 0.9
# Get figure
figure = vv.gcf()
# Get axes objects and set figure parameters
axes = vv.gca()
axes.bgcolor = (0, 0, 0)
axes.axis.showGrid = False
axes.axis.visible = False
# Set camera settings
#
axes.camera = "3d"
axes.camera.fov = 60
axes.camera.zoom = 0.1
# Turn off the main light
axes.light0.Off()
# Create a fixed light source
light_obj = axes.lights[1]
light_obj.On()
light_obj.position = (5.0, 5.0, 5.0, 0.0)
# Empty list that will collect all images of the comet's rotation
comet_images = []
# Rotate the camera through 360 steps in azimuth
for azm_angle in tqdm(range(360)):
    # Change azimuth angle of the camera
    axes.camera.azimuth = float(azm_angle)
    # Draw the axes and figure
    axes.Draw()
    figure.DrawNow()
    # Get the current image
    temp_image = vv.getframe(vv.gca())
    # Append the current image as 8-bit integers
    comet_images.append((temp_image*255).astype(np.uint8))
# Save the images as an animated GIF
imageio.mimsave("Comet67P.gif", comet_images, duration=0.04)
```
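One detail above that is easy to get wrong is the OBJ face indexing: vertex indices in `.OBJ` files are 1-based, while Python indexing is 0-based. A self-contained check of the shift applied above, on a hypothetical 2-face array:

```python
import numpy as np

# Hypothetical faces referencing a 4-vertex list, as read from an OBJ file (1-based)
toy_faces = np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
toy_faces = (toy_faces - 1).astype(int)  # same shift as in the cell above

# After the shift the indices are valid positions into a 4-element vertex list
assert toy_faces.min() == 0 and toy_faces.max() == 3
print(toy_faces.tolist())  # [[0, 1, 2], [1, 2, 3]]
```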
# Text generation
```
import github_command as gt
gt.push(file_to_transfer="TD7_Text_Generation_With_LSTM.ipynb",
message="beam search",
repos="TDs_ESILV.git")
```
## Load Packages
```
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
!pip install nltk
import nltk
from nltk.text import Text
nltk.download('gutenberg')
alice = nltk.corpus.gutenberg.words('carroll-alice.txt')
# load ascii text and convert to lowercase
#filename = "wonderland.txt"
#raw_text = open(filename, 'r', encoding='utf-8').read()
#raw_text = raw_text.lower()
raw_text = " ".join(alice).lower()
# create mapping of unique chars to integers
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
# summarize the loaded data
n_chars = len(raw_text)
n_vocab = len(chars)
print("Total Characters: {}".format(n_chars))
print("Total Vocab: {}".format(n_vocab))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length, 1):
    seq_in = raw_text[i:i + seq_length]
    #print(seq_in)
    seq_out = raw_text[i + seq_length]
    #print(seq_out)
    dataX.append([char_to_int[char] for char in seq_in])
    #print(dataX)
    dataY.append(char_to_int[seq_out])
    #print(dataY)
n_patterns = len(dataX)
print("Total Patterns: {}".format(n_patterns))
###
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (n_patterns, seq_length, 1))
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
```
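The sliding-window construction above is easier to see on a toy string. A self-contained sketch (hypothetical text, `seq_length` of 3) of the same input/target pairing:

```python
text = "hello world"
chars = sorted(set(text))
char_to_int = {c: i for i, c in enumerate(chars)}
seq_length = 3

dataX, dataY = [], []
for i in range(len(text) - seq_length):
    dataX.append([char_to_int[c] for c in text[i:i + seq_length]])  # input window
    dataY.append(char_to_int[text[i + seq_length]])                 # next character

# 11 characters with a window of 3 yield 8 (input, target) pairs
print(len(dataX), len(dataY))  # 8 8
```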
### Define the LSTM model
```
X.shape, y.shape
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
```
### Define the checkpoint
```
X.shape
filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
hist = model.fit(X, y, epochs=1, batch_size=500, callbacks=[checkpoint])
int_to_char = dict((i, c) for i, c in enumerate(chars))
import sys
# pick a random seed
start = numpy.random.randint(0, len(dataX)-1)
pattern = dataX[start]
print("Seed:")
print( "\"", ''.join([int_to_char[value] for value in pattern]), "\"")
# generate characters
for i in range(1000):
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    sys.stdout.write(result)
    pattern.append(index)
    pattern = pattern[1:len(pattern)]
print("\nDone.")
```
## Load Model
```
weights_file = "weights-improvement-01-2.4514.hdf5"
from keras.models import load_model
model = load_model(weights_file)
model.summary()
```
## Sampling from the Softmax
```
def sample_from_softmax(preds):
    import numpy as np
    preds = np.reshape(preds, -1)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
sample_from_softmax([0,0.2,0.8])
# pick a random seed
start = numpy.random.randint(0, len(dataX)-1)
pattern = dataX[start]
print("Seed:")
print( "\"", ''.join([int_to_char[value] for value in pattern]), "\"")
# generate characters
for i in range(1000):
    # take the sequence (<=> pattern) and reshape it
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    # normalize
    x = x / float(n_vocab)
    # predict next character
    prediction = model.predict(x, verbose=0)
    # sample from softmax output for a little bit of variance
    index = sample_from_softmax(prediction)
    # transform index to char from dict
    result = int_to_char[index]
    # show back the pattern to the user
    seq_in = [int_to_char[value] for value in pattern]
    sys.stdout.write(result)
    pattern.append(index)
    # entry sequence must keep the same length, so drop the first character
    pattern = pattern[1:len(pattern)]
print("\nDone.")
```
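As a quick sanity check that `sample_from_softmax` really follows the supplied distribution (the helper is repeated here so the cell stands alone), the empirical frequency of the high-probability index should sit near its probability:

```python
import numpy as np

def sample_from_softmax(preds):
    preds = np.reshape(preds, -1)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

np.random.seed(0)
draws = [sample_from_softmax([0.0, 0.2, 0.8]) for _ in range(1000)]
freq = draws.count(2) / len(draws)
print(freq)  # close to 0.8
```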
## Beam Search
```
prediction
best_k
A = np.tile([2,32], (3,1))
B = np.c_[A, best_k]
c = np.random.randint(size=(3,), low=0, high=5)
D = np.random.randint(size=(3,3), low=0, high=5)
c = np.tile(c, (3,1))
c
D
c + D
e = np.argsort(c+D, None)[-k:]
np.argsort(np.multiply( c, D.T).T, None)[-3:]
test = np.array([1, 4, 5])
test
c.reshape(-1)[test]
# pick a random seed
start = numpy.random.randint(0, len(dataX)-1)
pattern = dataX[start]
print("Seed:")
print( "\"", ''.join([int_to_char[value] for value in pattern]), "\"")
k = 3
# take the sequence (<=> pattern), tile it k times and reshape to (batch=k, seq len, 1)
x = np.tile(pattern, (k, 1))
x = np.reshape(x, (k, len(pattern), 1))
#best_k_scores = np.ones((k,1))
# generate characters
for i in range(1000):
    if i == 0:
        # 1st prediction for next character
        predictions = np.log(model.predict(x/float(n_vocab), batch_size=k))
        # take the k best
        best_k = np.argsort(predictions)[:, -k:][1]
        best_k_scores = np.sort(predictions)[:, -k:][1]
    # Append the best k to x, then 2nd prediction for each of the k sequences
    x = np.c_[x[..., 0], best_k][:, 1:]
    predictions = np.log(model.predict(x[..., np.newaxis]/float(n_vocab), batch_size=k))
    # take the k best FOR EACH of the k proposals
    best_k_after = np.argsort(predictions)[:, -k:]
    best_k_scores_after = np.sort(predictions)[:, -k:]
    # take the best combinations
    best_combis_k = np.argsort(best_k_scores + best_k_scores_after, None)[-k:]
    # update best_k
    av = np.tile(best_k, (k, 1)).reshape(-1)[best_combis_k]
    print("best_k\n", best_k)
    print("best_k_after\n", best_k_after)
    print("best_k_scores\n", best_k_scores)
    print("best_k_scores_after\n", best_k_scores_after)
    print("best_combis_k\n", best_combis_k)
    print("av\n", av)
    x = np.c_[x[:, :-1], av]  # (<=> take the last k stored, put the best ones in, leave first index)
    display(x)
    # best_k becomes best_k_after for the next iteration, to be refined
    best_k = best_k_after.reshape(-1)[best_combis_k]
    best_k_scores = best_k_scores_after.reshape(-1)[best_combis_k]
    print('after update:\n')
    print("best_k\n", best_k)
    print("best_k_scores\n", best_k_scores)
    break
# show back the pattern to the user
#seq_in = [int_to_char[value] for value in pattern]
#sys.stdout.write(result)
#pattern.append(index)
print("\nDone.")
def beam_search_decoder(data, k):
    """Beam search over `data`, a (timesteps x vocab) array of per-step
    probabilities: keep the k highest log-probability sequences at each step."""
    sequences = [[list(), 0.0]]
    # walk over each step in sequence
    for row in data:
        all_candidates = list()
        # expand each current candidate
        for i in range(len(sequences)):
            seq, score = sequences[i]
            # explore every label at this step (a cheaper variant would only
            # explore the k best labels of the current row)
            for j in range(len(row)):
                candidate = [seq + [j], score + np.log(row[j])]
                all_candidates.append(candidate)
        # order all candidates by score
        ordered = sorted(all_candidates, key=lambda tup: tup[1], reverse=True)
        # select k best
        sequences = ordered[:k]
    return sequences
```
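To see the decoder in action without a trained model, here is a self-contained toy run over a hypothetical 4-step, 3-symbol probability matrix (the decoder is restated locally with `np.log` so the cell runs on its own):

```python
import numpy as np

def beam_search(data, k):
    """Keep the k highest log-probability label sequences through `data`."""
    sequences = [([], 0.0)]
    for row in data:
        candidates = [(seq + [j], score + np.log(row[j]))
                      for seq, score in sequences
                      for j in range(len(row))]
        sequences = sorted(candidates, key=lambda t: t[1], reverse=True)[:k]
    return sequences

data = np.array([[0.1, 0.4, 0.5],
                 [0.5, 0.3, 0.2],
                 [0.1, 0.1, 0.8],
                 [0.6, 0.2, 0.2]])
best_seq, best_score = beam_search(data, k=3)[0]
print(best_seq)  # [2, 0, 2, 0]
```

Here each step's probabilities are independent, so the top beam coincides with the per-step argmax path; with a real model's conditional probabilities the beams can diverge from greedy decoding.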
```
%matplotlib inline
import pandas as pd
import keras
import numpy
import sklearn
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_curve, auc
from sklearn.utils import shuffle
from keras.models import Sequential
from keras.layers import Activation
from keras.layers import Dense
from keras.layers.core import Dropout
df = pd.read_csv("creditcard.csv")
df = shuffle(df)
df = df.drop(["Time"], axis = 1)
df.head()
len(df)
train_data = df[0:int(len(df) * 0.5)]
valid_data = df[int(len(df) * 0.5):int(len(df) * 0.75)]
test_data = df[int(len(df) * .75):len(df)]
train_features = train_data.iloc[:, 0:len(train_data.columns) - 1].values
valid_features = valid_data.iloc[:, 0:len(valid_data.columns) - 1].values
test_features = test_data.iloc[:, 0:len(test_data.columns) - 1].values
train_labels = train_data.iloc[:, len(train_data.columns) - 1: len(train_data.columns)].values.squeeze()
valid_labels = valid_data.iloc[:, len(valid_data.columns) - 1: len(valid_data.columns)].values.squeeze()
test_labels = test_data.iloc[:, len(test_data.columns) - 1: len(test_data.columns)].values.squeeze()
print(train_features.shape)
print(valid_features.shape)
print(test_features.shape)
#From Training Data, create undersampled
non_fraud_data = train_data[train_data["Class"] == 0]
fraud_data = train_data[train_data["Class"] == 1]
non_fraud_sample = non_fraud_data.sample(len(fraud_data))
frames = [fraud_data, non_fraud_sample]
df = pd.concat([fraud_data,non_fraud_data])
undersampled_df = pd.concat(frames)
undersampled_df = shuffle(undersampled_df)
undersampled_features = undersampled_df.iloc[:, 0:len(undersampled_df.columns) - 1].values
undersampled_labels = undersampled_df.iloc[:, len(undersampled_df.columns) - 1: len(undersampled_df.columns)].values.squeeze()
class_count = pd.value_counts(df['Class'], sort = True).sort_index()
class_count.plot(kind = 'barh')
plt.xlabel("Number of Cases")
plt.ylabel("Class")
plt.title("Original Class Proportions")
class_count = pd.value_counts(undersampled_df['Class'], sort = True).sort_index()
class_count.plot(kind = 'barh')
plt.xlabel("Number of Cases")
plt.ylabel("Class")
plt.title("Undersampled Class Proportions")
#create oversampled features and labels
non_fraud_data = train_data[train_data["Class"] == 0]
fraud_data = train_data[train_data["Class"] == 1]
fraud_sample = fraud_data.sample(len(non_fraud_data), replace = True)
frames = [fraud_sample, non_fraud_data]
oversampled_df = pd.concat(frames)
oversampled_df = shuffle(oversampled_df)
oversampled_features = oversampled_df.iloc[:, 0:len(oversampled_df.columns) - 1].values
oversampled_labels = oversampled_df.iloc[:, len(oversampled_df.columns) - 1: len(oversampled_df.columns)].values.squeeze()
print(len(non_fraud_data))
len(oversampled_features)
class_count = pd.value_counts(oversampled_df['Class'], sort = True).sort_index()
class_count.plot(kind = 'barh')
plt.xlabel("Number of Cases")
plt.ylabel("Class")
plt.title("Oversampled Class Proportions")
#Logistic Regression w/ Raw Data
clf = LogisticRegression(penalty="l2")
clf.fit(train_features, train_labels)
pred = clf.predict(test_features)
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels=[1,0]))
print("\nArea Under Precision Recall Curve:")
lg_precision, lg_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
auc(lg_recall, lg_precision)
clf = LogisticRegression(penalty="l2")
clf.fit(undersampled_features, undersampled_labels)
pred = clf.predict(test_features)
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
lgu_precision, lgu_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
auc(lgu_recall, lgu_precision)
clf = LogisticRegression(penalty="l2")
clf.fit(oversampled_features, oversampled_labels)
pred = clf.predict(test_features)
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels=[1,0]))
print("\nArea Under Precision Recall Curve:")
lgo_precision, lgo_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
auc(lgo_recall, lgo_precision)
plt.plot(lg_recall, lg_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.plot(lgu_recall, lgu_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.plot(lgo_recall, lgo_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
fig, ax = plt.subplots()
ax.plot(lg_recall, lg_precision, "r--", label= "Original AUPRC")
ax.plot(lgu_recall, lgu_precision, "g:", label= "Under-sampled AUPRC")
ax.plot(lgo_recall, lgo_precision, "b", label= "Over-sampled AUPRC")
legend = ax.legend(loc="lower center")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Area Under the Precision-Recall Curve (Logistic Regression)")
plt.show()
#Fully Connected Neural Network
nn_model = Sequential()
nn_model.add(Dense(29, input_shape=(29,)))
nn_model.add(Activation('relu'))
# nn_model.add(Dropout(0.25))
nn_model.add(Dense(58, input_shape=(29,)))
nn_model.add(Activation('relu'))
nn_model.add(Dropout(0.25))
nn_model.add(Dense(1))
nn_model.add(Activation('sigmoid'))
nn_model.summary()
nn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
#Fit Raw Data
nn_model.fit(train_features, train_labels, epochs=2)
pred1 = nn_model.predict(test_features)
pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
precision, recall, _ = precision_recall_curve(test_labels, pred1)
print(auc(recall, precision))
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
nn_model = Sequential()
nn_model.add(Dense(29, input_shape=(29,)))
nn_model.add(Activation('relu'))
# nn_model.add(Dropout(0.25))
nn_model.add(Dense(58, input_shape=(29,)))
nn_model.add(Activation('relu'))
nn_model.add(Dropout(0.25))
nn_model.add(Dense(1))
nn_model.add(Activation('sigmoid'))
nn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
#Fit Undersampled
nn_model.fit(undersampled_features, undersampled_labels, epochs=2)
pred1 = nn_model.predict(test_features)
pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
us_precision, us_recall, _ = precision_recall_curve(test_labels, pred1)
print(auc(us_recall, us_precision))
plt.plot(us_recall, us_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
nn_model = Sequential()
nn_model.add(Dense(29, input_shape=(29,)))
nn_model.add(Activation('relu'))
# nn_model.add(Dropout(0.25))
nn_model.add(Dense(58, input_shape=(29,)))
nn_model.add(Activation('relu'))
nn_model.add(Dropout(0.25))
nn_model.add(Dense(1))
nn_model.add(Activation('sigmoid'))
nn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
#Fit Oversampled
nn_model.fit(oversampled_features, oversampled_labels, epochs=5)
pred1 = nn_model.predict(test_features)
pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
os_precision, os_recall, _ = precision_recall_curve(test_labels, pred1)
print(auc(os_recall, os_precision))
pred1
plt.plot(os_recall, os_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
fig, ax = plt.subplots()
ax.plot(recall, precision, "r--", label= "Original AUPRC")
ax.plot(us_recall, us_precision, "g:", label= "Under-sampled AUPRC")
ax.plot(os_recall, os_precision, "b", label= "Over-sampled AUPRC")
legend = ax.legend(loc="lower center")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Area Under the Precision-Recall Curve (Neural Network)")
plt.show()
from sklearn.ensemble import RandomForestClassifier
frst = RandomForestClassifier(n_estimators = 2, oob_score=True)
frst.fit(oversampled_features, oversampled_labels)
pred = frst.predict(valid_features)
# pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(valid_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(valid_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
# Use class probabilities (not hard 0/1 predictions) so the PR curve has more than a few points
f_precision, f_recall, _ = precision_recall_curve(valid_labels, frst.predict_proba(valid_features)[:, 1])
print(auc(f_recall, f_precision))
plt.plot(f_recall, f_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
frst.predict_proba(valid_features)
from sklearn.cluster import KMeans
clf = KMeans(n_clusters=2)
clf.fit(train_features, train_labels)
clf.predict(train_features)
from sklearn.svm import LinearSVC
clf = LinearSVC()
clf.fit(train_features, train_labels)
pred = clf.predict(test_features)
# pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
sg_precision, sg_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
print(auc(sg_recall, sg_precision))
plt.plot(sg_recall, sg_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
clf = LinearSVC()
clf.fit(undersampled_features,undersampled_labels)
pred = clf.predict(test_features)
# pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
su_precision, su_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
print(auc(su_recall, su_precision))
plt.plot(su_recall, su_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
clf = LinearSVC()
clf.fit(oversampled_features,oversampled_labels)
pred = clf.predict(test_features)
# pred = [round(x[0]) for x in pred1]
print("Accuracy:")
print(accuracy_score(test_labels, pred))
print("\nConfusion Matrix:")
print("[[True Positive False Negative] \n [False Positive True Negative]]")
print(confusion_matrix(test_labels, pred, labels = [1,0]))
print("\nArea Under Precision Recall Curve:")
s_precision, s_recall, _ = precision_recall_curve(test_labels, clf.decision_function(test_features))
print(auc(s_recall, s_precision))
plt.plot(s_recall, s_precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
fig, ax = plt.subplots()
ax.plot(sg_recall, sg_precision, "r--", label= "Original AUPRC")
ax.plot(su_recall, su_precision, "g:", label= "Under-sampled AUPRC")
ax.plot(s_recall, s_precision, "b", label= "Over-sampled AUPRC")
legend = ax.legend(loc="lower center")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Area Under the Precision-Recall Curve (SVM)")
plt.show()
```
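The `precision_recall_curve` → `auc` pattern is used many times above; isolated on toy data (hypothetical labels and decision scores), it reduces to:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])  # hypothetical decision scores

# Curve points are computed from continuous scores, not hard 0/1 predictions,
# then the area is integrated over recall.
precision, recall, _ = precision_recall_curve(y_true, scores)
auprc = auc(recall, precision)
print(round(auprc, 3))
```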
```
import numpy as np
import math
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
import time
# import subprocess
import random
%matplotlib inline
```
## Utils
```
def alter_coord(action, position, g_coord, dx=0.1, change_nodes=list(range(1,9))):
if action==0:
g_coord[int(2*change_nodes[position])]+=dx
g_coord[int(2*change_nodes[position])+1]+=dx
elif action==1:
g_coord[int(2*change_nodes[position])]+=dx
g_coord[int(2*change_nodes[position])+1]-=dx
    elif action==2:
        g_coord[int(2*change_nodes[position])]-=dx
        g_coord[int(2*change_nodes[position])+1]+=dx
    elif action==3:
        g_coord[int(2*change_nodes[position])]-=dx
        g_coord[int(2*change_nodes[position])+1]-=dx
    elif action==4:
        pass  # action 4 leaves the selected node where it is (no-op)
return g_coord
# this function must be tailored to different FE models
def observe(position, coord, displ):
return position, coord[0], coord[1],coord[2], coord[3], coord[4], coord[5],coord[6], \
coord[7], coord[8], coord[9],coord[10], coord[11], coord[12], coord[13],coord[14], coord[15],\
coord[16], coord[17],coord[18], coord[19], np.max(abs(displ))
#np.sum(abs(displ))
#displ[2]
# displ[0],displ[1],displ[2],displ[3],displ[4],\
# displ[5],displ[6],displ[7],displ[8],displ[9],displ[10],displ[11],displ[12],displ[13],\
# displ[14],displ[15],displ[16],displ[17],displ[18],displ[19],displ[20],displ[21],\
# displ[22],displ[23],displ[24],displ[25],displ[26],displ[27],displ[28],displ[29]
```
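`alter_coord` mutates the flat coordinate vector in place: actions 0–3 nudge the selected node diagonally by `dx`, and action 4 leaves it alone. The movement rules, isolated as a small sketch for a single `(x, y)` pair:

```python
def move(action, x, y, dx=0.1):
    # Same movement table as alter_coord: (x step, y step) per action; action 4 is a no-op
    steps = {0: (dx, dx), 1: (dx, -dx), 2: (-dx, dx), 3: (-dx, -dx), 4: (0.0, 0.0)}
    sx, sy = steps[action]
    return x + sx, y + sy

print(move(0, 1.0, 2.0))  # node moves up and to the right by dx
print(move(4, 1.0, 2.0))  # unchanged
```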
## Finite Element Model of the Plane Truss structure
```
def PlaneFrameElementLength(x1,y1,x2,y2):
return math.sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1))
def PlaneFrameElementStiffness(E,A,I,L,theta):
pi=3.14159265
x = theta*pi/180
C = math.cos(x)
S = math.sin(x)
w1 = A*C*C + 12*I*S*S/(L*L)
w2 = A*S*S + 12*I*C*C/(L*L)
w3 = (A-12*I/(L*L))*C*S
w4 = 6*I*S/L
w5 = 6*I*C/L
return E/L*np.array([[w1, w3, -w4, -w1, -w3, -w4],[ w3, w2, w5, -w3, -w2, w5],
[-w4, w5, 4*I, w4, -w5, 2*I],[ -w1, -w3, w4, w1, w3, w4],
[-w3, -w2, -w5, w3, w2, -w5], [-w4, w5, 2*I, w4, -w5, 4*I]])
def PlaneFrameAssemble(K,k,i,j):
K[3*i,3*i] = K[3*i,3*i] + k[0,0]
K[3*i,3*i+1] = K[3*i,3*i+1] + k[0,1]
K[3*i,3*i+2] = K[3*i,3*i+2] + k[0,2]
K[3*i,3*j] = K[3*i,3*j] + k[0,3]
K[3*i,3*j+1] = K[3*i,3*j+1] + k[0,4]
K[3*i,3*j+2] = K[3*i,3*j+2] + k[0,5]
K[3*i+1,3*i] = K[3*i+1,3*i] + k[1,0]
K[3*i+1,3*i+1] = K[3*i+1,3*i+1] + k[1,1]
K[3*i+1,3*i+2] = K[3*i+1,3*i+2] + k[1,2]
K[3*i+1,3*j] = K[3*i+1,3*j] + k[1,3]
K[3*i+1,3*j+1] = K[3*i+1,3*j+1] + k[1,4]
K[3*i+1,3*j+2] = K[3*i+1,3*j+2] + k[1,5]
K[3*i+2,3*i] = K[3*i+2,3*i] + k[2,0]
K[3*i+2,3*i+1] = K[3*i+2,3*i+1] + k[2,1]
K[3*i+2,3*i+2] = K[3*i+2,3*i+2] + k[2,2]
K[3*i+2,3*j] = K[3*i+2,3*j] + k[2,3]
    K[3*i+2,3*j+1] = K[3*i+2,3*j+1] + k[2,4]
K[3*i+2,3*j+2] = K[3*i+2,3*j+2] + k[2,5]
K[3*j,3*i] = K[3*j,3*i] + k[3,0]
K[3*j,3*i+1] = K[3*j,3*i+1] + k[3,1]
K[3*j,3*i+2] = K[3*j,3*i+2] + k[3,2]
K[3*j,3*j] = K[3*j,3*j] + k[3,3]
K[3*j,3*j+1] = K[3*j,3*j+1] + k[3,4]
K[3*j,3*j+2] = K[3*j,3*j+2] + k[3,5]
K[3*j+1,3*i] = K[3*j+1,3*i] + k[4,0]
K[3*j+1,3*i+1] = K[3*j+1,3*i+1] + k[4,1]
K[3*j+1,3*i+2] = K[3*j+1,3*i+2] + k[4,2]
K[3*j+1,3*j] = K[3*j+1,3*j] + k[4,3]
K[3*j+1,3*j+1] = K[3*j+1,3*j+1] + k[4,4]
K[3*j+1,3*j+2] = K[3*j+1,3*j+2] + k[4,5]
K[3*j+2,3*i] = K[3*j+2,3*i] + k[5,0]
K[3*j+2,3*i+1] = K[3*j+2,3*i+1] + k[5,1]
K[3*j+2,3*i+2] = K[3*j+2,3*i+2] + k[5,2]
K[3*j+2,3*j] = K[3*j+2,3*j] + k[5,3]
K[3*j+2,3*j+1] = K[3*j+2,3*j+1] + k[5,4]
K[3*j+2,3*j+2] = K[3*j+2,3*j+2] + k[5,5]
return K
def FEA_u(coord, elcon, bc_u_elim, f_after_u_elim, I=5e-5, A=1e-4, E=210e6):
K=np.zeros(shape=(3*np.max(elcon)+3,3*np.max(elcon)+3))
pi=3.14159265
for el in elcon:
L=PlaneFrameElementLength(coord[el[0]][0],coord[el[0]][1],coord[el[1]][0],coord[el[1]][1])
theta=math.atan((coord[el[1]][1]-coord[el[0]][1])/(coord[el[1]][0]-coord[el[0]][0]+1e-13))*180/pi
k=PlaneFrameElementStiffness(E,A,I,L,theta)
K=PlaneFrameAssemble(K,k,el[0],el[1])
K=np.delete(K,bc_u_elim,0)
K=np.delete(K,bc_u_elim,1)
    d=np.linalg.solve(K,f_after_u_elim)  # solve K d = f directly; cheaper and more stable than forming the inverse
ans=np.zeros(shape=(3*len(coord)))
j=0
for i in range(len(ans)):
if i not in bc_u_elim:
ans[i]=d[j]
j+=1
if j>len(d)-1:
break
return ans
```
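`PlaneFrameAssemble` scatters each 6×6 element stiffness matrix into the global matrix entry by entry, using the element's two node numbers. The same direct-stiffness pattern is easier to see in one dimension — a hypothetical two-spring chain (illustrative sketch, not part of the frame model):

```python
def assemble_1d(K, k_el, i, j):
    # Scatter a 2x2 spring stiffness into the global matrix at dofs i and j
    for a, p in enumerate((i, j)):
        for b, q in enumerate((i, j)):
            K[p][q] += k_el[a][b]
    return K

k = 100.0  # spring stiffness
k_el = [[k, -k], [-k, k]]
K = [[0.0] * 3 for _ in range(3)]
assemble_1d(K, k_el, 0, 1)   # spring between nodes 0 and 1
assemble_1d(K, k_el, 1, 2)   # spring between nodes 1 and 2
print(K[1][1])  # shared node accumulates both springs -> 200.0
```

The shared middle node picks up contributions from both elements, exactly as the interior nodes of the frame do in the 6×6 case.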
## Neural Network Policy - Policy Gradients
```
# Details of model can be found in the book:
# Hands-On Machine Learning with Scikit-Learn & TensorFlow. Aurélien Géron
# the NN architecture must be tailored to different FE models
n_inputs = 22
n_hidden = 70
n_outputs = 5
initializer = tf.contrib.layers.variance_scaling_initializer()
learning_rate = 0.001
# Build the neural network
X_ = tf.placeholder(tf.float64, shape=[None, n_inputs], name="X_")
hidden = fully_connected(X_, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer)
hidden1 = fully_connected(hidden, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer)
logits = fully_connected(hidden1, n_outputs, activation_fn=None, weights_initializer=initializer)
outputs = tf.nn.softmax(logits, name="Y_proba")
# Select a random action based on the estimated probabilities
action = tf.multinomial(tf.log(outputs), num_samples=1,output_dtype=tf.int32)
y=tf.reshape(tf.one_hot(action,depth=5,dtype=tf.float64),[5,1])
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=tf.transpose(logits))
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(xentropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
gradient_placeholder = tf.placeholder(tf.float64, shape=grad.get_shape())
gradient_placeholders.append(gradient_placeholder)
grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
def discount_rewards(rewards, discount_rate=0.97):
discounted_rewards = np.empty(len(rewards))
cumulative_rewards = 0
for step in reversed(range(len(rewards))):
cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
discounted_rewards[step] = cumulative_rewards
return discounted_rewards
def discount_and_normalize_rewards(all_rewards, discount_rate=0.97):
all_discounted_rewards = [discount_rewards(rewards) for rewards in all_rewards]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in all_discounted_rewards]
# this function must be tailored to different FE models
def reward_(obs_,obs):
# if np.max(abs(np.array(obs_[22:-1])))>np.max(abs(np.array(obs[22:-1]))):
# if sum(abs(np.array(obs_[22:-1])))>sum(abs(np.array(obs[22:-1]))):
# return sum(abs(np.array(obs_[22:-1]))>abs(np.array(obs[22:-1])))
# if abs(obs_[-1])>abs(obs[-1]):
    if obs_[-1]>obs[-1]:  # reward the move if the max displacement decreased
return 1
else:
return 0
# the training code must be tailored to different FE models
n_iterations =101 #251 # number of training iterations
n_max_steps = 500 #1000 # max steps per episode
n_games_per_update = 10 # train the policy every 10 episodes
save_iterations = 5 # save the model every 5 training iterations
with tf.Session() as sess:
start=time.time()
init.run()
# saver.restore(sess, tf.train.latest_checkpoint("./policy4/"))
# tf.get_default_graph()
for iteration in range(n_iterations):
all_rewards = [] # all sequences of raw rewards for each episode
all_gradients = [] # gradients saved at each step of each episode
for game in range(n_games_per_update):
current_rewards = [] # all raw rewards from the current episode
current_gradients = [] # all gradients from the current episode
pst=random.randint(0,7)
g_coord = alter_coord(4, pst, np.array([0.0,0,3,0,6,0,9,0,9,3,9,6,9,9,12,9,15,9,18,9]),
dx=0.1, change_nodes=list(range(1,9)))
displ = FEA_u(g_coord.reshape(10,2), elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]),
bc_u_elim=[0,1,2],
f_after_u_elim=np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-10,0,0]),
I=5e-5, A=2e-2, E=210e6)
obs=observe(pst, g_coord,displ)
for step in range(n_max_steps):
action_val, gradients_val = sess.run([action, gradients],
feed_dict={X_: np.array(obs).reshape(1,n_inputs)})
obs_=obs
g_coord = alter_coord(action_val[0][0], pst, g_coord,
dx=0.1, change_nodes=list(range(1,9)))
pst=random.randint(0,7)
                # stop this episode if any member has collapsed below the minimum length
                if any(PlaneFrameElementLength(g_coord[2*n], g_coord[2*n+1],
                                               g_coord[2*n+2], g_coord[2*n+3]) < 0.02
                       for n in range(9)):
                    break
displ = FEA_u(g_coord.reshape(10,2), elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]),
bc_u_elim=[0,1,2],
f_after_u_elim=np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-10,0,0]),
I=5e-5, A=2e-2, E=210e6)
obs=observe(pst,g_coord,displ)
reward=reward_(obs_,obs)
current_rewards.append(reward)
current_gradients.append(gradients_val)
all_rewards.append(current_rewards)
all_gradients.append(current_gradients)
# At this point we have run the policy for 10 episodes, and we are
# ready for a policy update using the algorithm described earlier.
all_rewards = discount_and_normalize_rewards(all_rewards)
feed_dict = {}
for var_index, grad_placeholder in enumerate(gradient_placeholders):
# multiply the gradients by the action scores, and compute the mean
mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index]
for game_index, rewards in enumerate(all_rewards)
for step, reward in enumerate(rewards)],axis=0)
feed_dict[grad_placeholder] = mean_gradients
sess.run(training_op, feed_dict=feed_dict)
if iteration % save_iterations == 0:
# print("Saving {} iteration".format(iteration))
print('Time taken for {} epoch {} sec\n'.format(iteration, time.time() - start))
saver.save(sess, "./policy4/pinjointed4.ckpt")
# end=time.time()
# print(end-start)
```
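`discount_rewards` above walks the reward list backwards, so each step's return is its own reward plus the discounted future return, `r_t + gamma * G_{t+1}`. The same recursion as a standalone sketch (no numpy needed):

```python
def discount(rewards, discount_rate=0.97):
    # Walk backwards: each step's return is its reward plus the discounted future return
    out, cumulative = [0.0] * len(rewards), 0.0
    for step in range(len(rewards) - 1, -1, -1):
        cumulative = rewards[step] + cumulative * discount_rate
        out[step] = cumulative
    return out

print(discount([0, 0, 1], discount_rate=0.5))  # [0.25, 0.5, 1.0]
```

Early steps inherit credit from the final reward, which is what lets a sparse 0/1 reward signal still shape the whole episode.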
## AI designing the spool
```
def predict(coord):
with tf.Session() as sess:
saver = tf.train.import_meta_graph('./policy4/pinjointed4.ckpt.meta')
saver.restore(sess, "./policy4/pinjointed4.ckpt")
graph = tf.get_default_graph()
outputs = graph.get_tensor_by_name("Y_proba:0")
X_ = graph.get_tensor_by_name("X_:0")
# pst=random.randint(0,7)
j=0
pst=j%8
g_coord = alter_coord(4, pst, coord, dx=0.1, change_nodes=list(range(1,9)))
displ = FEA_u(g_coord.reshape(10,2), elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]),
bc_u_elim=[0,1,2],
f_after_u_elim=np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-10,0,0]),
I=5e-5, A=2e-2, E=210e6)
obs=observe(pst, g_coord, displ)
print("before: ", np.max(abs(displ)))
for step in range(50):
action_val= sess.run([outputs],feed_dict={X_: np.array(obs).reshape(1,n_inputs)})
            action_val=np.log(action_val)  # log is monotonic, so the argmax below is unchanged
g_coord = alter_coord( np.argmax(action_val), pst, g_coord, dx=0.1, change_nodes=list(range(1,9)))
# print(pst)
# pst=random.randint(0,7)
j+=1
pst=j%8
            # stop if any member has collapsed below the minimum length
            if any(PlaneFrameElementLength(g_coord[2*n], g_coord[2*n+1],
                                           g_coord[2*n+2], g_coord[2*n+3]) < 0.02
                   for n in range(9)):
                break
displ = FEA_u(g_coord.reshape(10,2), elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]),
bc_u_elim=[0,1,2],
f_after_u_elim=np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-10,0,0]),
I=5e-5, A=2e-2, E=210e6)
obs=observe(pst, g_coord, displ)
print("after: ", np.max(abs(displ)))
# print("after: ", abs(displ[2]))
return obs,g_coord
obs, g_coord = predict(np.array([0.0,0,3,0,6,0,9,0,9,3,9,6,9,9,12,9,15,9,18,9]))
g_coord
import matplotlib.pyplot as plt
def draw(coord,color,elcon):
coord=coord.reshape(np.max(elcon)+1,2)
plt.figure(figsize=(13,5))
for item in elcon:
plt.plot([coord[item[0]][0],coord[item[1]][0]],[coord[item[0]][1],coord[item[1]][1]],color=color)
plt.show()
```
### Initial Design
```
draw(np.array([0,0,3,0,6,0,9,0,9,3,9,6,9,9,12,9,15,9,18,9]),color="green",elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]))
```
### Design by AI
```
draw(g_coord,color="blue",elcon=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9]]))
```
# ECCO-TCP
```
import lltk
# load corpus
C=lltk.load('ECCO_TCP')
# get some basic info
C.info()
```
## Install
### From pre-compiled zips
Only metadata and 1-gram counts are made available via download.
```
C.download(parts=['metadata','freqs'], force=False) # change force to True to redownload
```
## Preprocess
### Freqs
Only works if you have access to the text files.
```
# C.preprocess_freqs(force=True)
```
### Most Frequent Words (MFW)
```
C.mfw_yearbin=25
# Pre-compute the most frequent words for texts grouped in that yearbin
C.preprocess_mfw(num_proc=4)
```
### DTM
```
C.preprocess_dtm(n=25000)
```
### Breakdown by year
```
# Distribution of years
print(f'Min to max year: {C.metadata.year.min()} to {C.metadata.year.max()}')
lltk.density(C.metadata, 'year')
C.metadata_density('year')
# C.metadata.year.plot.hist
```
### By gender and nationality
```
C.metadata_barplot('gender')
C.metadata_barplot('nation',vertical=True,figsize=(8,6))
```
## Inspect
```
# Top 100 words overall, as determined by `n_agg` function over `valtype`
# Then a row for each of these 100 words in each period (`keep_periods` == True) if it's in there
mfw_df = C.mfw_df(
n=25000, # limit to top N words,
yearbin = 25, # any year delimiter. set to False for no periodizing
n_by_period=False, # top N per period or top N overall?
)
mfw_df
mfw_words = list(mfw_df.word)
len(mfw_words)
# Use different periods on the fly
C.mfw_df(n=100,keep_periods=True,yearbin=10)
# Top 100 words overall, as determined by `n_agg` function over `valtype`
# These scores returned (`keep_periods` == False)
C.mfw_df(n=100,keep_periods=False)
# Get value for a word over time
C.mfw_df(keep_periods=True).query('word == "isabel"')
# Change yearbin if you want
C.mfw_df(yearbin=100, keep_periods=True).query('word == "isabel"')
# Get all words and their counts for whole corpus
C.mfw_df(yearbin=False, n=None, keep_periods=True)
# plot overall top 10 words over the separate periods, where a period is a decade
fig=p9.ggplot(
p9.aes(x='period',y='fpm',color='word'),
data=C.mfw_df(n=10, keep_periods=True, excl_stopwords=True, excl_top=100)
)
fig+=p9.geom_point()
fig+=p9.geom_line(p9.aes(group='word'))
fig
```
### Document-Term Matrix (DTM)
```
# Build a document term matrix with the top n words (defaults to 25000)
C.preprocess_dtm(num_proc=4)
# Load dtm with top n words (defaults to 25000)
C.dtm()
C.dtm(tf=True)
dtm_tfidf = C.dtm(tf=True)
dtm_tfidf
dtm_tfidf.loc['Hemingway,_Ernest.A_Farewell_to_Arms'].sort_values(ascending=False).head(25)
```
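Conceptually, the document-term matrix built here has one row per text and one column per vocabulary word, holding that word's count (or tf-idf weight) in that text. A minimal sketch of the raw-count version (illustrative only — not lltk's implementation):

```python
from collections import Counter

docs = ["the whale the sea", "the ship"]          # two tiny example "texts"
vocab = sorted({w for d in docs for w in d.split()})
dtm = [[Counter(d.split())[w] for w in vocab] for d in docs]

print(vocab)  # ['sea', 'ship', 'the', 'whale']
print(dtm)    # [[1, 0, 2, 1], [0, 1, 1, 0]]
```

The `tf=True` variant above replaces these raw counts with tf-idf scores, down-weighting words like "the" that appear in every document.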
# Final exercise
We've now covered all the topics on this course, so to finish off, work through this final exercise. It is designed to give you a chance to practise what you've learned on some new code.
Make a new directory called `crypto`. In the Terminal change to that directory with `cd crypto` and in the Python Console change there with `%cd crypto`. In that directory make two new files called `morse.py` and `test_morse.py`:
```
%%writefile morse.py
# A lookup dictionary which, given a letter will return the morse code equivalent
_letter_to_morse = {'a':'.-', 'b':'-...', 'c':'-.-.', 'd':'-..', 'e':'.', 'f':'..-.',
'g':'--.', 'h':'....', 'i':'..', 'j':'.---', 'k':'-.-', 'l':'.-..', 'm':'--',
'n':'-.', 'o':'---', 'p':'.--.', 'q':'--.-', 'r':'.-.', 's':'...', 't':'-',
'u':'..-', 'v':'...-', 'w':'.--', 'x':'-..-', 'y':'-.--', 'z':'--..',
'0':'-----', '1':'.----', '2':'..---', '3':'...--', '4':'....-',
'5':'.....', '6':'-....', '7':'--...', '8':'---..', '9':'----.',
' ':'/'}
# This will create a dictionary that can go from the morse back to the letter
_morse_to_letter = {}
for letter in _letter_to_morse:
morse = _letter_to_morse[letter]
_morse_to_letter[morse] = letter
def encode(message):
morse = []
for letter in message:
letter = letter.lower()
morse.append(_letter_to_morse[letter])
# We need to join together Morse code letters with spaces
morse_message = " ".join(morse)
return morse_message
def decode(message):
english = []
# Now we cannot read by letter. We know that morse letters are
# separated by a space, so we split the morse string by spaces
morse_letters = message.split(" ")
for letter in morse_letters:
english.append(_morse_to_letter[letter])
# Rejoin, but now we don't need to add any spaces
english_message = "".join(english)
return english_message
%%writefile test_morse.py
from morse import encode, decode
def test_encode():
assert encode("SOS") == "... --- ..."
```
This module is designed to convert message to and from [Morse code](https://en.wikipedia.org/wiki/Morse_code). It provides one function which takes an English message and converts it to a Morse code string, separated by spaces and another function which takes the Morse code string and converts it to English.
### Exercise
- Add documentation to the `morse` module and to the `encode` and `decode` functions. Make sure you detail the inputs, outputs and give an example of their usage. Look at the tests to get an idea of how it works or try importing `morse` in the Python Console and have a play with the functions to understand them.
[<small>answer</small>](answer_final_morse_doc.ipynb)
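If you want a reminder of the shape such documentation can take, here is a sketch of a docstring with a runnable doctest for `encode` (one possible style, deliberately abbreviated to a two-letter alphabet — it is not the linked answer):

```python
_letter_to_morse = {'s': '...', 'o': '---'}  # abbreviated table for illustration

def encode(message):
    """Convert an English message to Morse code.

    Letters are looked up case-insensitively and the Morse sequences
    are joined with single spaces.

    >>> encode("SOS")
    '... --- ...'
    """
    return " ".join(_letter_to_morse[ch.lower()] for ch in message)

print(encode("SOS"))  # ... --- ...
```

Note the single quotes in the doctest output line, matching the hint in the next exercise.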
### Exercise
- Add a test for the `decode` function to `test_morse.py` and check it passes with `pytest`
- Parametrise both tests to give several examples
- Make sure you include upper and lower case letters as well as checking what happens if you pass in empty strings
- Make sure to use `--doctest-modules` to run the documentation examples that you added in the last exercise
  - Hint: When writing doctests, note that doctest cares whether your test output uses single or double quotes (`'` or `"`). Use single quotes for doctest outputs.
[<small>answer</small>](answer_final_morse_test.ipynb)
### Exercise
- What happens if you pass in the string `"Don't forget to save us"` to `encode`?
- Hint: The problem is caused by the `'` in the string
- Edit `morse.py` to raise a `ValueError` in this situation instead.
- Write a test to make sure that the `ValueError` is raised when a string with a `'` is passed in.
- Parametrise that test with some other examples including the `&` and `£` characters.
[<small>answer</small>](answer_final_morse_error.ipynb)
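One shape the fix can take (a sketch over an abbreviated alphabet, not the linked answer): check membership before the lookup and raise with a helpful message.

```python
_letter_to_morse = {'s': '...', 'o': '---'}  # abbreviated table for illustration

def encode(message):
    morse = []
    for letter in message:
        letter = letter.lower()
        if letter not in _letter_to_morse:
            # Characters like ', & and £ have no Morse mapping in this table
            raise ValueError(f"Cannot encode character: {letter!r}")
        morse.append(_letter_to_morse[letter])
    return " ".join(morse)

print(encode("sos"))  # ... --- ...
```

Raising `ValueError` explicitly turns an obscure `KeyError` from the dictionary lookup into an error that names the offending character.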
## Another cypher
Let's add another text cypher to our `crypto` package. This time we will implement the [Caesar Cipher](https://en.wikipedia.org/wiki/Caesar_cipher) or [ROT13](https://en.wikipedia.org/wiki/ROT13). Once more the module will provide `encode` and `decode` functions:
```
%%writefile rot13.py
import string
_lower_cipher = string.ascii_lowercase[13:] + string.ascii_lowercase[:13]
_upper_cipher = string.ascii_uppercase[13:] + string.ascii_uppercase[:13]
def encode(message):
output = []
for letter in message:
if letter in string.ascii_lowercase:
i = string.ascii_lowercase.find(letter)
output.append(_lower_cipher[i])
elif letter in string.ascii_uppercase:
i = string.ascii_uppercase.find(letter)
output.append(_upper_cipher[i])
return "".join(output)
def decode(message):
output = []
for letter in message:
if letter in _lower_cipher:
i = _lower_cipher.find(letter)
output.append(string.ascii_uppercase[i])
elif letter in _upper_cipher:
i = _upper_cipher.find(letter)
output.append(string.ascii_uppercase[i])
return "".join(output)
```
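Incidentally, Python's standard library ships its own ROT13 text transform, which is handy for generating expected values when writing tests for this module:

```python
import codecs

print(codecs.encode("SECRET", "rot_13"))  # FRPERG
print(codecs.encode("frperg", "rot_13"))  # secret -- applying ROT13 twice gets you back
```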
### Exercise
- Add documentation for the `rot13` module.
[<small>answer</small>](answer_final_rot13_doc.ipynb)
This time the tests are provided for you. Copy this into a new file called `test_rot13.py`:
```
%%writefile test_rot13.py
import pytest
from rot13 import encode, decode
@pytest.mark.parametrize("message, expected", [
("SECRET", "FRPERG"),
("secret", "frperg"),
])
def test_encode(message, expected):
assert encode(message) == expected
@pytest.mark.parametrize("message, expected", [
("FRPERG", "SECRET"),
("frperg", "secret"),
])
def test_decode(message, expected):
assert decode(message) == expected
def test_encode_spaces_error():
with pytest.raises(ValueError):
encode("Secret message for you")
```
When we run these tests with `pytest` we see that there are some passes and some failures:
```
!COLUMNS=60 pytest -v test_rot13.py
```
### Exercise
There are two failing tests:
1. `test_rot13.py::test_decode[frperg-secret]` is failing due to a bug in the code. Find the bug in `rot13.py` and fix it so that the test passes.
2. `test_rot13.py::test_encode_spaces_error` is failing due to a missing feature in our code. At the moment any spaces in the string are ignored. Change `encode` and `decode` in `rot13.py` so that they raise an error if any letter in the message is not found in the lookup string.
- Hint: You should add an `else` to the `if`/`elif` blocks
[<small>answer</small>](answer_final_rot13_fix.ipynb)
### Exercise
- Add a test to both `test_morse.py` and `test_rot13.py` which checks for "round-tripping". That is, check that a valid message which is passed to `encode` and then the output of that is passed to `decode` gets you back the original message.
- What types of messages do not round-trip correctly in `morse`? What could you do to the test to make it pass?
[<small>answer</small>](answer_final_rot13_roundtrip.ipynb)
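A round-trip check in miniature (abbreviated alphabet, illustrative only): encoding and then decoding returns the original message, but Morse has no notion of case, so an upper-case message comes back lower-case — exactly why a naive round-trip test fails:

```python
_l2m = {'s': '...', 'o': '---'}  # abbreviated table for illustration
_m2l = {morse: letter for letter, morse in _l2m.items()}

def encode(message):
    return " ".join(_l2m[ch.lower()] for ch in message)

def decode(message):
    return "".join(_m2l[code] for code in message.split(" "))

print(decode(encode("sos")))  # sos -- round-trips exactly
print(decode(encode("SOS")))  # sos -- case information is lost
```

A robust round-trip test therefore compares against `message.lower()`, or restricts itself to lower-case inputs.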
# Word Embeddings: Ungraded Practice Notebook
In this ungraded notebook, you'll try out all the individual techniques that you learned about in the lecture. Practicing on small examples will prepare you for the graded assignment, where you will combine the techniques in more advanced ways to create word embeddings from a real-life corpus.
This notebook is made of two main parts: data preparation, and the continuous bag-of-words (CBOW) model.
To get started, import and initialize all the libraries you will need.
```
import sys
!{sys.executable} -m pip install emoji
import re
import nltk
from nltk.tokenize import word_tokenize
import emoji
import numpy as np
from utils2 import get_dict
nltk.download('punkt') # download pre-trained Punkt tokenizer for English
```
# Data preparation
In the data preparation phase, starting with a corpus of text, you will:
- Clean and tokenize the corpus.
- Extract the pairs of context words and center word that will make up the training data set for the CBOW model. The context words are the features that will be fed into the model, and the center words are the target values that the model will learn to predict.
- Create simple vector representations of the context words (features) and center words (targets) that can be used by the neural network of the CBOW model.
## Cleaning and tokenization
To demonstrate the cleaning and tokenization process, consider a corpus that contains emojis and various punctuation signs.
```
corpus = 'Who ❤️ "word embeddings" in 2020? I do!!!'
```
First, replace all interrupting punctuation signs — such as commas and exclamation marks — with periods.
```
print(f'Corpus: {corpus}')
data = re.sub(r'[,!?;-]+', '.', corpus)
print(f'After cleaning punctuation: {data}')
```
Next, use NLTK's tokenization engine to split the corpus into individual tokens.
```
print(f'Initial string: {data}')
data = nltk.word_tokenize(data)
print(f'After tokenization: {data}')
```
Finally, as you saw in the lecture, get rid of numbers and punctuation other than periods, and convert all the remaining tokens to lowercase.
```
print(f'Initial list of tokens: {data}')
data = [ ch.lower() for ch in data
if ch.isalpha()
or ch == '.'
or emoji.get_emoji_regexp().search(ch)
]
print(f'After cleaning: {data}')
```
Note that the heart emoji is considered as a token just like any normal word.
Now let's streamline the cleaning and tokenization process by wrapping the previous steps in a function.
```
def tokenize(corpus):
data = re.sub(r'[,!?;-]+', '.', corpus)
data = nltk.word_tokenize(data) # tokenize string to words
data = [ ch.lower() for ch in data
if ch.isalpha()
or ch == '.'
or emoji.get_emoji_regexp().search(ch)
]
return data
```
Apply this function to the corpus that you'll be working on in the rest of this notebook: "I am happy because I am learning"
```
corpus = 'I am happy because I am learning'
print(f'Corpus: {corpus}')
words = tokenize(corpus)
print(f'Words (tokens): {words}')
```
**Now try it out yourself with your own sentence.**
```
tokenize("Now it's your turn: try with your own sentence!")
```
## Sliding window of words
Now that you have transformed the corpus into a list of clean tokens, you can slide a window of words across this list. For each window you can extract a center word and the context words.
The `get_windows` function in the next cell was introduced in the lecture.
```
def get_windows(words, C):
i = C
while i < len(words) - C:
center_word = words[i]
context_words = words[(i - C):i] + words[(i+1):(i+C+1)]
yield context_words, center_word
i += 1
```
The first argument of this function is a list of words (or tokens). The second argument, `C`, is the context half-size. Recall that for a given center word, the context words are made of `C` words to the left and `C` words to the right of the center word.
Here is how you can use this function to extract context words and center words from a list of tokens. These context and center words will make up the training set that you will use to train the CBOW model.
```
for x, y in get_windows(
['i', 'am', 'happy', 'because', 'i', 'am', 'learning'],
2
):
print(f'{x}\t{y}')
```
The first example of the training set is made of:
- the context words "i", "am", "because", "i",
- and the center word to be predicted: "happy".
**Now try it out yourself. In the next cell, you can change both the sentence and the context half-size.**
```
for x, y in get_windows(tokenize("Now it's your turn: try with your own sentence!"), 1):
print(f'{x}\t{y}')
```
## Transforming words into vectors for the training set
To finish preparing the training set, you need to transform the context words and center words into vectors.
### Mapping words to indices and indices to words
The center words will be represented as one-hot vectors, and the vectors that represent context words are also based on one-hot vectors.
To create one-hot word vectors, you can start by mapping each unique word to a unique integer (or index). We have provided a helper function, `get_dict`, that creates a Python dictionary that maps words to integers and back.
```
word2Ind, Ind2word = get_dict(words)
```
Here's the dictionary that maps words to numeric indices.
```
word2Ind
```
You can use this dictionary to get the index of a word.
```
print("Index of the word 'i': ",word2Ind['i'])
```
And conversely, here's the dictionary that maps indices to words.
```
Ind2word
print("Word which has index 2: ",Ind2word[2] )
```
Finally, get the length of either of these dictionaries to get the size of the vocabulary of your corpus, in other words the number of different words making up the corpus.
```
V = len(word2Ind)
print("Size of vocabulary: ", V)
```
### Getting one-hot word vectors
Recall from the lecture that you can easily convert an integer, $n$, into a one-hot vector.
Consider the word "happy". First, retrieve its numeric index.
```
n = word2Ind['happy']
n
```
Now create a vector with the size of the vocabulary, and fill it with zeros.
```
center_word_vector = np.zeros(V)
center_word_vector
```
You can confirm that the vector has the right size.
```
len(center_word_vector) == V
```
Next, replace the 0 of the $n$-th element with a 1.
```
center_word_vector[n] = 1
```
And you have your one-hot word vector.
```
center_word_vector
```
**You can now group all of these steps in a convenient function, which takes as parameters: a word to be encoded, a dictionary that maps words to indices, and the size of the vocabulary.**
```
def word_to_one_hot_vector(word, word2Ind, V):
# BEGIN your code here
one_hot_vector = np.zeros(V)
one_hot_vector[word2Ind[word]] = 1
# END your code here
return one_hot_vector
```
Check that it works as intended.
```
word_to_one_hot_vector('happy', word2Ind, V)
```
**What is the word vector for "learning"?**
```
# BEGIN your code here
word_to_one_hot_vector('learning', word2Ind, V)
# END your code here
```
Expected output:
`array([0., 0., 0., 0., 1.])`
### Getting context word vectors
To create the vectors that represent context words, you will calculate the average of the one-hot vectors representing the individual words.
Let's start with a list of context words.
```
context_words = ['i', 'am', 'because', 'i']
```
Using Python's list comprehension construct and the `word_to_one_hot_vector` function that you created in the previous section, you can create a list of one-hot vectors representing each of the context words.
```
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V) for w in context_words]
context_words_vectors
```
And you can now simply get the average of these vectors using numpy's `mean` function, to get the vector representation of the context words.
```
np.mean(context_words_vectors, axis=0)
```
Note the `axis=0` parameter that tells `mean` to calculate the average of the rows (if you had wanted the average of the columns, you would have used `axis=1`).
**Now create the `context_words_to_vector` function that takes in a list of context words, a word-to-index dictionary, and a vocabulary size, and outputs the vector representation of the context words.**
```
def context_words_to_vector(context_words, word2Ind, V):
# BEGIN your code here
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V) for w in context_words]
context_words_vectors = np.mean(context_words_vectors, axis=0)
# END your code here
return context_words_vectors
```
And check that you obtain the same output as the manual approach above.
```
context_words_to_vector(['i', 'am', 'because', 'i'], word2Ind, V)
```
**What is the vector representation of the context words "am happy i am"?**
```
# BEGIN your code here
context_words_to_vector(['am', 'happy', 'i', 'am'], word2Ind, V)
# END your code here
```
Expected output:
`array([0.5 , 0. , 0.25, 0.25, 0. ])`
## Building the training set
You can now combine the functions that you created in the previous sections, to build a training set for the CBOW model, starting from the following tokenized corpus.
```
words
```
To do this you need to use the sliding window function (`get_windows`) to extract the context words and center words, and you then convert these sets of words into a basic vector representation using `word_to_one_hot_vector` and `context_words_to_vector`.
```
for context_words, center_word in get_windows(words, 2): # reminder: 2 is the context half-size
print(f'Context words: {context_words} -> {context_words_to_vector(context_words, word2Ind, V)}')
print(f'Center word: {center_word} -> {word_to_one_hot_vector(center_word, word2Ind, V)}')
print()
```
In this practice notebook you'll be performing a single iteration of training using a single example, but in this week's assignment you'll train the CBOW model using several iterations and batches of examples.
Here is how you would use a Python generator function (remember the `yield` keyword from the lecture?) to make it easier to iterate over a set of examples.
```
def get_training_example(words, C, word2Ind, V):
for context_words, center_word in get_windows(words, C):
yield context_words_to_vector(context_words, word2Ind, V), word_to_one_hot_vector(center_word, word2Ind, V)
```
The output of this function can be iterated on to get successive context word vectors and center word vectors, as demonstrated in the next cell.
```
for context_words_vector, center_word_vector in get_training_example(words, 2, word2Ind, V):
print(f'Context words vector: {context_words_vector}')
print(f'Center word vector: {center_word_vector}')
print()
```
Your training set is ready; you can now move on to the CBOW model itself.
# The continuous bag-of-words model
The CBOW model is based on a neural network, the architecture of which looks like the figure below, as you'll recall from the lecture.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='cbow_model_architecture.png?1' alt="alternate text" width="width" height="height" style="width:917;height:337;" /> Figure 1 </div>
This part of the notebook will walk you through:
- The two activation functions used in the neural network.
- Forward propagation.
- Cross-entropy loss.
- Backpropagation.
- Gradient descent.
- Extracting the word embedding vectors from the weight matrices once the neural network has been trained.
## Activation functions
Let's start by implementing the activation functions, ReLU and softmax.
### ReLU
ReLU is used to calculate the values of the hidden layer, in the following formulas:
\begin{align}
\mathbf{z_1} &= \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\
\mathbf{h} &= \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\
\end{align}
Let's fix a value for $\mathbf{z_1}$ as a working example.
```
np.random.seed(10)
z_1 = 10*np.random.rand(5, 1)-5
z_1
```
To get the ReLU of this vector, you want all the negative values to become zeros.
First create a copy of this vector.
```
h = z_1.copy()
```
Now determine which of its values are negative.
```
h < 0
```
You can now simply set all of the values which are negative to 0.
```
h[h < 0] = 0
```
And that's it: you have the ReLU of $\mathbf{z_1}$!
```
h
```
**Now implement ReLU as a function.**
```
def relu(z):
# BEGIN your code here
result = z.copy()
result[result < 0] = 0
# END your code here
return result
```
**And check that it's working.**
```
z = np.array([[-1.25459881], [ 4.50714306], [ 2.31993942], [ 0.98658484], [-3.4398136 ]])
relu(z)
```
Expected output:
array([[0. ],
[4.50714306],
[2.31993942],
[0.98658484],
[0. ]])
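An equivalent, more concise implementation (a sketch, not the notebook's required answer) uses `np.maximum`, which avoids the explicit copy-and-mask steps:

```python
import numpy as np

def relu(z):
    # np.maximum broadcasts the scalar 0 against every element and
    # returns a new array, so the input is not mutated.
    return np.maximum(z, 0)

z = np.array([[-1.25459881], [4.50714306], [2.31993942], [0.98658484], [-3.4398136]])
print(relu(z))  # negative entries become 0, positive entries are unchanged
```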
### Softmax
The second activation function that you need is softmax. This function is used to calculate the values of the output layer of the neural network, using the following formulas:
\begin{align}
\mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\
\mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\
\end{align}
To calculate softmax of a vector $\mathbf{z}$, the $i$-th component of the resulting vector is given by:
$$ \textrm{softmax}(\textbf{z})_i = \frac{e^{z_i} }{\sum\limits_{j=1}^{V} e^{z_j} } \tag{5} $$
Let's work through an example.
```
z = np.array([9, 8, 11, 10, 8.5])
z
```
You'll need to calculate the exponentials of each element, both for the numerator and for the denominator.
```
e_z = np.exp(z)
e_z
```
The denominator is equal to the sum of these exponentials.
```
sum_e_z = np.sum(e_z)
sum_e_z
```
And the value of the first element of $\textrm{softmax}(\textbf{z})$ is given by:
```
e_z[0]/sum_e_z
```
This is for one element. You can use numpy's vectorized operations to calculate the values of all the elements of the $\textrm{softmax}(\textbf{z})$ vector in one go.
**Implement the softmax function.**
```
def softmax(z):
# BEGIN your code here
e_z = np.exp(z)
sum_e_z = np.sum(e_z)
return e_z / sum_e_z
# END your code here
```
**Now check that it works.**
```
softmax([9, 8, 11, 10, 8.5])
```
Expected output:
array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223])
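One practical caveat worth knowing: for large inputs, `np.exp` overflows. A numerically stable variant (a common production trick, not required for this notebook) subtracts the maximum before exponentiating:

```python
import numpy as np

def softmax_stable(z):
    # Subtracting max(z) avoids overflow for large inputs; the result is
    # mathematically identical because the common factor e^{-max(z)}
    # cancels between numerator and denominator.
    z = np.asarray(z, dtype=float)
    e_z = np.exp(z - np.max(z))
    return e_z / np.sum(e_z)

print(softmax_stable([9, 8, 11, 10, 8.5]))  # same values as the naive version
print(softmax_stable([1000, 1001]))         # naive np.exp(1000) would overflow
```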
## Dimensions: 1-D arrays vs 2-D column vectors
Before moving on to implement forward propagation, backpropagation, and gradient descent, let's have a look at the dimensions of the vectors you've been handling until now.
Create a vector of length $V$ filled with zeros.
```
x_array = np.zeros(V)
x_array
```
This is a 1-dimensional array, as revealed by the `.shape` property of the array.
```
x_array.shape
```
To perform matrix multiplication in the next steps, you actually need your column vectors to be represented as a matrix with one column. In numpy, this matrix is represented as a 2-dimensional array.
The easiest way to convert a 1D vector to a 2D column matrix is to set its `.shape` property to the number of rows and one column, as shown in the next cell.
```
x_column_vector = x_array.copy()
x_column_vector.shape = (V, 1) # alternatively ... = (x_array.shape[0], 1)
x_column_vector
```
The shape of the resulting "vector" is:
```
x_column_vector.shape
```
So you now have a 5x1 matrix that you can use to perform standard matrix multiplication.
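Two equivalent idioms for the same conversion, shown here as a small standalone sketch, are `reshape` and `np.newaxis`:

```python
import numpy as np

V = 5
x_array = np.zeros(V)

# reshape returns an array with the requested shape; -1 asks numpy to
# infer that dimension from the total number of elements.
x_column_vector = x_array.reshape(-1, 1)
print(x_column_vector.shape)  # (5, 1)

# np.newaxis inserts a new axis of length 1 at the given position.
print(x_array[:, np.newaxis].shape)  # (5, 1)
```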
## Forward propagation
Let's dive into the neural network itself, which is shown below with all the dimensions and formulas you'll need.
<div style="width:image width px; font-size:100%; text-align:center;"><img src='cbow_model_dimensions_single_input.png?2' alt="alternate text" width="width" height="height" style="width:839;height:349;" /> Figure 2 </div>
Set $N$ equal to 3. Remember that $N$ is a hyperparameter of the CBOW model that represents the size of the word embedding vectors, as well as the size of the hidden layer.
```
N = 3
```
### Initialization of the weights and biases
Before you start training the neural network, you need to initialize the weight matrices and bias vectors with random values.
In the assignment you will implement a function to do this yourself using `numpy.random.rand`. In this notebook, we've pre-populated these matrices and vectors for you.
```
W1 = np.array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.26637602, -0.23846886, -0.37770863, -0.11399446, 0.34008124]])
W2 = np.array([[-0.22182064, -0.43008631, 0.13310965],
[ 0.08476603, 0.08123194, 0.1772054 ],
[ 0.1871551 , -0.06107263, -0.1790735 ],
[ 0.07055222, -0.02015138, 0.36107434],
[ 0.33480474, -0.39423389, -0.43959196]])
b1 = np.array([[ 0.09688219],
[ 0.29239497],
[-0.27364426]])
b2 = np.array([[ 0.0352008 ],
[-0.36393384],
[-0.12775555],
[-0.34802326],
[-0.07017815]])
```
**Check that the dimensions of these matrices match those shown in the figure above.**
```
# BEGIN your code here
print(f'V (vocabulary size): {V}')
print(f'N (embedding size / size of the hidden layer): {N}')
print(f'size of W1: {W1.shape} (NxV)')
print(f'size of b1: {b1.shape} (Nx1)')
print(f'size of W2: {W2.shape} (VxN)')
print(f'size of b2: {b2.shape} (Vx1)')
# END your code here
```
### Training example
Run the next cells to get the first training example, made of the vector representing the context words "i am because i", and the target which is the one-hot vector representing the center word "happy".
> You don't need to worry about the Python syntax, but there are some explanations below if you want to know what's happening behind the scenes.
```
training_examples = get_training_example(words, 2, word2Ind, V)
```
> `get_training_example`, which uses the `yield` keyword, is known as a generator function. When called, it builds an iterator, which is a special type of object that you can iterate on (using a `for` loop, for instance) to retrieve the successive values that the function generates.
>
> In this case `get_training_example` `yield`s training examples, and iterating on `training_examples` will return the successive training examples.
```
x_array, y_array = next(training_examples)
```
> `next` is a built-in function (not a keyword) which retrieves the next available value from an iterator. Here, you'll get the very first value, which is the first training example. If you run this cell again, you'll get the next value, and so on, until the iterator runs out of values to return.
>
> In this notebook `next` is used because you will only be performing one iteration of training. In this week's assignment with the full training over several iterations you'll use regular `for` loops with the iterator that supplies the training examples.
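If generators are new to you, here is a tiny standalone illustration of `yield` and `next`, unrelated to the notebook's data:

```python
def count_up_to(n):
    # Each `yield` pauses the function and hands one value to the caller;
    # execution resumes from the same spot on the next request.
    i = 0
    while i < n:
        yield i
        i += 1

gen = count_up_to(3)
print(next(gen))  # 0
print(next(gen))  # 1
print(list(gen))  # [2] -- consuming the remaining values
```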
The vector representing the context words, which will be fed into the neural network, is:
```
x_array
```
The one-hot vector representing the center word to be predicted is:
```
y_array
```
Now convert these vectors into matrices (or 2D arrays) to be able to perform matrix multiplication on the right types of objects, as explained above.
```
x = x_array.copy()
x.shape = (V, 1)
print('x')
print(x)
print()
y = y_array.copy()
y.shape = (V, 1)
print('y')
print(y)
```
### Values of the hidden layer
Now that you have initialized all the variables that you need for forward propagation, you can calculate the values of the hidden layer using the following formulas:
\begin{align}
\mathbf{z_1} = \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\
\mathbf{h} = \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\
\end{align}
First, you can calculate the value of $\mathbf{z_1}$.
```
z1 = np.dot(W1, x) + b1
```
> `np.dot` is numpy's function for matrix multiplication.
As expected you get an $N$ by 1 matrix, or column vector with $N$ elements, where $N$ is equal to the embedding size, which is 3 in this example.
```
z1
```
You can now take the ReLU of $\mathbf{z_1}$ to get $\mathbf{h}$, the vector with the values of the hidden layer.
```
h = relu(z1)
h
```
Applying ReLU means that the negative element of $\mathbf{z_1}$ has been replaced with a zero.
### Values of the output layer
Here are the formulas you need to calculate the values of the output layer, represented by the vector $\mathbf{\hat y}$:
\begin{align}
\mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\
\mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\
\end{align}
**First, calculate $\mathbf{z_2}$.**
```
# BEGIN your code here
z2 = np.dot(W2, h) + b2
# END your code here
z2
```
Expected output:
array([[-0.31973737],
[-0.28125477],
[-0.09838369],
[-0.33512159],
[-0.19919612]])
This is a $V$ by 1 matrix, where $V$ is the size of the vocabulary, which is 5 in this example.
**Now calculate the value of $\mathbf{\hat y}$.**
```
# BEGIN your code here
y_hat = softmax(z2)
# END your code here
y_hat
```
Expected output:
array([[0.18519074],
[0.19245626],
[0.23107446],
[0.18236353],
[0.20891502]])
As you've performed the calculations with random matrices and vectors (apart from the input vector), the output of the neural network is essentially random at this point. The learning process will adjust the weights and biases to match the actual targets better.
**That being said, what word did the neural network predict?**
<details>
<summary>
<font size="3" color="darkgreen"><b>Solution</b></font>
</summary>
<p>The neural network predicted the word "happy": the largest element of $\mathbf{\hat y}$ is the third one, and the third word of the vocabulary is "happy".</p>
<p>Here's how you could implement this in Python:</p>

<p><code>print(Ind2word[np.argmax(y_hat)])</code></p>
</details>
Well done, you've completed the forward propagation phase!
## Cross-entropy loss
Now that you have the network's prediction, you can calculate the cross-entropy loss to determine how accurate the prediction was compared to the actual target.
> Remember that you are working on a single training example, not on a batch of examples, which is why you are using *loss* and not *cost*, which is the generalized form of loss.
First let's recall what the prediction was.
```
y_hat
```
And the actual target value is:
```
y
```
The formula for cross-entropy loss is:
$$ J=-\sum\limits_{k=1}^{V}y_k\log{\hat{y}_k} \tag{6}$$
**Implement the cross-entropy loss function.**
Here are some hints if you're stuck.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hint 1</b></font>
</summary>
<p>To multiply two numpy matrices (such as <code>y</code> and <code>y_hat</code>) element-wise, you can simply use the <code>*</code> operator.</p>
</details>
<details>
<summary>
<font size="3" color="darkgreen"><b>Hint 2</b></font>
</summary>
<p>Once you have a vector equal to the element-wise multiplication of <code>y</code> and <code>y_hat</code>, you can use <code>np.sum</code> to calculate the sum of the elements of this vector.</p>
</details>
```
def cross_entropy_loss(y_predicted, y_actual):
# BEGIN your code here
    loss = np.sum(-np.log(y_predicted) * y_actual)
# END your code here
return loss
```
**Now use this function to calculate the loss with the actual values of $\mathbf{y}$ and $\mathbf{\hat y}$.**
```
cross_entropy_loss(y_hat, y)
```
Expected output:
1.4650152923611106
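Since $\mathbf{y}$ is one-hot, the sum in the loss formula collapses to a single term: the negative log of the predicted probability at the index of the true word. A standalone sketch, re-declaring the vectors from the expected outputs above:

```python
import numpy as np

y_hat = np.array([[0.18519074], [0.19245626], [0.23107446], [0.18236353], [0.20891502]])
y = np.array([[0.], [0.], [1.], [0.], [0.]])

# Because y is one-hot, J = -log(y_hat at the index of the true word).
loss = -np.log(y_hat[np.argmax(y), 0])
print(loss)  # matches the full-sum value above, up to rounding
```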
This value is neither good nor bad, which is expected as the neural network hasn't learned anything yet.
The actual learning will start during the next phase: backpropagation.
## Backpropagation
The formulas that you will implement for backpropagation are the following.
\begin{align}
\frac{\partial J}{\partial \mathbf{W_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}\\
\frac{\partial J}{\partial \mathbf{W_2}} &= (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}\\
\frac{\partial J}{\partial \mathbf{b_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}\\
\frac{\partial J}{\partial \mathbf{b_2}} &= \mathbf{\hat{y}} - \mathbf{y} \tag{10}
\end{align}
> Note: these formulas are slightly simplified compared to the ones in the lecture as you're working on a single training example, whereas the lecture provided the formulas for a batch of examples. In the assignment you'll be implementing the latter.
Let's start with an easy one.
**Calculate the partial derivative of the loss function with respect to $\mathbf{b_2}$, and store the result in `grad_b2`.**
$$\frac{\partial J}{\partial \mathbf{b_2}} = \mathbf{\hat{y}} - \mathbf{y} \tag{10}$$
```
# BEGIN your code here
grad_b2 = y_hat - y
# END your code here
grad_b2
```
Expected output:
array([[ 0.18519074],
[ 0.19245626],
[-0.76892554],
[ 0.18236353],
[ 0.20891502]])
**Next, calculate the partial derivative of the loss function with respect to $\mathbf{W_2}$, and store the result in `grad_W2`.**
$$\frac{\partial J}{\partial \mathbf{W_2}} = (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}$$
> Hint: use `.T` to get a transposed matrix, e.g. `h.T` returns $\mathbf{h^\top}$.
```
# BEGIN your code here
grad_W2 = np.dot(y_hat - y, h.T)
# END your code here
grad_W2
```
Expected output:
array([[ 0.06756476, 0.11798563, 0. ],
[ 0.0702155 , 0.12261452, 0. ],
[-0.28053384, -0.48988499, 0. ],
[ 0.06653328, 0.1161844 , 0. ],
[ 0.07622029, 0.13310045, 0. ]])
**Now calculate the partial derivative with respect to $\mathbf{b_1}$ and store the result in `grad_b1`.**
$$\frac{\partial J}{\partial \mathbf{b_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}$$
```
# BEGIN your code here
grad_b1 = relu(np.dot(W2.T, y_hat - y))
# END your code here
grad_b1
```
Expected output:
array([[0. ],
[0. ],
[0.17045858]])
**Finally, calculate the partial derivative of the loss with respect to $\mathbf{W_1}$, and store it in `grad_W1`.**
$$\frac{\partial J}{\partial \mathbf{W_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}$$
```
# BEGIN your code here
grad_W1 = np.dot(relu(np.dot(W2.T, y_hat - y)), x.T)
# END your code here
grad_W1
```
Expected output:
array([[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0.04261464, 0.04261464, 0. , 0.08522929, 0. ]])
Before moving on to gradient descent, double-check that all the matrices have the expected dimensions.
```
# BEGIN your code here
print(f'V (vocabulary size): {V}')
print(f'N (embedding size / size of the hidden layer): {N}')
print(f'size of grad_W1: {grad_W1.shape} (NxV)')
print(f'size of grad_b1: {grad_b1.shape} (Nx1)')
print(f'size of grad_W2: {grad_W2.shape} (VxN)')
print(f'size of grad_b2: {grad_b2.shape} (Vx1)')
# END your code here
```
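A useful habit when implementing backpropagation is to verify an analytic gradient against finite differences. The sketch below checks formula (10) on toy values (the numbers are illustrative, not the notebook's exact ones):

```python
import numpy as np

def softmax(z):
    e_z = np.exp(z - np.max(z))
    return e_z / np.sum(e_z)

def loss(z2, y):
    # Cross-entropy loss as a function of the pre-softmax logits z2.
    return -np.sum(y * np.log(softmax(z2)))

# Toy stand-ins for the notebook's z2 and one-hot target y.
z2 = np.array([[-0.32], [-0.28], [-0.10], [-0.34], [-0.20]])
y = np.array([[0.], [0.], [1.], [0.], [0.]])

# Analytic gradient dJ/dz2 = y_hat - y, which also equals dJ/db2.
analytic = softmax(z2) - y

# Central finite differences, one coordinate at a time.
eps = 1e-6
numeric = np.zeros_like(z2)
for i in range(z2.shape[0]):
    plus, minus = z2.copy(), z2.copy()
    plus[i] += eps
    minus[i] -= eps
    numeric[i] = (loss(plus, y) - loss(minus, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be very small
```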
## Gradient descent
During the gradient descent phase, you will update the weights and biases by subtracting $\alpha$ times the gradient from the original matrices and vectors, using the following formulas.
\begin{align}
\mathbf{W_1} &:= \mathbf{W_1} - \alpha \frac{\partial J}{\partial \mathbf{W_1}} \tag{11}\\
\mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\
\mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\
\mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\
\end{align}
First, let's set a value for $\alpha$.
```
alpha = 0.03
```
The updated weight matrix $\mathbf{W_1}$ will be:
```
W1_new = W1 - alpha * grad_W1
```
Let's compare the previous and new values of $\mathbf{W_1}$:
```
print('old value of W1:')
print(W1)
print()
print('new value of W1:')
print(W1_new)
```
The difference is very subtle (hint: take a closer look at the last row), which is why it takes a fair amount of iterations to train the neural network until it reaches optimal weights and biases starting from random values.
**Now calculate the new values of $\mathbf{W_2}$ (to be stored in `W2_new`), $\mathbf{b_1}$ (in `b1_new`), and $\mathbf{b_2}$ (in `b2_new`).**
\begin{align}
\mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\
\mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\
\mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\
\end{align}
```
# BEGIN your code here
W2_new = W2 - alpha * grad_W2
b1_new = b1 - alpha * grad_b1
b2_new = b2 - alpha * grad_b2
# END your code here
print('W2_new')
print(W2_new)
print()
print('b1_new')
print(b1_new)
print()
print('b2_new')
print(b2_new)
```
Expected output:
W2_new
[[-0.22384758 -0.43362588 0.13310965]
[ 0.08265956 0.0775535 0.1772054 ]
[ 0.19557112 -0.04637608 -0.1790735 ]
[ 0.06855622 -0.02363691 0.36107434]
[ 0.33251813 -0.3982269 -0.43959196]]
b1_new
[[ 0.09688219]
[ 0.29239497]
[-0.27875802]]
b2_new
[[ 0.02964508]
[-0.36970753]
[-0.10468778]
[-0.35349417]
[-0.0764456 ]]
Congratulations, you have completed one iteration of training using one training example!
You'll need many more iterations to fully train the neural network, and you can optimize the learning process by training on batches of examples, as described in the lecture. You will get to do this during this week's assignment.
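To see how the pieces fit together, here is a complete toy training loop in the spirit of the notebook. It re-declares the corpus, hyperparameters, and helper functions so it runs standalone; the random seed, learning rate, and epoch count are assumptions, not the notebook's exact values.

```python
import numpy as np

# Assumed toy corpus matching the notebook's running example.
words = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
word2Ind = {w: i for i, w in enumerate(sorted(set(words)))}
V, N, C, alpha = len(word2Ind), 3, 2, 0.03

def one_hot(word):
    v = np.zeros((V, 1))
    v[word2Ind[word], 0] = 1.0
    return v

def get_windows(ws, C):
    # Slide a window over the corpus, yielding (context words, center word).
    for i in range(C, len(ws) - C):
        yield ws[i - C:i] + ws[i + 1:i + C + 1], ws[i]

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
W1, W2 = rng.random((N, V)) - 0.5, rng.random((V, N)) - 0.5
b1, b2 = rng.random((N, 1)) - 0.5, rng.random((V, 1)) - 0.5

losses = []
for epoch in range(50):
    total = 0.0
    for context, center in get_windows(words, C):
        x = np.mean([one_hot(w) for w in context], axis=0)
        y = one_hot(center)
        # Forward propagation (formulas 1-4).
        h = relu(W1 @ x + b1)
        y_hat = softmax(W2 @ h + b2)
        total += -np.sum(y * np.log(y_hat))
        # Backpropagation (formulas 7-10), then a gradient-descent step.
        l1 = relu(W2.T @ (y_hat - y))
        W2 -= alpha * (y_hat - y) @ h.T
        b2 -= alpha * (y_hat - y)
        W1 -= alpha * l1 @ x.T
        b1 -= alpha * l1
    losses.append(total)

print(f'loss: {losses[0]:.4f} -> {losses[-1]:.4f}')  # should decrease
```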
## Extracting word embedding vectors
Once you have finished training the neural network, you have three options to get word embedding vectors for the words of your vocabulary, based on the weight matrices $\mathbf{W_1}$ and/or $\mathbf{W_2}$.
### Option 1: extract embedding vectors from $\mathbf{W_1}$
The first option is to take the columns of $\mathbf{W_1}$ as the embedding vectors of the words of the vocabulary, using the same order of the words as for the input and output vectors.
> Note: in this practice notebook the values of the word embedding vectors are meaningless after a single iteration with just one training example, but here's how you would proceed after the training process is complete.
For example $\mathbf{W_1}$ is this matrix:
```
W1
```
The first column, which is a 3-element vector, is the embedding vector of the first word of your vocabulary. The second column is the word embedding vector for the second word, and so on.
The first, second, etc. words are ordered as follows.
```
for i in range(V):
print(Ind2word[i])
```
So the word embedding vectors corresponding to each word are:
```
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W1[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
```
### Option 2: extract embedding vectors from $\mathbf{W_2}$
The second option is to take $\mathbf{W_2}$ transposed, and take its columns as the word embedding vectors just like you did for $\mathbf{W_1}$.
```
W2.T
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W2.T[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
```
### Option 3: extract embedding vectors from $\mathbf{W_1}$ and $\mathbf{W_2}$
The third option, which is the one you will use in this week's assignment, uses the average of $\mathbf{W_1}$ and $\mathbf{W_2^\top}$.
**Calculate the average of $\mathbf{W_1}$ and $\mathbf{W_2^\top}$, and store the result in `W3`.**
```
# BEGIN your code here
W3 = (W1+W2.T)/2
# END your code here
W3
```
Expected output:
array([[ 0.09752647, 0.08665397, -0.02389858, 0.1768788 , 0.3764029 ],
[-0.05136565, 0.15459171, -0.15029611, 0.19580601, -0.31673866],
[ 0.19974284, -0.03063173, -0.27839106, 0.12353994, -0.04975536]])
Extracting the word embedding vectors works just like the two previous options, by taking the columns of the matrix you've just created.
```
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W3[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
```
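Once you have embedding vectors, a common way to compare them is cosine similarity. Below is a minimal sketch with hypothetical vectors (the trained values in this notebook are meaningless after a single iteration):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1 means same direction,
    # 0 orthogonal, -1 opposite direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-dimensional embeddings for two words (illustrative only).
v_happy = np.array([0.20, -0.03, -0.28])
v_learning = np.array([0.38, -0.32, -0.05])

print(cosine_similarity(v_happy, v_happy))  # 1.0 -- identical vectors
print(cosine_similarity(v_happy, v_learning))
```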
You're now ready to take on this week's assignment!
### How this practice relates to and differs from the upcoming graded assignment
- In the assignment, for each iteration of training you will use batches of examples instead of a single example. The formulas for forward propagation and backpropagation will be modified accordingly, and you will use cross-entropy cost instead of cross-entropy loss.
- You will also complete several iterations of training, until you reach an acceptably low cross-entropy cost, at which point you can extract good word embeddings from the weight matrices.
- After extracting the word embedding vectors, you will use principal component analysis (PCA) to visualize the vectors, which will enable you to perform an intrinsic evaluation of the quality of the vectors, as explained in the lecture.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
# Any results you write to the current directory are saved as output.
train = pd.read_csv("../data/dataset/train.csv").fillna("no_comments")
test = pd.read_csv("../data/dataset/test.csv")
train = train[train.label != 'unrelated']
train['spn_1_hash'] = train['title1_zh']
train['spn_2_hash'] = train['title2_zh']
train['label'] = (train['label'] != 'disagreed').astype(int)
train['label'].value_counts()
def same_order(a, b):
return a, b
from collections import defaultdict
graph = defaultdict(set)
pos_edges = set()
neg_edges = set()
edges = set()
def build_graph(row):
node_1 = row['spn_1_hash']
node_2 = row['spn_2_hash']
label = row['label']
graph[node_2].add(node_1) # node_2 must connect to node_1
if label:
# in this case, it's bidirectional
pos_edges.add(same_order(node_1, node_2))
pos_edges.add(same_order(node_2, node_1))
graph[node_1].add(node_2)
else:
neg_edges.add(same_order(node_2, node_1))
edges.add(same_order(node_2, node_1))
n = train[['spn_1_hash', 'spn_2_hash', 'label']].apply(build_graph, axis=1)
pos_augments = set()
neg_augments = set()
def add_pos_edges():
counter = 0
ncc = 0
pos_counter = 0
tricky_pairs = set()
# TRIANGLE CASE, A->B and B->C and A->C, if A=B, will A=C and B=C ?
for src, dst in pos_edges:
src_point_to = graph[src]
dst_point_to = graph[dst]
src_dst_both_point_to = src_point_to.intersection(dst_point_to) # A point to C, and also B point to C
if len(src_dst_both_point_to) == 0:
ncc += 1
for v in src_dst_both_point_to:
if (dst, v) in pos_edges:
if (src, v) in pos_edges:
pos_counter += 1
else:
print("POS-Tricky", src, "|", v)
tricky_pairs.add(same_order(src, v))
counter += 1
print("pos_counter", pos_counter, "counter", counter)
print("Triangle case", pos_counter / counter)
print("NCC", ncc)
def add_neg_edges():
counter = 0
neg_counter = 0
tricky_pairs = set()
for src, dst in neg_edges:
src_point_to = graph[src]
dst_point_to = graph[dst]
src_dst_both_point_to = src_point_to.intersection(dst_point_to)
for v in src_dst_both_point_to:
if (dst, v) in pos_edges:
if (src, v) in neg_edges:
neg_counter += 1
else:
print("NEG-Tricky", src, "|", v)
tricky_pairs.add(same_order(src, v))
counter += 1
print('neg_counter', neg_counter, 'counter', counter)
print("Triangle case", neg_counter / counter)
def augments():
counter = 0
neg_counter = 0
for src, dst in pos_edges:
src_point_to = graph[src]
dst_point_to = graph[dst]
dst_outs = dst_point_to - src_point_to
for v in dst_outs:
if (dst, v) in pos_edges:
pos_augments.add((src, v))
for src, dst in neg_edges:
src_point_to = graph[src]
dst_point_to = graph[dst]
dst_outs = dst_point_to - src_point_to
for v in dst_outs:
if (dst, v) in pos_edges:
neg_augments.add((src, v))
print("Augmented pos cases", len(pos_augments))
print("Augmented neg cases", len(neg_augments))
def add_augmented_links():
for src, dst in pos_augments:
graph[src].add(dst)
pos_edges.add(same_order(src, dst))
edges.add(same_order(src, dst))
for src, dst in neg_augments:
graph[src].add(dst)
neg_edges.add(same_order(src, dst))
edges.add(same_order(src, dst))
i = 0
while True:
i += 1
    print("In iteration", i)
pos_size = len(pos_augments)
neg_size = len(neg_augments)
add_pos_edges()
add_neg_edges()
augments()
add_augmented_links()
if len(pos_augments) == pos_size and len(neg_augments) == neg_size:
print("Finished")
break
test = pd.read_csv("../data/dataset/test.csv")
neg_samples = set([v[0] for v in neg_augments])
def mark_neg(row):
if (row['title2_zh'], row['title1_zh']) in neg_augments:
return 1
return 0
def mark_pos(row):
if (row['title1_zh'], row['title2_zh']) in pos_augments:
return 1
if (row['title2_zh'], row['title1_zh']) in pos_augments:
return 1
return 0
def mark(row):
if (row['title1_zh'], row['title2_zh']) in pos_augments:
return 'agreed'
if (row['title2_zh'], row['title1_zh']) in pos_augments:
return 'agreed'
if (row['title2_zh'], row['title1_zh']) in neg_augments:
return 'disagreed'
return 'failed'
test['mark_neg'] = test.apply(lambda row: mark_neg(row), axis=1)
test['mark_pos'] = test.apply(lambda row: mark_pos(row), axis=1)
test['deal_with_the_devil'] = test.apply(lambda row: mark(row), axis=1)
test['deal_with_the_devil'].value_counts()
test['mark_neg'].sum()
test['mark_pos'].sum()
best_predictions = pd.read_csv('../data/high_ground/final_answer.csv')
```
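The augmentation functions above implement a transitive-propagation rule: if A agrees with B and B agrees with C, then A is assumed to agree with C; if D disagrees with B while B agrees with C, then D is assumed to disagree with C. Here is a self-contained sketch of that rule on toy node names, independent of the dataset:

```python
from collections import defaultdict

# Toy directed "points-to" graph with agree (pos) and disagree (neg) edges.
graph = defaultdict(set)
pos_edges = {('A', 'B'), ('B', 'C')}
neg_edges = {('D', 'B')}
for s, d in pos_edges | neg_edges:
    graph[s].add(d)

pos_augments, neg_augments = set(), set()
for src, dst in pos_edges:
    # Nodes that dst points to but src does not yet point to.
    for v in graph[dst] - graph[src]:
        if (dst, v) in pos_edges:
            pos_augments.add((src, v))  # agreement is transitive
for src, dst in neg_edges:
    for v in graph[dst] - graph[src]:
        if (dst, v) in pos_edges:
            neg_augments.add((src, v))  # disagreement propagates through agreement

print(pos_augments)  # {('A', 'C')}
print(neg_augments)  # {('D', 'C')}
```

Repeating this augmentation until no new edges appear, as the `while` loop above does, computes a fixed point of the propagation rule.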
# Check improvement
```
labeled = test['deal_with_the_devil'] != 'failed'
best_predictions[labeled]['Category'].value_counts()
print("Difference", (best_predictions[labeled]['Category'] != test[labeled]['deal_with_the_devil']).sum())
best_predictions['Category'].value_counts()
best_predictions['Category'].value_counts() / len(best_predictions)
len((test[labeled]['deal_with_the_devil']).values)
best_predictions['Fake'] = test['deal_with_the_devil'].values
def deal_with_the_devil(row):
if row['Fake'] != 'failed' and row['Fake'] != row['Category']:
return row['Fake']
return row['Category']
best_predictions['Category'] = best_predictions[['Category', 'Fake']].apply(lambda row: deal_with_the_devil(row), axis=1)
best_predictions = best_predictions.drop(['Fake'], axis=1)
best_predictions.to_csv("../data/high_ground/final_answer.csv", index=False)
best_predictions
```