# Release 0.3.0 with moving metrics!
> A new release of runpandas comes with new features and improved docs!
- toc: false
- badges: true
- comments: true
- author: Marcel Caraciolo
- categories: [general, jupyter, releases]
- image: images/speed.png
> The current state of the project is `early beta`, which means that features can be added, removed, or changed in backwards-incompatible ways.
We are very excited to announce [RunPandas 0.3](https://pypi.org/project/runpandas/). This release comes with new features and fixes; let's highlight them:
- Support for moving metrics, with the capability of detecting periods of inactivity.
- Support for computing general running statistics such as total time elapsed and moving time elapsed.
- Support for imputed statistics: speed in m/s, total distance, and distance per position.
- Added a [Zenodo](https://zenodo.org/) DOI badge.
## What is Runpandas?
Runpandas is a Python package based on the ``pandas`` data analysis library that makes it easier to analyze data from running sessions stored in tracking files from cellphones, GPS smartwatches, or social sports applications such as Strava, MapMyRun, NikeRunClub, etc. It is designed to enable reading, transforming, and computing running metrics from several tracking files and apps.
## Main Features
### Support for calculated running metrics: total elapsed time, speed and total distance
The `Activity` dataframe now contains special properties that present some statistics from the workout, such as elapsed time, speed, and the distance of the workout in meters.
```
#Disable INFO Logging for a better visualization
import logging
logging.getLogger().setLevel(logging.CRITICAL)
# !pip install runpandas
import runpandas as rpd
activity = rpd.read_file('./data/sample.tcx')
```
The total elapsed time is the duration from the moment you hit start on your device until the moment you finish the activity. The total distance is the total number of meters run by the athlete in the activity. The speed is measured in meters per second and returns a ``runpandas.MeasureSeries.Speed`` series with the ratio of the distance traveled per record to the number of seconds taken to run it.
Occasionally, some observations such as speed and distance must be calculated from the data available in a given activity. Runpandas provides special accessors (`runpandas.acessors`) that compute some of these metrics. We will compute the `speed` and the `distance per position` observations using the latitude and longitude of each record, calculating the haversine distance in meters and the speed in meters per second.
```
#total time elapsed for the activity
print(activity.ellapsed_time)
#distance of workout in meters
print(activity.distance)
#compute the distance using haversine formula between two consecutive latitude, longitudes observations.
activity['distpos'] = activity.compute.distance()
activity['distpos'].head()
#compute the speed using the distance per position and the time recorded in seconds to run it.
activity['speed'] = activity.compute.speed(from_distances=True)
activity['speed'].head()
```
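For illustration, here is how the haversine distance and the derived speed can be sketched in plain pandas/NumPy. This is a minimal approximation of the idea, assuming `lat`/`lon` columns in degrees and a datetime index; it is not runpandas' actual implementation:

```python
import numpy as np
import pandas as pd

def haversine_m(lat, lon):
    """Great-circle distance in meters between consecutive (lat, lon) records."""
    r = 6371000.0  # mean Earth radius in meters
    phi, lam = np.radians(lat), np.radians(lon)
    a = (np.sin(phi.diff() / 2) ** 2
         + np.cos(phi) * np.cos(phi.shift()) * np.sin(lam.diff() / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

idx = pd.to_datetime(["2021-01-01 10:00:00", "2021-01-01 10:00:05"])
records = pd.DataFrame({"lat": [0.0, 0.0], "lon": [0.0, 0.001]}, index=idx)
records["distpos"] = haversine_m(records["lat"], records["lon"])
# Speed per record: distance traveled divided by seconds since the previous record.
records["speed"] = records["distpos"] / records.index.to_series().diff().dt.total_seconds()
```

Here 0.001° of longitude at the equator is roughly 111 meters, so the derived speed over the 5-second gap is about 22 m/s.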
Runpandas also provides special attributes on the ``runpandas.MeasureSeries`` that compute transformations such as speed conversion from m/s to km/h.
```
#kph property that converts m/s to km/h.
activity['speed'].kph
```
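The conversion itself is just a constant factor: 1 m/s equals 3.6 km/h (3600 seconds per hour divided by 1000 meters per kilometer). A plain-pandas sketch of what the accessor computes:

```python
import pandas as pd

speed_ms = pd.Series([2.5, 3.0, 4.2])  # speeds in m/s
speed_kph = speed_ms * 3.6             # 3600 s/h / 1000 m/km = 3.6
```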
### Support for detecting periods of inactivity (moving time)
Advanced tracking devices can estimate the time that the runner was actually active: they calculate the moving time based on the GPS locations, distance, and speed of the activity. There are also cases where the athlete uses the pause button to deliberately pause the activity for any reason (stoplights, active rests, bathroom stops, or even stopping for photos).
Runpandas attempts to calculate the moving time from the metrics available in the activity by detecting all the periods of inactivity. The detection is based on the speed per record (computed from the recorded distance) falling below a specified threshold. It is a powerful metric that lets runners see their real performance, removing any bias related to stopped periods. This metric is also quite popular on tracking platforms such as Garmin and Strava.
With the new auxiliary dataframe method ``Activity.only_moving``, runpandas detects the periods of inactivity. It returns a ``runpandas.Activity`` dataframe with a special column named ``moving``, indexed by the Activity's TimeIndex: a ``pandas.Series`` of booleans indicating the stopped periods. Boolean indexing then helps build quick filters to ignore any observation the algorithm considers inactivity.
```
activity_only_moving = activity.only_moving()
print(activity_only_moving['moving'].head())
```
Now we can compute the stopped time and the moving time.
```
print('The stopped period:', activity_only_moving[activity_only_moving['moving'] == False].index.sum())
print('The moving time:', activity_only_moving.moving_time)
```
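The detection itself boils down to flagging records whose speed falls below a threshold and summing the time deltas on each side. A minimal sketch in plain pandas (the 0.8 m/s threshold here is an arbitrary illustrative value, not runpandas' default):

```python
import pandas as pd

idx = pd.to_datetime(["10:00:00", "10:00:05", "10:00:10", "10:00:15"])
speed = pd.Series([2.5, 0.1, 0.0, 3.0], index=idx)  # m/s per record

moving = speed > 0.8  # boolean vector, analogous to the 'moving' column
deltas = idx.to_series().diff().dt.total_seconds()  # seconds between records

moving_time = deltas[moving].sum()    # time spent on records flagged as moving
stopped_time = deltas[~moving].sum()  # time spent on records below the threshold
```

With these sample values, 5 of the 15 recorded seconds count as moving and 10 as stopped.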
### What is coming next?
We will add several running metrics and statistics to our activities and measure series in order to give the user deeper details about their running activities. This will include time in heart rate zones, average speed, personal best records per distance, and more!
### Thanks
We are constantly developing Runpandas, improving its existing features and adding new ones. We will be glad to hear what you like or don't like and what features you wish to see in upcoming releases. Please feel free to contact us.
# Plot average key rank against $p$-value
```
import math
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from os import path
from matplotlib.patheffects import withStroke
from matplotlib.pyplot import text, locator_params
from matplotlib.ticker import LinearLocator, LogLocator, MaxNLocator, AutoLocator, FixedLocator
from src.pollution.tools import file_suffix
from src.tools.plotter import store_sns, init_plots, TVLA_PALETTE
from src.trace_set.database import Database
from src.trace_set.pollution import PollutionType
init_plots()
SCA_PALETTE = sns.light_palette(sns.color_palette()[2], n_colors=5)
DLLA_PALETTE = sns.light_palette(sns.color_palette()[0], n_colors=5)
methods = {
    "sca_hw": (r"Profiled SCA ($\overline{kr}$)", SCA_PALETTE[3], "-"),
    "dlla9": (r"DL-LA 9 class ($p$)", DLLA_PALETTE[3], "-"),
    "dlla2": (r"Wegener DL-LA ($p$)", DLLA_PALETTE[2], "--"),
}
THRESHOLD_COLOR = "#FF000080"
Y_TICKS = 5
def fetch_results(db: Database, pollution: PollutionType, num_traces: str, dir_name = ""):
suffix = ""
if num_traces is not None:
suffix = f"_{num_traces}"
file_name = path.join(dir_name, f"results/{db.name}{suffix}.csv")
df = pd.read_csv(file_name, sep=";")
df = df[df.pollution == pollution.name].drop(columns=[df.pollution.name])
gdf = df.groupby(df.method)
return gdf, df
DB = Database.ascad
POLLUTION_TYPE = None # PollutionType.gauss
POLLUTION_PARAM = 5
POLL_TITLE = ""
if POLLUTION_TYPE:
POLL_LOOKUP = {
PollutionType.gauss: f", Gaussian noise ({POLLUTION_PARAM})"
}
POLL_TITLE = POLL_LOOKUP[POLLUTION_TYPE]
FILE_SUFFIX = file_suffix(POLLUTION_TYPE, POLLUTION_PARAM)
TITLE = f"TVLA confidence values against SCA guessing entropy\nASCAD - Masked{POLL_TITLE}"
HIGH_PLOT = True
def plot_p_value():
    df = pd.read_csv("../3-performance/gradients/tvla-p-gradient.csv")
df = df.drop(columns=[df.columns[0]])
df.columns = [f"{c} ($p$)" for c in df.columns]
num_traces = max(df.index)
g = sns.lineplot(data=df, palette=[TVLA_PALETTE[4], TVLA_PALETTE[3], TVLA_PALETTE[2]])
g.set(yscale="log",
ylabel="Confidence value ($p$)",
xlabel="Number of attack traces",
title=TITLE,
ylim=(10 ** 0, 10 ** -25), xlim=(0, num_traces + 1))
g.yaxis.set_major_locator(FixedLocator(10. ** (-np.arange(0, 30, 5))))
return g, num_traces
def plot_kr():
df = pd.read_csv(f"results/sca-ge-{DB.name}{FILE_SUFFIX}.csv")
df = df.drop(columns=[df.columns[0]])
    df.columns = [rf"{c} ($\overline{{kr}}$)" for c in df.columns]
num_traces = max(df.index)
g = sns.lineplot(data=df, palette=[SCA_PALETTE[3]])
    if HIGH_PLOT:
        g.set(xlim=(0, 40000), ylim=(0, 150), ylabel=r"Average key rank ($\overline{kr}$)")
        g.yaxis.set_major_locator(FixedLocator(np.arange(0, 151, 30)))
    else:
        g.set(xlim=(0, 40000), ylim=(0, 50), ylabel=r"Average key rank ($\overline{kr}$)")
        g.yaxis.set_major_locator(FixedLocator(np.arange(0, 51, 10)))
return g
def plot_threshold(g_p, g_kr, num_traces, p_thresh=10**-5):
# Compute positioning
max_kr = g_kr.get_ylim()[1]
lin_max_p = abs(math.log(g_p.get_ylim()[1], 10))
lin_p_thresh = abs(math.log(p_thresh, 10))
line_loc = lin_p_thresh / lin_max_p * max_kr
# Plot threshold
t_line = [line_loc] * num_traces
sns.lineplot(data={"Threshold ($p$)": t_line}, palette=[THRESHOLD_COLOR], dashes=[(3, 1)])
# Threshold text positioning
if HIGH_PLOT:
up_margin = 2.5
down_margin = -1.5
else:
up_margin = 1
down_margin = -.5
# Display threshold text
t_up = text(0, line_loc + up_margin, '$\\uparrow$leakage', color=THRESHOLD_COLOR, size="small")
t_down = text(0, line_loc + down_margin, '$\\downarrow$no leakage', va='top', color=THRESHOLD_COLOR, size="small")
# Threshold text markup
stroke = [withStroke(linewidth=2, foreground='w')]
t_up.set_path_effects(stroke)
t_down.set_path_effects(stroke)
KR_THRESH = 10
def plot():
g, num_traces = plot_p_value()
g.get_legend().remove()
g2 = plt.twinx()
plot_kr()
plot_threshold(g, g2, num_traces)
handles1, labels1 = g.get_legend_handles_labels()
handles2, labels2 = g2.get_legend_handles_labels()
g2.legend(handles1 + handles2, labels1 + labels2, loc="upper right")
    store_sns(g, "tvla-vs-sca")
plot()
```
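The threshold positioning in `plot_threshold` maps the p-value threshold from the logarithmic p-axis onto the linear key-rank axis. With the axis limits used above (p down to 10^-25, key rank up to 150 in the high plot), a 10^-5 threshold lands at 5/25 × 150 = 30. A stripped-down version of that arithmetic:

```python
import math

def threshold_line_loc(p_thresh, p_axis_min, kr_axis_max):
    """Map a p-value threshold from a log-scale axis onto a linear key-rank axis."""
    lin_max_p = abs(math.log10(p_axis_min))     # log-depth of the p-axis
    lin_p_thresh = abs(math.log10(p_thresh))    # log-depth of the threshold
    return lin_p_thresh / lin_max_p * kr_axis_max

print(threshold_line_loc(1e-5, 1e-25, 150))  # 30.0
```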
```
import plotly
import plotly.graph_objs as go
import ipywidgets as widgets
import pandas as pd
df = pd.read_csv("../data/processed/part_of_speech_total.csv")
artista = {
'nach': "Nach",
'residente': "Residente",
}
df['artista'] = df['artista'].map(artista)
df.head()
df['pos'].unique()
condition_pos = ((df['pos'] == 'sustantivo') |
(df['pos'] == 'verbo') |
(df['pos'] == 'nombre propio') |
(df['pos'] == 'adjetivo') |
(df['pos'] == 'pronombre')
)
df = df[condition_pos].copy()
artist_data = df
pos_top = ['sustantivo', 'verbo', 'nombre propio', 'adjetivo', 'pronombre']
colors = ["#074a7e", "#dc0d12", "#e4a100", "#02a3cd", "#dc0d7a",]
angle = 90/5
theta = [angle*0.5,
angle*1.5,
angle*2.5,
angle*3.5,
angle*4.5]
fig_artist = go.FigureWidget(data = [
go.Barpolar(theta = [angle*4], r = [artist_data[artist_data['pos'] == pos_top[4]]['cuenta'].iloc[0]], name = pos_top[4], width = 18, marker = dict(color = colors[0])),
go.Barpolar(theta = [angle*3], r = [artist_data[artist_data['pos'] == pos_top[3]]['cuenta'].iloc[0]], name = pos_top[3], width = 18, marker = dict(color = colors[1])),
go.Barpolar(theta = [angle*2], r = [artist_data[artist_data['pos'] == pos_top[2]]['cuenta'].iloc[0]], name = pos_top[2], width = 18, marker = dict(color = colors[2])),
go.Barpolar(theta = [angle*1], r = [artist_data[artist_data['pos'] == pos_top[1]]['cuenta'].iloc[0]], name = pos_top[1], width = 18, marker = dict(color = colors[3])),
go.Barpolar(theta = [angle*0], r = [artist_data[artist_data['pos'] == pos_top[0]]['cuenta'].iloc[0]], name = pos_top[0], width = 18, marker = dict(color = colors[4])),
go.Barpolar(theta = [angle*4 +180], r = [artist_data[artist_data['pos'] == pos_top[4]]['cuenta'].iloc[1]], name = pos_top[4], width = 18, marker = dict(color = colors[0])),
go.Barpolar(theta = [angle*3 +180], r = [artist_data[artist_data['pos'] == pos_top[3]]['cuenta'].iloc[1]], name = pos_top[3], width = 18, marker = dict(color = colors[1])),
go.Barpolar(theta = [angle*2 +180], r = [artist_data[artist_data['pos'] == pos_top[2]]['cuenta'].iloc[1]], name = pos_top[2], width = 18, marker = dict(color = colors[2])),
go.Barpolar(theta = [angle*1 +180], r = [artist_data[artist_data['pos'] == pos_top[1]]['cuenta'].iloc[1]], name = pos_top[1], width = 18, marker = dict(color = colors[3])),
go.Barpolar(theta = [angle*0 +180], r = [artist_data[artist_data['pos'] == pos_top[0]]['cuenta'].iloc[1]], name = pos_top[0], width = 18, marker = dict(color = colors[4])),
]
)
# angular axis
fig_artist.layout.polar.angularaxis.showline = False
fig_artist.layout.polar.angularaxis.showgrid = False
fig_artist.layout.polar.angularaxis.rotation = 9
fig_artist.layout.polar.angularaxis.showticklabels = False
fig_artist.layout.polar.angularaxis.ticklen = 0
# radial axis
fig_artist.layout.polar.radialaxis.showgrid = False
fig_artist.layout.polar.radialaxis.showline = True
fig_artist.layout.polar.radialaxis.ticklen = 10
fig_artist.layout.polar.radialaxis.tickformat = 's'
fig_artist.layout.polar.radialaxis.nticks = 3
fig_artist.layout.polar.radialaxis.tickfont.size = 28
fig_artist.layout.polar.radialaxis.tickangle = 0
fig_artist.layout.polar.radialaxis.range = [1, 101]
# hole
fig_artist.layout.polar.hole = 0.5
# size
fig_artist.layout.width = 1200
fig_artist.layout.height = 800
# legend
fig_artist.layout.legend.x = .5
fig_artist.layout.legend.y = .95
fig_artist.layout.legend.xanchor = 'center'
fig_artist.layout.legend.font.size = 24
fig_artist.layout.legend.orientation = 'h'
# details
fig_artist.layout.font.family = "Arial"
fig_artist
```
## collections_email
```
import pandas as pd
import numpy as np
np.random.seed(24)
n = 5000
email = np.random.binomial(1, 0.5, n)
credit_limit = np.random.gamma(6, 200, n)
risk_score = np.random.beta(credit_limit, credit_limit.mean(), n)
opened = np.random.normal(5 + 0.001*credit_limit - 4*risk_score, 2)
opened = (opened > 4).astype(float) * email
agreement = np.random.normal(30 +(-0.003*credit_limit - 10*risk_score), 7) * 2 * opened
agreement = (agreement > 40).astype(float)
payments = (np.random.normal(500 + 0.16*credit_limit - 40*risk_score + 11*agreement + email, 75).astype(int) // 10) * 10
data = pd.DataFrame(dict(payments=payments,
email=email,
opened=opened,
agreement=agreement,
credit_limit=credit_limit,
risk_score=risk_score))
data.to_csv("collections_email.csv", index=False)
```
## hospital_treatment
```
import pandas as pd
import numpy as np
np.random.seed(24)
n = 80
hospital = np.random.binomial(1, 0.5, n)
treatment = np.where(hospital.astype(bool),
np.random.binomial(1, 0.9, n),
np.random.binomial(1, 0.1, n))
severity = np.where(hospital.astype(bool),
np.random.normal(20, 5, n),
np.random.normal(10, 5, n))
days = np.random.normal(15 + -5*treatment + 2*severity, 7).astype(int)
hospital = pd.DataFrame(dict(hospital=hospital,
treatment=treatment,
severity=severity,
days=days))
hospital.to_csv("hospital_treatment.csv", index=False)
```
## app engagement push
```
import pandas as pd
import numpy as np
np.random.seed(24)
n = 10000
push_assigned = np.random.binomial(1, 0.5, n)
income = np.random.gamma(6, 200, n)
push_delivered = np.random.normal(5 + 0.3+income, 500)
push_delivered = ((push_delivered > 800) & (push_assigned == 1)).astype(int)
in_app_purchase = (np.random.normal(100 + 20*push_delivered + 0.5*income, 75).astype(int) // 10)
data = pd.DataFrame(dict(in_app_purchase=in_app_purchase,
push_assigned=push_assigned,
push_delivered=push_delivered))
data.to_csv("app_engagement_push.csv", index=False)
```
## Drug Impact
```
import numpy as np
import pandas as pd
def make_confounded_data(N):
def get_severity(df):
return ((np.random.beta(1, 3, size=df.shape[0]) * (df["age"] < 30)) +
(np.random.beta(3, 1.5, size=df.shape[0]) * (df["age"] >= 30)))
def get_treatment(df):
return ((.33 * df["sex"] +
1.5 * df["severity"] + df["severity"] ** 2 +
0.15 * np.random.normal(size=df.shape[0])) > 1.5).astype(int)
def get_recovery(df):
return ((2 +
0.5 * df["sex"] +
0.03 * df["age"] + 0.03 * ((df["age"] * 0.1) ** 2) +
df["severity"] + np.log(df["severity"]) +
df["sex"] * df["severity"] -
df["medication"]) * 10).astype(int)
np.random.seed(1111)
sexes = np.random.randint(0, 2, size=N)
ages = np.random.gamma(8, scale=4, size=N)
meds = np.random.beta(1, 1, size=N)
    # data with random treatment assignment
df_rnd = pd.DataFrame(dict(sex=sexes, age=ages, medication=meds))
df_rnd['severity'] = get_severity(df_rnd)
df_rnd['recovery'] = get_recovery(df_rnd)
features = ['sex', 'age', 'severity', 'medication', 'recovery']
df_rnd = df_rnd[features] # to enforce column order
    # observational data
df_obs = df_rnd.copy()
df_obs['medication'] = get_treatment(df_obs)
df_obs['recovery'] = get_recovery(df_obs)
    # counterfactual data
df_ctf = df_obs.copy()
df_ctf['medication'] = ((df_ctf['medication'] == 1) ^ 1).astype(float)
df_ctf['recovery'] = get_recovery(df_ctf)
return df_rnd, df_obs, df_ctf
np.random.seed(1234)
df_rnd, df_obs, df_ctf = make_confounded_data(20000)
df_obs.to_csv("medicine_impact_recovery.csv", index=False)
```
## Billboard Mkt
```
import pandas as pd
import numpy as np
np.random.seed(123)
POAMay = np.random.gamma(7,10, 500) * np.random.binomial(1, .7, 500)
POAJul = np.random.gamma(7,15, 800) * np.random.binomial(1, .8, 800)
FLMay = np.random.gamma(10,20, 1300) * np.random.binomial(1, .85, 1300)
FLJul = np.random.gamma(11,21, 2000) * np.random.binomial(1, .9, 2000)
data = pd.concat([
pd.DataFrame(dict(deposits = POAMay.astype(int), poa=1, jul=0)),
pd.DataFrame(dict(deposits = POAJul.astype(int), poa=1, jul=1)),
pd.DataFrame(dict(deposits = FLMay.astype(int), poa=0, jul=0)),
pd.DataFrame(dict(deposits = FLJul.astype(int), poa=0, jul=1))
])
data.to_csv("billboard_impact.csv", index=False)
```
## Customer Lifecycle
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from toolz import merge
from sklearn.preprocessing import LabelEncoder
np.random.seed(12)
n = 10000
t = 30
age = 18 + np.random.poisson(10, n)
income = 500+np.random.exponential(2000, size=n).astype(int)
region = np.random.choice(np.random.lognormal(4, size=50), size=n)
freq = np.random.lognormal((1 + age/(18+10)).astype(int))
churn = np.random.poisson((income-500)/2000 + 22, n)
ones = np.ones((n, t))
alive = (np.cumsum(ones, axis=1) <= churn.reshape(n, 1)).astype(int)
buy = np.random.binomial(1, ((1/(freq+1)).reshape(n, 1) * ones))
cacq = -1*abs(np.random.normal(region, 2, size=n).astype(int))
transactions = np.random.lognormal(((income.mean() - 500) / 1000), size=(n, t)).astype(int) * buy * alive
data = pd.DataFrame(merge({"customer_id": range(n), "cacq":cacq},
{f"day_{day}": trans
for day, trans in enumerate(transactions.T)}))
encoded = {value: index for index, value in
           enumerate(np.random.permutation(np.unique(region)))}
customer_features = pd.DataFrame(dict(customer_id=range(n),
                                      region=region,
                                      income=income,
                                      age=age)).replace({"region": encoded}).astype(int)
print((data.drop(columns=["customer_id"]).sum(axis=1) > 0).mean()) # proportion of profitable customers
print((alive).mean(axis=0)) # alive customer per days
data.to_csv("./causal-inference-for-the-brave-and-true/data/customer_transactions.csv", index=False)
customer_features.to_csv("./causal-inference-for-the-brave-and-true/data/customer_features.csv", index=False)
```
## Price and Sales
```
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
np.random.seed(5)
def price_elast(price, temp, weekday, cost):
return -4 + 0.2*price + 0.05*temp + 2*np.isin(weekday, [1,7]) + 0.3 * cost
def sales(price, temp, weekday, cost):
elast = -abs(price_elast(price, temp, weekday, cost))
output = np.random.normal(200 + 20*np.isin(weekday, [1,7]) + 1.3 * temp +
5*elast * price, 5).astype(int)
return output
n_rnd = 5000
temp = np.random.normal(24, 4, n_rnd).round(1)
weekday = np.random.choice(list(range(1, 8)), n_rnd)
cost = np.random.choice([0.3, 0.5, 1.0, 1.5], n_rnd)
price_rnd = np.random.choice(list(range(3, 11)), n_rnd)
price_df_rnd = pd.DataFrame(dict(temp=temp, weekday=weekday, cost=cost,
price=price_rnd, sales=sales(price_rnd, temp, weekday, cost)))
n = 10000
temp = np.random.normal(24, 4, n).round(1)
weekday = np.random.choice(list(range(1, 8)), n)
cost = np.random.choice([0.3, 0.5, 1.0, 1.5], n)
price = np.random.normal(5 + cost + np.isin(weekday, [1,7])).round(1)
price_df = pd.DataFrame(dict(temp=temp, weekday=weekday, cost=cost,
price=price, sales=sales(price, temp, weekday, cost)))
price_df_rnd.to_csv("./causal-inference-for-the-brave-and-true/data/ice_cream_sales_rnd.csv", index=False)
price_df.to_csv("./causal-inference-for-the-brave-and-true/data/ice_cream_sales.csv", index=False)
```
# Evaluation of the new oversampler on the standard database foldings
In this notebook we give an example of evaluating a new oversampler on the standard 104 imbalanced datasets. The evaluation is highly similar to that illustrated in the notebook ```002_evaluation_multiple_datasets```, with the difference that in this case some predefined dataset foldings are used to make the results comparable to those reported on the ranking page of the documentation. The database foldings need to be downloaded from the GitHub repository and placed in the 'smote_foldings' directory.
```
import os, pickle, itertools
# import classifiers
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from smote_variants import MLPClassifierWrapper
# import SMOTE variants
import smote_variants as sv
# itertools to derive imbalanced databases
import imbalanced_databases as imbd
# setting global parameters
folding_path= os.path.join(os.path.expanduser('~'), 'smote_foldings')
if not os.path.exists(folding_path):
os.makedirs(folding_path)
max_sampler_parameter_combinations= 35
n_jobs= 5
# instantiate classifiers
# Support Vector Classifiers with 6 parameter combinations
sv_classifiers= [CalibratedClassifierCV(LinearSVC(C=1.0, penalty='l1', loss= 'squared_hinge', dual= False)),
CalibratedClassifierCV(LinearSVC(C=1.0, penalty='l2', loss= 'hinge', dual= True)),
CalibratedClassifierCV(LinearSVC(C=1.0, penalty='l2', loss= 'squared_hinge', dual= False)),
CalibratedClassifierCV(LinearSVC(C=10.0, penalty='l1', loss= 'squared_hinge', dual= False)),
CalibratedClassifierCV(LinearSVC(C=10.0, penalty='l2', loss= 'hinge', dual= True)),
CalibratedClassifierCV(LinearSVC(C=10.0, penalty='l2', loss= 'squared_hinge', dual= False))]
# Multilayer Perceptron Classifiers with 6 parameter combinations
mlp_classifiers= []
for x in itertools.product(['relu', 'logistic'], [1.0, 0.5, 0.1]):
mlp_classifiers.append(MLPClassifierWrapper(activation= x[0], hidden_layer_fraction= x[1]))
# Nearest Neighbor Classifiers with 18 parameter combinations
nn_classifiers= []
for x in itertools.product([3, 5, 7], ['uniform', 'distance'], [1, 2, 3]):
nn_classifiers.append(KNeighborsClassifier(n_neighbors= x[0], weights= x[1], p= x[2]))
# Decision Tree Classifiers with 6 parameter combinations
dt_classifiers= []
for x in itertools.product(['gini', 'entropy'], [None, 3, 5]):
dt_classifiers.append(DecisionTreeClassifier(criterion= x[0], max_depth= x[1]))
classifiers= []
classifiers.extend(sv_classifiers)
classifiers.extend(mlp_classifiers)
classifiers.extend(nn_classifiers)
classifiers.extend(dt_classifiers)
# querying datasets for the evaluation
datasets= imbd.get_data_loaders('study')
# executing the evaluation
results= sv.evaluate_oversamplers(datasets,
samplers= sv.get_all_oversamplers(),
classifiers= classifiers,
cache_path= folding_path,
n_jobs= n_jobs,
remove_sampling_cache= True,
max_samp_par_comb= max_sampler_parameter_combinations)
# The evaluation results are available in the results dataframe for further analysis.
print(results)
```
```
import pandas as pd
#pd.set_option('display.max_rows', None)
all_votes_df = pd.read_json('all_votes_df.json')
#all_votes_df.head()
```
# Proposals DF
```
proposals_df = pd.read_json('proposals_df.json')
proposals_df.head()
import matplotlib.pyplot as plt
plt.scatter(proposals_df['scores_total'],proposals_df['percentage_for'])
#proposals_df.sort_values(by='votes',ascending=False)
# Exclude unanimous proposals (100% or 0% in favor)
proposals_df = proposals_df[(proposals_df['percentage_for'] != 1) & (proposals_df['percentage_for'] != 0)]
#proposals_df.head()
merged_df = pd.merge(all_votes_df,proposals_df,left_on='proposal',right_on='proposal_id',how='inner')
#merged_df['choice'].value_counts()
#merged_df.head()
merged_df['coalition_vp'] = merged_df.apply(lambda row: row.scores[row.choice-1],axis=1)
merged_df['coalition_contribution'] = merged_df['vp']/merged_df['coalition_vp']
merged_df['is_winning_coalition'] = merged_df['choice']==merged_df['result']
merged_df
import matplotlib.pyplot as plt
winning_coalition_df = merged_df[merged_df['is_winning_coalition']].copy()
winning_coalition_df.sort_values(by='coalition_contribution',ascending=False).to_csv('jank.csv')
plt.hist(merged_df[merged_df['is_winning_coalition']]['coalition_contribution'], )#density=True)
plt.show()
```
# Kingmaker
### If you voted the other way, would the result change
```
def winning_threshold(choice,category):
if category == 'DG2':
if choice == 1:
return .6
else:
return .4
else:
return .5
def is_deciding_vote(row):
'''
Assuming that a vote is ALREADY on the winning side,
A vote is a considered a deciding vote if the following statement holds:
'If the vote had switched sides, then it would have still been on the winning side.'
'''
vp = row['vp']
choice = row['choice']
category = row['category']
scores_total = row['scores_total']
coalition_vp = row['coalition_vp']
reached_quorum = row['reached_quorum']
if not reached_quorum:
return "no quorum"
else:
#old_choice = choice
#old_my_team_score= coalition_vp
#old_enemy_team_score = scores_total-coalition_vp
new_choice = (choice) % 2 + 1 # 1 goes to 2, 2 goes to 1
new_coalition_vp = scores_total-(coalition_vp-vp)
new_percentage = new_coalition_vp/scores_total
new_wins = new_percentage > winning_threshold(new_choice,category)
return str(new_wins)
winning_coalition_df['is_deciding_vote'] = winning_coalition_df.apply(is_deciding_vote,axis=1)
deciding_vote_df = winning_coalition_df.sort_values(by='coalition_contribution',ascending=False).groupby(['voter','is_deciding_vote']).size()\
    .reset_index(name='counts')\
    .pivot_table(index='voter',columns='is_deciding_vote',values='counts',fill_value=0)\
    .sort_values(by='True',ascending=False)
deciding_vote_df
```
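To sanity-check the switching logic with made-up numbers: suppose a proposal has 1,000 total VP, the winning coalition holds 600 VP, and the regular 50% threshold applies. A voter who contributed 250 VP is deciding (after switching, the new side holds 650/1000), while one who contributed 50 VP is not (450/1000). A stripped-down sketch of the same arithmetic:

```python
def is_deciding(vp, coalition_vp, scores_total, threshold=0.5):
    """A winning-side vote is deciding if, after switching sides, the voter's
    new coalition would still clear the winning threshold."""
    new_coalition_vp = scores_total - (coalition_vp - vp)
    return new_coalition_vp / scores_total > threshold

print(is_deciding(vp=250, coalition_vp=600, scores_total=1000))  # True
print(is_deciding(vp=50, coalition_vp=600, scores_total=1000))   # False
```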
# Proposals checking for quorum
```
proposals_df = pd.read_json('proposals_df.json')
proposals_df['created'] = pd.to_datetime(proposals_df['created']/1000,unit='s') # Convert from Unix time (in milliseconds) to datetime
proposals_df['reached_quorum'] = proposals_df['scores_total'] > 100000
proposals_df['reached_quorum'].value_counts()
winning_coalition_df
winning_coalition_df.groupby('voter').agg(['mean','min','max'])['vp']
winning_coalition_df.voter.value_counts()
```
<h1 align="center">Data Augmentation for Deep Learning</h1>
This notebook illustrates the use of SimpleITK to perform data augmentation for deep learning. Note that the code is written so that the relevant functions work for both 2D and 3D images without modification.
Data augmentation is a model-based approach for enlarging your training set. The problem being addressed is that the original dataset is not sufficiently representative of the general population of images. As a consequence, if we only train on the original dataset the resulting network will not generalize well to the population (overfitting).
Using a model of the variations found in the general population and the existing dataset, we generate additional images in the hope of capturing the population variability. Note that if the model you use is incorrect you can cause harm: you are generating observations that do not occur in the general population and optimizing a function to fit them.
```
import SimpleITK as sitk
import numpy as np
%matplotlib notebook
import gui
#utility method that either downloads data from the Girder repository or
#if already downloaded returns the file name for reading from disk (cached data)
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
OUTPUT_DIR = 'Output'
```
# Before we start, a word of caution
**Whenever you sample there is potential for aliasing (Nyquist theorem).**
In many cases, data prepared for use with a deep learning network is resampled to a fixed size. When we perform data augmentation via spatial transformations we also perform resampling.
Admittedly, the example below is exaggerated to illustrate the point, but it serves as a reminder that you may want to consider smoothing your images prior to resampling.
The effects of aliasing also play a role in network performance stability:
A. Azulay, Y. Weiss, "Why do deep convolutional networks generalize so poorly to small image transformations?" [CoRR abs/1805.12177](https://arxiv.org/abs/1805.12177), 2018.
```
# The image we will resample (a grid).
grid_image = sitk.GridSource(outputPixelType=sitk.sitkUInt16, size=(512,512),
sigma=(0.1,0.1), gridSpacing=(20.0,20.0))
sitk.Show(grid_image, "original grid image")
# The spatial definition of the images we want to use in a deep learning framework (smaller than the original).
new_size = [100, 100]
reference_image = sitk.Image(new_size, grid_image.GetPixelIDValue())
reference_image.SetOrigin(grid_image.GetOrigin())
reference_image.SetDirection(grid_image.GetDirection())
reference_image.SetSpacing([sz*spc/nsz for nsz,sz,spc in zip(new_size, grid_image.GetSize(), grid_image.GetSpacing())])
# Resample without any smoothing.
sitk.Show(sitk.Resample(grid_image, reference_image) , "resampled without smoothing")
# Resample after Gaussian smoothing.
sitk.Show(sitk.Resample(sitk.SmoothingRecursiveGaussian(grid_image, 2.0), reference_image), "resampled with smoothing")
```
# Load data
Load the images. You can work through the notebook using either the original 3D images or 2D slices from the original volumes. To do the latter, just uncomment the line in the cell below.
```
data = [sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd")),
sitk.ReadImage(fdata("vm_head_mri.mha")),
sitk.ReadImage(fdata("head_mr_oriented.mha"))]
# Comment out the following line if you want to work in 3D. Note that in 3D some of the notebook visualizations are
# disabled.
data = [data[0][:,160,:], data[1][:,:,17], data[2][:,:,0]]
def disp_images(images, fig_size, wl_list=None):
if images[0].GetDimension()==2:
gui.multi_image_display2D(image_list=images, figure_size=fig_size, window_level_list=wl_list)
else:
gui.MultiImageDisplay(image_list=images, figure_size=fig_size, window_level_list=wl_list)
disp_images(data, fig_size=(6,2))
```
The original data often needs to be modified. In this example we would like to crop the images so that we only keep the informative regions. We can readily separate the foreground and background using an appropriate threshold, in our case we use Otsu's threshold selection method.
```
def threshold_based_crop(image):
"""
Use Otsu's threshold estimator to separate background and foreground. In medical imaging the background is
usually air. Then crop the image using the foreground's axis aligned bounding box.
Args:
image (SimpleITK image): An image where the anatomy and background intensities form a bi-modal distribution
(the assumption underlying Otsu's method.)
Return:
Cropped image based on foreground's axis aligned bounding box.
"""
# Set pixels that are in [min_intensity,otsu_threshold] to inside_value, values above otsu_threshold are
# set to outside_value. The anatomy has higher intensity values than the background, so it is outside.
inside_value = 0
outside_value = 255
label_shape_filter = sitk.LabelShapeStatisticsImageFilter()
label_shape_filter.Execute( sitk.OtsuThreshold(image, inside_value, outside_value) )
bounding_box = label_shape_filter.GetBoundingBox(outside_value)
# The bounding box's first "dim" entries are the starting index and last "dim" entries the size
return sitk.RegionOfInterest(image, bounding_box[int(len(bounding_box)/2):], bounding_box[0:int(len(bounding_box)/2)])
modified_data = [threshold_based_crop(img) for img in data]
disp_images(modified_data, fig_size=(6,2))
```
At this point we select the images we want to work with; skip the following cell if you want to work with the original data.
```
data = modified_data
```
# Augmentation using spatial transformations
We next illustrate the generation of images by specifying a list of transformation parameter values representing a sampling of the transformation's parameter space.
The code below is agnostic to the specific transformation and it is up to the user to specify a valid list of transformation parameters (correct number of parameters and correct order). To learn more about the spatial transformations supported by SimpleITK you can explore the [Transforms notebook](22_Transforms.ipynb).
In most cases we can easily specify a regular grid in parameter space by specifying ranges of values for each of the parameters. In some cases specifying parameter values may be less intuitive (e.g. the versor representation of rotation).
## Utility methods
Utilities for sampling a parameter space using a regular grid in a convenient manner (special care for 3D similarity).
```
def parameter_space_regular_grid_sampling(*transformation_parameters):
'''
Create a list representing a regular sampling of the parameter space.
Args:
    *transformation_parameters : two or more numpy ndarrays representing parameter values. The order
of the arrays should match the ordering of the SimpleITK transformation
parametrization (e.g. Similarity2DTransform: scaling, rotation, tx, ty)
Return:
List of lists representing the regular grid sampling.
Examples:
#parametrization for 2D translation transform (tx,ty): [[1.0,1.0], [1.5,1.0], [2.0,1.0]]
    >>> parameter_space_regular_grid_sampling(np.linspace(1.0,2.0,3), np.linspace(1.0,1.0,1))
'''
    return [[p.item() for p in parameter_values]
            for parameter_values in np.nditer(np.meshgrid(*transformation_parameters))]
def similarity3D_parameter_space_regular_sampling(thetaX, thetaY, thetaZ, tx, ty, tz, scale):
'''
Create a list representing a regular sampling of the 3D similarity transformation parameter space. As the
SimpleITK rotation parametrization uses the vector portion of a versor we don't have an
    intuitive way of specifying rotations. We therefore use the ZYX Euler angle parametrization and convert to
versor.
Args:
thetaX, thetaY, thetaZ: numpy ndarrays with the Euler angle values to use, in radians.
tx, ty, tz: numpy ndarrays with the translation values to use in mm.
scale: numpy array with the scale values to use.
Return:
List of lists representing the parameter space sampling (vx,vy,vz,tx,ty,tz,s).
'''
    return [list(eul2quat(parameter_values[0], parameter_values[1], parameter_values[2])) +
            [p.item() for p in parameter_values[3:]]
            for parameter_values in np.nditer(np.meshgrid(thetaX, thetaY, thetaZ, tx, ty, tz, scale))]
def similarity3D_parameter_space_random_sampling(thetaX, thetaY, thetaZ, tx, ty, tz, scale, n):
'''
Create a list representing a random (uniform) sampling of the 3D similarity transformation parameter space. As the
SimpleITK rotation parametrization uses the vector portion of a versor we don't have an
    intuitive way of specifying rotations. We therefore use the ZYX Euler angle parametrization and convert to
versor.
Args:
thetaX, thetaY, thetaZ: Ranges of Euler angle values to use, in radians.
tx, ty, tz: Ranges of translation values to use in mm.
scale: Range of scale values to use.
n: Number of samples.
Return:
List of lists representing the parameter space sampling (vx,vy,vz,tx,ty,tz,s).
'''
theta_x_vals = (thetaX[1]-thetaX[0])*np.random.random(n) + thetaX[0]
theta_y_vals = (thetaY[1]-thetaY[0])*np.random.random(n) + thetaY[0]
theta_z_vals = (thetaZ[1]-thetaZ[0])*np.random.random(n) + thetaZ[0]
tx_vals = (tx[1]-tx[0])*np.random.random(n) + tx[0]
ty_vals = (ty[1]-ty[0])*np.random.random(n) + ty[0]
tz_vals = (tz[1]-tz[0])*np.random.random(n) + tz[0]
s_vals = (scale[1]-scale[0])*np.random.random(n) + scale[0]
res = list(zip(theta_x_vals, theta_y_vals, theta_z_vals, tx_vals, ty_vals, tz_vals, s_vals))
return [list(eul2quat(*(p[0:3]))) + list(p[3:7]) for p in res]
def eul2quat(ax, ay, az, atol=1e-8):
'''
Translate between Euler angle (ZYX) order and quaternion representation of a rotation.
Args:
ax: X rotation angle in radians.
ay: Y rotation angle in radians.
az: Z rotation angle in radians.
atol: tolerance used for stable quaternion computation (qs==0 within this tolerance).
Return:
Numpy array with three entries representing the vectorial component of the quaternion.
'''
# Create rotation matrix using ZYX Euler angles and then compute quaternion using entries.
cx = np.cos(ax)
cy = np.cos(ay)
cz = np.cos(az)
sx = np.sin(ax)
sy = np.sin(ay)
sz = np.sin(az)
r=np.zeros((3,3))
r[0,0] = cz*cy
r[0,1] = cz*sy*sx - sz*cx
r[0,2] = cz*sy*cx+sz*sx
r[1,0] = sz*cy
r[1,1] = sz*sy*sx + cz*cx
r[1,2] = sz*sy*cx - cz*sx
r[2,0] = -sy
r[2,1] = cy*sx
r[2,2] = cy*cx
# Compute quaternion:
qs = 0.5*np.sqrt(r[0,0] + r[1,1] + r[2,2] + 1)
qv = np.zeros(3)
# If the scalar component of the quaternion is close to zero, we
# compute the vector part using a numerically stable approach
    if np.isclose(qs, 0.0, atol=atol):
i= np.argmax([r[0,0], r[1,1], r[2,2]])
j = (i+1)%3
k = (j+1)%3
w = np.sqrt(r[i,i] - r[j,j] - r[k,k] + 1)
qv[i] = 0.5*w
qv[j] = (r[i,j] + r[j,i])/(2*w)
qv[k] = (r[i,k] + r[k,i])/(2*w)
else:
denom = 4*qs
        qv[0] = (r[2,1] - r[1,2])/denom
        qv[1] = (r[0,2] - r[2,0])/denom
        qv[2] = (r[1,0] - r[0,1])/denom
return qv
```
## Create reference domain
All input images will be resampled onto the reference domain.
This domain is defined by two constraints: the number of pixels per dimension and the physical size we want the reference domain to occupy. The former is associated with the computational constraints of deep learning, where using a small number of pixels is desired. The latter is associated with the SimpleITK concept of an image: it occupies a region in physical space which should be large enough to encompass the object of interest.
```
dimension = data[0].GetDimension()
# Physical image size corresponds to the largest physical size in the training set, or any other arbitrary size.
reference_physical_size = np.zeros(dimension)
for img in data:
reference_physical_size[:] = [(sz-1)*spc if sz*spc>mx else mx for sz,spc,mx in zip(img.GetSize(), img.GetSpacing(), reference_physical_size)]
# Create the reference image with a zero origin, identity direction cosine matrix and dimension
reference_origin = np.zeros(dimension)
reference_direction = np.identity(dimension).flatten()
# Select arbitrary number of pixels per dimension, smallest size that yields desired results
# or the required size of a pretrained network (e.g. VGG-16 224x224), transfer learning. This will
# often result in non-isotropic pixel spacing.
reference_size = [128]*dimension
reference_spacing = [ phys_sz/(sz-1) for sz,phys_sz in zip(reference_size, reference_physical_size) ]
# Another possibility is that you want isotropic pixels, then you can specify the image size for one of
# the axes and the others are determined by this choice. Below we choose to set the x axis to 128 and the
# spacing set accordingly.
# Uncomment the following lines to use this strategy.
#reference_size_x = 128
#reference_spacing = [reference_physical_size[0]/(reference_size_x-1)]*dimension
#reference_size = [int(phys_sz/(spc) + 1) for phys_sz,spc in zip(reference_physical_size, reference_spacing)]
reference_image = sitk.Image(reference_size, data[0].GetPixelIDValue())
reference_image.SetOrigin(reference_origin)
reference_image.SetSpacing(reference_spacing)
reference_image.SetDirection(reference_direction)
# Always use the TransformContinuousIndexToPhysicalPoint to compute an indexed point's physical coordinates as
# this takes into account size, spacing and direction cosines. For the vast majority of images the direction
# cosines are the identity matrix, but when this isn't the case simply multiplying the central index by the
# spacing will not yield the correct coordinates resulting in a long debugging session.
reference_center = np.array(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))
```
## Data generation
Once we have a reference domain we can augment the data using any of the SimpleITK global domain transformations. In this notebook we use a similarity transformation (the generate_images function is agnostic to this specific choice).
Note that you also need to create the labels for your augmented images. If these are just classes then your processing is minimal. If you are dealing with segmentation you will also need to transform the segmentation labels so that they match the transformed image. The following function easily accommodates this: just provide the labeled image as input and use the sitk.sitkNearestNeighbor interpolator so that you do not introduce labels that were not in the original segmentation.
```
def augment_images_spatial(original_image, reference_image, T0, T_aug, transformation_parameters,
output_prefix, output_suffix,
interpolator = sitk.sitkLinear, default_intensity_value = 0.0):
'''
Generate the resampled images based on the given transformations.
Args:
original_image (SimpleITK image): The image which we will resample and transform.
reference_image (SimpleITK image): The image onto which we will resample.
T0 (SimpleITK transform): Transformation which maps points from the reference image coordinate system
to the original_image coordinate system.
T_aug (SimpleITK transform): Map points from the reference_image coordinate system back onto itself using the
given transformation_parameters. The reason we use this transformation as a parameter
is to allow the user to set its center of rotation to something other than zero.
    transformation_parameters (List of lists): parameter values which we pass to T_aug.SetParameters().
output_prefix (string): output file name prefix (file name: output_prefix_p1_p2_..pn_.output_suffix).
output_suffix (string): output file name suffix (file name: output_prefix_p1_p2_..pn_.output_suffix).
interpolator: One of the SimpleITK interpolators.
default_intensity_value: The value to return if a point is mapped outside the original_image domain.
'''
all_images = [] # Used only for display purposes in this notebook.
for current_parameters in transformation_parameters:
T_aug.SetParameters(current_parameters)
# Augmentation is done in the reference image space, so we first map the points from the reference image space
# back onto itself T_aug (e.g. rotate the reference image) and then we map to the original image space T0.
T_all = sitk.Transform(T0)
T_all.AddTransform(T_aug)
aug_image = sitk.Resample(original_image, reference_image, T_all,
interpolator, default_intensity_value)
sitk.WriteImage(aug_image, output_prefix + '_' +
'_'.join(str(param) for param in current_parameters) +'_.' + output_suffix)
all_images.append(aug_image) # Used only for display purposes in this notebook.
return all_images # Used only for display purposes in this notebook.
```
Before we can use the generate_images function we need to compute the transformation which will map points between the reference image and the current image as shown in the code cell below.
Note that it is very easy to generate large amounts of data using a regular grid sampling in the transformation parameter space (`similarity3D_parameter_space_regular_sampling`); calls to np.linspace with $m$ parameters, each having $n$ values, result in $n^m$ images, so don't forget that these images are also saved to disk. **If you run the code below with regular grid sampling for 3D data you will generate 6561 volumes ($3^7$ parameter combinations times 3 volumes).**
By default, the cell below uses random uniform sampling in the transformation parameter space (`similarity3D_parameter_space_random_sampling`). If you want to try regular sampling, uncomment the commented out code.
```
aug_transform = sitk.Similarity2DTransform() if dimension==2 else sitk.Similarity3DTransform()
all_images = []
for index,img in enumerate(data):
# Transform which maps from the reference_image to the current img with the translation mapping the image
# origins to each other.
transform = sitk.AffineTransform(dimension)
transform.SetMatrix(img.GetDirection())
transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)
# Modify the transformation to align the centers of the original and reference image instead of their origins.
centering_transform = sitk.TranslationTransform(dimension)
img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))
centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
centered_transform = sitk.Transform(transform)
centered_transform.AddTransform(centering_transform)
# Set the augmenting transform's center so that rotation is around the image center.
aug_transform.SetCenter(reference_center)
if dimension == 2:
# The parameters are scale (+-10%), rotation angle (+-10 degrees), x translation, y translation
transformation_parameters_list = parameter_space_regular_grid_sampling(np.linspace(0.9,1.1,3),
np.linspace(-np.pi/18.0,np.pi/18.0,3),
np.linspace(-10,10,3),
np.linspace(-10,10,3))
else:
transformation_parameters_list = similarity3D_parameter_space_random_sampling(thetaX=(-np.pi/18.0,np.pi/18.0),
thetaY=(-np.pi/18.0,np.pi/18.0),
thetaZ=(-np.pi/18.0,np.pi/18.0),
tx=(-10.0, 10.0),
ty=(-10.0, 10.0),
tz=(-10.0, 10.0),
scale=(0.9,1.1),
n=10)
# transformation_parameters_list = similarity3D_parameter_space_regular_sampling(np.linspace(-np.pi/18.0,np.pi/18.0,3),
# np.linspace(-np.pi/18.0,np.pi/18.0,3),
# np.linspace(-np.pi/18.0,np.pi/18.0,3),
# np.linspace(-10,10,3),
# np.linspace(-10,10,3),
# np.linspace(-10,10,3),
# np.linspace(0.9,1.1,3))
generated_images = augment_images_spatial(img, reference_image, centered_transform,
aug_transform, transformation_parameters_list,
os.path.join(OUTPUT_DIR, 'spatial_aug'+str(index)), 'mha')
if dimension==2: # in 2D we join all of the images into a 3D volume which we use for display.
all_images.append(sitk.JoinSeries(generated_images))
# If working in 2D, display the resulting set of images.
if dimension==2:
gui.MultiImageDisplay(image_list=all_images, shared_slider=True, figure_size=(6,2))
```
## What about flipping
Reflection using SimpleITK can be done in one of several ways:
1. Use an affine transform with the matrix component set to a reflection matrix. The columns of the matrix correspond to the $\mathbf{x}, \mathbf{y}$ and $\mathbf{z}$ axes. To reflect through a plane (3D) or axis (2D), keep the standard basis vectors that span it, $\mathbf{e}_i, \mathbf{e}_j$, and negate the remaining basis vector, $-\mathbf{e}_k$.
* Reflection about $xy$ plane: $[\mathbf{e}_1, \mathbf{e}_2, -\mathbf{e}_3]$.
* Reflection about $xz$ plane: $[\mathbf{e}_1, -\mathbf{e}_2, \mathbf{e}_3]$.
* Reflection about $yz$ plane: $[-\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3]$.
2. Use the native slicing operator (e.g. img[:,::-1,:]) or the FlipImageFilter after the image is resampled onto the reference image grid.
We prefer option 1 as it is computationally more efficient: it combines all transformations prior to resampling, while the other approach performs resampling onto the reference image grid followed by the reflection operation. An additional difference is that slicing or the FlipImageFilter also modifies the image origin, while the resampling approach keeps the spatial location of the reference image origin intact. This minor difference is of no concern in deep learning as the content of the images is the same, but in SimpleITK two images are considered equivalent if and only if their content and spatial extent are the same.
The following two cells correspond to the two approaches:
```
%%timeit -n1 -r1
# Approach 1, using an affine transformation
flipped_images = []
for index,img in enumerate(data):
# Compute the transformation which maps between the reference and current image (same as done above).
transform = sitk.AffineTransform(dimension)
transform.SetMatrix(img.GetDirection())
transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)
centering_transform = sitk.TranslationTransform(dimension)
img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))
centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
centered_transform = sitk.Transform(transform)
centered_transform.AddTransform(centering_transform)
flipped_transform = sitk.AffineTransform(dimension)
flipped_transform.SetCenter(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))
if dimension==2: # matrices in SimpleITK specified in row major order
flipped_transform.SetMatrix([1,0,0,-1])
else:
flipped_transform.SetMatrix([1,0,0,0,-1,0,0,0,1])
centered_transform.AddTransform(flipped_transform)
# Resample onto the reference image
flipped_images.append(sitk.Resample(img, reference_image, centered_transform, sitk.sitkLinear, 0.0))
# Uncomment the following line to display the images (we don't want to time this)
#disp_images(flipped_images, fig_size=(6,2))
%%timeit -n1 -r1
# Approach 2, flipping after resampling
flipped_images = []
for index,img in enumerate(data):
# Compute the transformation which maps between the reference and current image (same as done above).
transform = sitk.AffineTransform(dimension)
transform.SetMatrix(img.GetDirection())
transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)
centering_transform = sitk.TranslationTransform(dimension)
img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize())/2.0))
centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
centered_transform = sitk.Transform(transform)
centered_transform.AddTransform(centering_transform)
# Resample onto the reference image
resampled_img = sitk.Resample(img, reference_image, centered_transform, sitk.sitkLinear, 0.0)
# We flip on the y axis (x, z are done similarly)
if dimension==2:
flipped_images.append(resampled_img[:,::-1])
else:
flipped_images.append(resampled_img[:,::-1,:])
# Uncomment the following line to display the images (we don't want to time this)
#disp_images(flipped_images, fig_size=(6,2))
```
## Radial Distortion
Some 2D medical imaging modalities, such as endoscopic video and X-ray images acquired with C-arms using image intensifiers, exhibit radial distortion. The common model for such distortion was described by Brown ["Close-range camera calibration", Photogrammetric Engineering, 37(8):855–866, 1971]:
$$
\mathbf{p}_u = \mathbf{p}_d + (\mathbf{p}_d-\mathbf{p}_c)(k_1r^2 + k_2r^4 + k_3r^6 + \ldots)
$$
where:
* $\mathbf{p}_u$ is a point in the undistorted image
* $\mathbf{p}_d$ is a point in the distorted image
* $\mathbf{p}_c$ is the center of distortion
* $r = \|\mathbf{p}_d-\mathbf{p}_c\|$
* $k_i$ are coefficients of the radial distortion
Using SimpleITK operators we represent this transformation using a deformation field as follows:
```
def radial_distort(image, k1, k2, k3, distortion_center=None):
c = distortion_center
if not c: # The default distortion center coincides with the image center
c = np.array(image.TransformContinuousIndexToPhysicalPoint(np.array(image.GetSize())/2.0))
# Compute the vector image (p_d - p_c)
delta_image = sitk.PhysicalPointSource( sitk.sitkVectorFloat64, image.GetSize(), image.GetOrigin(), image.GetSpacing(), image.GetDirection())
delta_image_list = [sitk.VectorIndexSelectionCast(delta_image,i) - c[i] for i in range(len(c))]
# Compute the radial distortion expression
r2_image = sitk.NaryAdd([img**2 for img in delta_image_list])
r4_image = r2_image**2
r6_image = r2_image*r4_image
disp_image = k1*r2_image + k2*r4_image + k3*r6_image
displacement_image = sitk.Compose([disp_image*img for img in delta_image_list])
displacement_field_transform = sitk.DisplacementFieldTransform(displacement_image)
return sitk.Resample(image, image, displacement_field_transform)
k1 = 0.00001
k2 = 0.0000000000001
k3 = 0.0000000000001
original_image = data[0]
distorted_image = radial_distort(original_image, k1, k2, k3)
# Use a grid image to highlight the distortion.
grid_image = sitk.GridSource(outputPixelType=sitk.sitkUInt16, size=original_image.GetSize(),
sigma=[0.1]*dimension, gridSpacing=[20.0]*dimension)
grid_image.CopyInformation(original_image)
distorted_grid = radial_distort(grid_image, k1, k2, k3)
disp_images([original_image, distorted_image, distorted_grid], fig_size=(6,2))
```
### Transferring deformations - exercise for the interested reader
Using SimpleITK we can readily transfer deformations from a spatio-temporal data set to another spatial data set to simulate temporal behavior. Case in point, using a 4D (3D+time) CT of the thorax we can estimate the respiratory motion using non-rigid registration and [Free Form Deformation](65_Registration_FFD.ipynb) or [displacement field](66_Registration_Demons.ipynb) transformations. We can then register a new spatial data set to the original spatial CT (non-rigidly) followed by application of the temporal deformations.
Note that unlike the arbitrary spatial transformations we used for data-augmentation above this approach is more computationally expensive as it involves multiple non-rigid registrations. Also note that as the goal is to use the estimated transformations to create plausible deformations you may be able to relax the required registration accuracy.
# Augmentation using intensity modifications
SimpleITK has many filters that are potentially relevant for data augmentation via modification of intensities. For example:
* Image smoothing; always read the documentation carefully, as similar filters use different parametrizations, $\sigma$ vs. variance ($\sigma^2$):
* [Discrete Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1DiscreteGaussianImageFilter.html)
* [Recursive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1RecursiveGaussianImageFilter.html)
* [Smoothing Recursive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SmoothingRecursiveGaussianImageFilter.html)
* Edge preserving image smoothing:
* [Bilateral image filtering](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1BilateralImageFilter.html), edge preserving smoothing.
* [Median filtering](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1MedianImageFilter.html)
* Adding noise to your images:
* [Additive Gaussian](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1AdditiveGaussianNoiseImageFilter.html)
* [Salt and Pepper / Impulse](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SaltAndPepperNoiseImageFilter.html)
* [Shot/Poisson](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1ShotNoiseImageFilter.html)
* [Speckle/multiplicative](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1SpeckleNoiseImageFilter.html)
* [Adaptive Histogram Equalization](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1AdaptiveHistogramEqualizationImageFilter.html)
```
def augment_images_intensity(image_list, output_prefix, output_suffix):
'''
Generate intensity modified images from the originals.
Args:
    image_list (iterable containing SimpleITK images): The images whose intensities we modify.
output_prefix (string): output file name prefix (file name: output_prefixi_FilterName.output_suffix).
output_suffix (string): output file name suffix (file name: output_prefixi_FilterName.output_suffix).
'''
# Create a list of intensity modifying filters, which we apply to the given images
filter_list = []
# Smoothing filters
filter_list.append(sitk.SmoothingRecursiveGaussianImageFilter())
filter_list[-1].SetSigma(2.0)
filter_list.append(sitk.DiscreteGaussianImageFilter())
filter_list[-1].SetVariance(4.0)
filter_list.append(sitk.BilateralImageFilter())
filter_list[-1].SetDomainSigma(4.0)
filter_list[-1].SetRangeSigma(8.0)
filter_list.append(sitk.MedianImageFilter())
filter_list[-1].SetRadius(8)
# Noise filters using default settings
# Filter control via SetMean, SetStandardDeviation.
filter_list.append(sitk.AdditiveGaussianNoiseImageFilter())
# Filter control via SetProbability
filter_list.append(sitk.SaltAndPepperNoiseImageFilter())
# Filter control via SetScale
filter_list.append(sitk.ShotNoiseImageFilter())
# Filter control via SetStandardDeviation
filter_list.append(sitk.SpeckleNoiseImageFilter())
filter_list.append(sitk.AdaptiveHistogramEqualizationImageFilter())
filter_list[-1].SetAlpha(1.0)
filter_list[-1].SetBeta(0.0)
filter_list.append(sitk.AdaptiveHistogramEqualizationImageFilter())
filter_list[-1].SetAlpha(0.0)
filter_list[-1].SetBeta(1.0)
aug_image_lists = [] # Used only for display purposes in this notebook.
for i,img in enumerate(image_list):
aug_image_lists.append([f.Execute(img) for f in filter_list])
for aug_image,f in zip(aug_image_lists[-1], filter_list):
sitk.WriteImage(aug_image, output_prefix + str(i) + '_' +
f.GetName() + '.' + output_suffix)
return aug_image_lists
```
Modify the intensities of the original images using the set of SimpleITK filters described above. If we are working with 2D images the results will be displayed inline.
```
intensity_augmented_images = augment_images_intensity(data, os.path.join(OUTPUT_DIR, 'intensity_aug'), 'mha')
# in 2D we join all of the images into a 3D volume which we use for display.
if dimension==2:
    def list2_float_volume(image_list):
return sitk.JoinSeries([sitk.Cast(img, sitk.sitkFloat32) for img in image_list])
all_images = [list2_float_volume(imgs) for imgs in intensity_augmented_images]
# Compute reasonable window-level values for display (just use the range of intensity values
# from the original data).
original_window_level = []
statistics_image_filter = sitk.StatisticsImageFilter()
for img in data:
statistics_image_filter.Execute(img)
max_intensity = statistics_image_filter.GetMaximum()
min_intensity = statistics_image_filter.GetMinimum()
original_window_level.append((max_intensity-min_intensity, (max_intensity+min_intensity)/2.0))
gui.MultiImageDisplay(image_list=all_images, shared_slider=True, figure_size=(6,2), window_level_list=original_window_level)
```
SimpleITK has a sigmoid filter that allows us to map intensities via this nonlinear function to our desired range. Unlike the standard sigmoid settings that are applied when used as an activation function, the sigmoid filter is not necessarily centered on zero and the minimum and maximum output values are not necessarily 0 and 1.
The filter itself is defined as:
$$f(I) = (max_{output} - min_{output}) \frac{1}{1+ e^{-\frac{I-\beta}{\alpha}}} + min_{output}$$
Where $\alpha$ is the curve steepness (the larger the $\alpha$ the steeper the slope, the smaller the $\alpha$ the closer we get to a linear mapping in the output range) and $\beta$ is the intensity value for the sigmoid midpoint.
```
def sigmoid_mapping(image, curve_steepness, output_min=0, output_max=1.0, intensity_midpoint=None):
'''
Map the image using a sigmoid function.
Args:
image (SimpleITK image): scalar input image.
curve_steepness: Control the sigmoid steepness, the larger the number the steeper the curve.
output_min: Minimum value for output image, default 0.0 .
output_max: Maximum value for output image, default 1.0 .
intensity_midpoint: intensity value defining the sigmoid midpoint (x coordinate), default is the
median image intensity.
Return:
SimpleITK image with float pixel type.
'''
if intensity_midpoint is None:
intensity_midpoint = np.median(sitk.GetArrayViewFromImage(image))
sig_filter = sitk.SigmoidImageFilter()
sig_filter.SetOutputMinimum(output_min)
sig_filter.SetOutputMaximum(output_max)
sig_filter.SetAlpha(1.0/curve_steepness)
sig_filter.SetBeta(float(intensity_midpoint))
return sig_filter.Execute(sitk.Cast(image, sitk.sitkFloat64))
# Change the order of magnitude of curve steepness [1.0,0.1,0.01] to see the effect of this parameter.
# Also change it from positive to negative.
disp_images([sigmoid_mapping(img, curve_steepness=0.01) for img in data], fig_size=(6,2))
```
While the sigmoid mapping visually appears to work as expected, it is always good to "trust but verify". In the next cell we create a 1D image and plot the resulting sigmoid mapped values, ensuring that what we expect is indeed what is happening. This also allows us to see the effects in a more controlled manner.
To see the effects of various settings combinations try:
* setting the `curve_steepness` to [1.0, 0.1, 0.01, -0.01, -0.1, -1.0]
* setting the `intensity_midpoint` to [-50, 0, 50]
```
import matplotlib.pyplot as plt
#Create a 1D image with values in [-100,100].
arr_x = np.array([list(range(-100,101))])
image1D = sitk.GetImageFromArray(arr_x)
plt.figure()
plt.plot(arr_x.ravel(),
sitk.GetArrayViewFromImage(sigmoid_mapping(image1D, curve_steepness=1.0, intensity_midpoint = 0)).ravel());
```
Histogram equalization of images (increasing their entropy) prior to using deep learning is a common preprocessing step. Unfortunately, ITK, and consequently SimpleITK, does not have a histogram equalization filter.
The following cell implements this functionality for all integer scalar SimpleITK images (2D, 3D).
```
def histogram_equalization(image,
min_target_range = None,
max_target_range = None,
use_target_range = True):
'''
Histogram equalization of scalar images whose single channel has an integer
type. The goal is to map the original intensities so that resulting
histogram is more uniform (increasing the image's entropy).
Args:
image (SimpleITK.Image): A SimpleITK scalar image whose pixel type
is an integer (sitkUInt8,sitkInt8...
sitkUInt64, sitkInt64).
min_target_range (scalar): Minimal value for the target range. If None
then use the minimal value for the scalar pixel
type (e.g. 0 for sitkUInt8).
max_target_range (scalar): Maximal value for the target range. If None
then use the maximal value for the scalar pixel
type (e.g. 255 for sitkUInt8).
use_target_range (bool): If true, the resulting image has values in the
target range, otherwise the resulting values
are in [0,1].
Returns:
SimpleITK.Image: A scalar image with the same pixel type as the input image
or a sitkFloat64 (depending on the use_target_range value).
'''
arr = sitk.GetArrayViewFromImage(image)
i_info = np.iinfo(arr.dtype)
if min_target_range is None:
min_target_range = i_info.min
else:
min_target_range = np.max([i_info.min, min_target_range])
if max_target_range is None:
max_target_range = i_info.max
else:
max_target_range = np.min([i_info.max, max_target_range])
min_val = arr.min()
number_of_bins = arr.max() - min_val + 1
# using ravel, not flatten, as it does not involve memory copy
hist = np.bincount((arr-min_val).ravel(), minlength=number_of_bins)
cdf = np.cumsum(hist)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
res = cdf[arr-min_val]
if use_target_range:
res = (min_target_range + res*(max_target_range-min_target_range)).astype(arr.dtype)
    # Copy the meta-information (origin, spacing, direction) from the input image,
    # as GetImageFromArray alone returns an image with default meta-information.
    res_image = sitk.GetImageFromArray(res)
    res_image.CopyInformation(image)
    return res_image
# Cast the images to int16 because data[0] is float32 and the histogram equalization only works
# on integer types.
disp_images([histogram_equalization(sitk.Cast(img,sitk.sitkInt16)) for img in data], fig_size=(6,2))
```
Finally, you can easily create intensity variations that are specific to your domain, such as the spatially varying multiplicative and additive transformation shown below.
```
def mult_and_add_intensity_fields(original_image):
'''
Modify the intensities using multiplicative and additive Gaussian bias fields.
'''
# Gaussian image with same meta-information as original (size, spacing, direction cosine)
# Sigma is half the image's physical size and mean is the center of the image.
g_mult = sitk.GaussianSource(original_image.GetPixelIDValue(),
original_image.GetSize(),
[(sz-1)*spc/2.0 for sz, spc in zip(original_image.GetSize(), original_image.GetSpacing())],
original_image.TransformContinuousIndexToPhysicalPoint(np.array(original_image.GetSize())/2.0),
255,
original_image.GetOrigin(),
original_image.GetSpacing(),
original_image.GetDirection())
# Gaussian image with same meta-information as original (size, spacing, direction cosine)
# Sigma is 1/8 the image's physical size and mean is at 1/16 of the size
g_add = sitk.GaussianSource(original_image.GetPixelIDValue(),
original_image.GetSize(),
[(sz-1)*spc/8.0 for sz, spc in zip(original_image.GetSize(), original_image.GetSpacing())],
original_image.TransformContinuousIndexToPhysicalPoint(np.array(original_image.GetSize())/16.0),
255,
original_image.GetOrigin(),
original_image.GetSpacing(),
original_image.GetDirection())
return g_mult*original_image+g_add
disp_images([mult_and_add_intensity_fields(img) for img in data], fig_size=(6,2))
```
# Hyperparameter tuning
**Learning Objectives**
1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Vertex AI
## Introduction
Let's see if we can improve on our model's performance by tuning our hyperparameters.
Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to find the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
**1. Manual**
Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters, which they use as a starting point; they then observe the results and use that information to choose a new set of hyperparameters that tries to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
**2. Grid Search**
On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
**3. Random Search**
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
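To make the difference concrete, here is a toy sketch (not part of the lab code) comparing grid search and random search; the `objective` function and its optimum are invented purely for illustration:

```python
import itertools
import random

def objective(lr, batch_size):
    # Pretend "validation loss": lowest near lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 + ((batch_size - 64) / 64.0) ** 2

def grid_search(lrs, batch_sizes):
    # Try every combination in the grid and keep the best.
    return min(itertools.product(lrs, batch_sizes), key=lambda p: objective(*p))

def random_search(n_trials, seed=0):
    # Sample each hyperparameter uniformly at random from its range.
    rng = random.Random(seed)
    trials = [(rng.uniform(0.0001, 0.1), rng.randint(8, 256))
              for _ in range(n_trials)]
    return min(trials, key=lambda p: objective(*p))

print(grid_search([0.001, 0.01, 0.1], [32, 64, 128]))  # (0.01, 64)
print(random_search(50))
```

Note that grid search only finds the optimum here because the true best values happen to lie on the grid; random search can land between grid points, which is why it often needs fewer trials in high dimensions.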
**4. Bayesian Optimization**
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization).
Pros
- Picks values intelligently based on results from past trials
- Less expensive because it requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
**Vertex AI HyperTune**
Vertex AI HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview#search_algorithms) Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
```
PROJECT = "<YOUR PROJECT>"
BUCKET = "<YOUR BUCKET>"
REGION = "<YOUR REGION>"
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
```
## Make code compatible with Vertex AI Training Service
In order to make our code compatible with Vertex AI Training Service we need to make the following changes:
1. Upload data to Google Cloud Storage
2. Move code into a trainer Python package
3. Submit training job with `gcloud` to train on Vertex AI
### Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
```
!gsutil ls gs://$BUCKET/taxifare/data
```
## Move code into python package
In the [previous lab](./1_training_at_scale.ipynb), we moved our code into a python package for training on Vertex AI. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory:
- `__init__.py`
- `model.py`
- `task.py`
```
!ls -la taxifare/trainer
```
To use hyperparameter tuning in your training job you must perform the following steps:
1. Specify the hyperparameter tuning configuration for your training job by including `parameters` in the `StudySpec` of your Hyperparameter Tuning Job.
2. Include the following code in your training application:
- Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial (we already exposed these parameters as command-line arguments in the earlier lab).
- Report your hyperparameter metrics during training. Note that while you could just report the metrics at the end of training, it is better to set up a callback to take advantage of early stopping.
- Read in the environment variable `$AIP_MODEL_DIR`, set by Vertex AI and containing the trial number, as our `output-dir`. As the training code will be submitted several times in a parallelized fashion, it is safer to use this variable than trying to assemble a unique id within the trainer code.
### Modify model.py
```
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']  # datetime.weekday() returns Monday as 0
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
    # Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
# TODO 1
# Instantiate the HyperTune reporting object
hpt = hypertune.HyperTune()
# Reporting callback
# TODO 1
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_rmse',
metric_value=logs['val_rmse'],
global_step=epoch)
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
nnsize = [int(s) for s in hparams['nnsize'].split()]
eval_data_path = hparams['eval_data_path']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path,
save_weights_only=True,
verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb, HPTCallback()]
)
# Exporting the model with default serving function.
tf.saved_model.save(model, model_export_path)
return history
```
### Modify task.py
```
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
default="32 8"
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
default = os.getenv("AIP_MODEL_DIR")
)
parser.add_argument(
"--train_data_path",
        help = "GCS location pattern of train files",
required = True
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile taxifare/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='taxifare_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='Taxifare model training application.'
)
%%bash
cd taxifare
python ./setup.py sdist --formats=gztar
cd ..
%%bash
gsutil cp taxifare/dist/taxifare_trainer-0.1.tar.gz gs://${BUCKET}/taxifare/
```
### Create HyperparameterTuningJob
Create a StudySpec object to hold the hyperparameter tuning configuration for your training job, and add the StudySpec to your hyperparameter tuning job.
In your StudySpec `metrics`, set the `metricId` to the name of the metric your training code reports (here, `val_rmse`).
```
%%bash
# Output directory and job name
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
BASE_OUTPUT_DIR=gs://${BUCKET}/taxifare_$TIMESTAMP
JOB_NAME=taxifare_$TIMESTAMP
echo ${BASE_OUTPUT_DIR} ${REGION} ${JOB_NAME}
# Vertex AI machines to use for training
PYTHON_PACKAGE_URI="gs://${BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
echo > ./config.yaml "displayName: $JOB_NAME
studySpec:
metrics:
- metricId: val_rmse
goal: MINIMIZE
parameters:
- parameterId: lr
doubleValueSpec:
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterId: nbuckets
integerValueSpec:
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterId: batch_size
discreteValueSpec:
values:
- 15
- 30
- 50
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
baseOutputDirectory:
outputUriPrefix: $BASE_OUTPUT_DIR
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
pythonPackageSpec:
args:
- --train_data_path=$TRAIN_DATA_PATH
- --eval_data_path=$EVAL_DATA_PATH
- --batch_size=$BATCH_SIZE
- --num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON
- --num_evals=$NUM_EVALS
- --nbuckets=$NBUCKETS
- --lr=$LR
- --nnsize=$NNSIZE
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris:
- $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
replicaCount: $REPLICA_COUNT"
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
JOB_NAME=taxifare_$TIMESTAMP
echo $REGION
echo $JOB_NAME
gcloud beta ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=config.yaml \
--max-trial-count=10 \
--parallel-trial-count=2
```
You could also have used the Vertex AI Python SDK to achieve the same.
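As a rough, untested sketch, an equivalent job using the `google.cloud.aiplatform` SDK might look like the following. The worker pool spec mirrors the YAML config above; the project, bucket, and region values are placeholders you would fill in:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="your-project", location="us-central1")

# Trial job: same Python package and entry point as in the YAML config.
custom_job = aiplatform.CustomJob(
    display_name="taxifare_trial",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "python_package_spec": {
            "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest",
            "package_uris": ["gs://your-bucket/taxifare/taxifare_trainer-0.1.tar.gz"],
            "python_module": "trainer.task",
            "args": [
                "--train_data_path=gs://your-bucket/taxifare/data/taxi-train*",
                "--eval_data_path=gs://your-bucket/taxifare/data/taxi-valid*",
            ],
        },
    }],
)

# Hyperparameter tuning job: same metric and search space as the StudySpec.
hp_job = aiplatform.HyperparameterTuningJob(
    display_name="taxifare_hp_tuning",
    custom_job=custom_job,
    metric_spec={"val_rmse": "minimize"},
    parameter_spec={
        "lr": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"),
        "nbuckets": hpt.IntegerParameterSpec(min=10, max=25, scale="linear"),
        "batch_size": hpt.DiscreteParameterSpec(values=[15, 30, 50], scale="linear"),
    },
    max_trial_count=10,
    parallel_trial_count=2,
)
hp_job.run()
```

Running this requires Google Cloud credentials and the `google-cloud-aiplatform` package, so it is shown here for reference only.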
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/fix_W1D3_intro/tutorials/W1D3_ModelFitting/W1D3_Intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Intro
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
## Overview
Today we will talk about how to confront models with experimental data. Does my model capture the behavior of my participant, or its neural activity? Does the data support my model against alternatives? Which component of the model is needed? Do parameters in the model vary systematically between two subject populations? We will cover the basic concepts and tools to address these questions, starting with a general overview in the intro. You will learn how to estimate the parameters of simple regression models from the data in Tutorials 1 & 2, and then how to estimate the uncertainty about these values in Tutorial 3. Then you will learn how to select, from models of different complexity, the one that best accounts for your data (Tutorials 4-6). The outro illustrates some of these techniques in real research examples.
Fitting and comparing models to data is really the bread and butter of data analysis in neuroscience. During Model Types Day (W1D1) you learned about a whole zoo of different types of models that we're interested in in neuroscience. Here you will learn generic concepts and techniques that apply to fitting and comparing any type of model, which is arguably pretty useful! You will apply these tools again when dealing with GLMs (W1D4), latent models (W1D5), deep networks (W2D1), dynamical models (W2D2), decision models, reinforcement learning models… it's everywhere! On top of this, we will cover linear regression models (the typical regression model when the dependent variable is continuous, e.g. BOLD activity) and use them throughout the day to illustrate the concepts and methods we learn. On GLM day, you will see how to generalize regression models when the dependent variable is binary (e.g. choices) or an integer (e.g. spike counts).
Almost all statistical and data analysis methods rely, either explicitly or implicitly, on fitting some model to the data. The concepts and tools you will learn today are crucial for testing your hypotheses about how behavior or neural activity is formed. Typically, you formulate a computational (stochastic) model that embodies your hypothesis, along with one or more control models. You then fit each of your models to your experimental data, say the pattern of choices of one experimental subject, or the spiking activity from one recorded neuron. Simulating your fitted models lets you validate that your model indeed captures the effects of interest in your data. Then you use model comparison techniques to tell which of your main or control models provides a better description of the data. You can also assess whether some parameter in your model changes between experimental conditions or subject populations.
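As a tiny preview of the model fitting covered in Tutorials 1 & 2, here is a minimal, self-contained example (with made-up simulated data, not course material) of fitting a linear regression by ordinary least squares with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from y = 2x + 1 plus Gaussian noise.
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)

# Fit y = a*x + b by minimizing squared error (ordinary least squares).
X = np.column_stack([x, np.ones_like(x)])  # design matrix: [x, intercept]
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"slope = {a:.2f}, intercept = {b:.2f}")  # close to 2 and 1
```

The fitted slope and intercept recover the true generating parameters up to noise, which is exactly the kind of parameter estimation, and the uncertainty about it, that today's tutorials explore in depth.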
## Prerequisite knowledge
In the content and tutorials today, you will be using vectors and matrices (W0D3 T1/T2), probability distributions (W0D5 T1), and likelihoods (W0D5 T2). Please review this material if necessary!
## Video
```
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1BX4y1w7oc", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"9JfXKmVB6qc", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Slides
```
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/sqcvz/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
```
#IMPORT ALL LIBRARIES
#IMPORT THE PANDAS LIBRARY
import pandas as pd
#IMPORT THE POSTGRESQL LIBRARIES
from sqlalchemy import create_engine
import psycopg2
#IMPORT THE CHART LIBRARIES
from matplotlib import pyplot as plt
from matplotlib import style
#IMPORT THE BASE PATH LIBRARIES
import os
import io
#IMPORT THE PDF LIBRARY
from fpdf import FPDF
#IMPORT THE LIBRARY FOR CONVERTING CHARTS TO BASE64
import base64
#IMPORT THE EXCEL LIBRARY
import xlsxwriter
#FUNCTION TO UPLOAD DATA FROM A CSV FILE INTO POSTGRESQL
def uploadToPSQL(columns, table, filePath, engine):
    #READ THE CSV FILE
df = pd.read_csv(
os.path.abspath(filePath),
names=columns,
keep_default_na=False
)
    #FILTER OUT ANY EMPTY FIELDS HERE
    df = df.fillna('')
    #DROP THE COLUMNS THAT ARE NOT USED
del df['kategori']
del df['jenis']
del df['pengiriman']
del df['satuan']
    #MOVE THE DATA FROM THE CSV INTO POSTGRESQL
df.to_sql(
table,
engine,
if_exists='replace'
)
    #IF THE DATA WAS UPLOADED SUCCESSFULLY RETURN TRUE, OTHERWISE RETURN FALSE
if len(df) == 0:
return False
else:
return True
#FUNCTION TO BUILD THE CHARTS; THE DATA IS FETCHED FROM THE DATABASE, ORDERED BY DATE AND LIMITED
#THIS FUNCTION ALSO CALLS MAKEEXCEL AND MAKEPDF
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    #TEST THE DATABASE CONNECTION
try:
        #CONNECT TO THE DATABASE
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
cursor = connection.cursor()
        #FETCH THE DATA FROM THE TABLE DEFINED BELOW, ORDERED BY DATE
        #A LIMIT CAN BE ADDED SO THE QUERY DOES NOT FETCH TOO MUCH DATA AND BECOME SLOW
postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
cursor.execute(postgreSQL_select_Query)
mobile_records = cursor.fetchall()
uid = []
lengthx = []
lengthy = []
        #LOOP OVER THE ROWS THAT WERE FETCHED
        #AND APPEND THE VALUES TO THE VARIABLES ABOVE
for row in mobile_records:
uid.append(row[0])
lengthx.append(row[1])
if row[2] == "":
lengthy.append(float(0))
else:
lengthy.append(float(row[2]))
        #BUILD THE CHARTS
#bar
style.use('ggplot')
fig, ax = plt.subplots()
        #PLOT THE IDS FROM THE DATABASE AGAINST THE TOTALS
ax.bar(uid, lengthy, align='center')
        #CHART TITLE
ax.set_title(judul)
ax.set_ylabel('Total')
ax.set_xlabel('Tanggal')
ax.set_xticks(uid)
        #USE THE DATES FETCHED FROM THE DATABASE AS TICK LABELS
ax.set_xticklabels((lengthx))
b = io.BytesIO()
        #SAVE THE CHART AS A PNG
plt.savefig(b, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
plt.show()
#line
        #INSERT THE DATA FROM THE DATABASE
plt.plot(lengthx, lengthy)
plt.xlabel('Tanggal')
plt.ylabel('Total')
        #CHART TITLE
plt.title(judul)
plt.grid(True)
l = io.BytesIO()
        #SAVE THE CHART AS A PNG
plt.savefig(l, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
plt.show()
#pie
        #CHART TITLE
plt.title(judul)
        #INSERT THE DATA FROM THE DATABASE
plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
shadow=True, startangle=180)
plt.axis('equal')
p = io.BytesIO()
        #SAVE THE CHART AS A PNG
plt.savefig(p, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
plt.show()
        #READ THE CSV AGAIN TO GET THE TABLE HEADER FOR THE EXCEL AND PDF FILES
header = pd.read_csv(
os.path.abspath(filePath),
names=columns,
keep_default_na=False
)
        #FILL EMPTY FIELDS AND DROP THE COLUMNS THAT ARE NOT USED
        header = header.fillna('')
del header['tanggal']
del header['total']
        #CALL THE EXCEL FUNCTION
makeExcel(mobile_records, header, name, limit, basePath)
        #CALL THE PDF FUNCTION
makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    #IF THE DATABASE CONNECTION FAILS, PRINT THE ERROR HERE
except (Exception, psycopg2.Error) as error :
print (error)
    #CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#THE MAKEEXCEL FUNCTION TURNS THE DATA FROM THE DATABASE INTO AN EXCEL TABLE (FORMAT F2)
#THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, dataheader, name, limit, basePath):
    #CREATE THE EXCEL FILE
workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorIndustri/excel/'+name+'.xlsx')
    #ADD A WORKSHEET TO THE EXCEL FILE
worksheet = workbook.add_worksheet('sheet1')
    #FORMAT SETTINGS FOR ADDING BORDERS AND MAKING THE FONT BOLD
row1 = workbook.add_format({'border': 2, 'bold': 1})
row2 = workbook.add_format({'border': 2})
    #TURN THE DATA INTO LISTS
data=list(datarow)
isihead=list(dataheader.values)
header = []
body = []
    #LOOP OVER THE DATA AND STORE IT IN THE VARIABLES ABOVE
for rowhead in dataheader:
header.append(str(rowhead))
for rowhead2 in datarow:
header.append(str(rowhead2[1]))
for rowbody in isihead[1]:
body.append(str(rowbody))
for rowbody2 in data:
body.append(str(rowbody2[2]))
    #WRITE THE DATA FROM THE VARIABLES ABOVE INTO THE EXCEL COLUMNS AND ROWS
for col_num, data in enumerate(header):
worksheet.write(0, col_num, data, row1)
for col_num, data in enumerate(body):
worksheet.write(1, col_num, data, row2)
    #CLOSE THE EXCEL FILE
workbook.close()
#FUNCTION TO TURN THE DATA FROM THE DATABASE INTO A PDF (TABLE FORMAT F2)
#THE PLUGIN USED IS FPDF
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    #SET THE PAPER SIZE; HERE A4 IN LANDSCAPE ORIENTATION
pdf = FPDF('L', 'mm', [210,297])
    #ADD A PAGE TO THE PDF
pdf.add_page()
    #SETTINGS FOR THE PADDING AND THE FONT SIZE
pdf.set_font('helvetica', 'B', 20.0)
pdf.set_xy(145.0, 15.0)
    #WRITE THE TITLE INTO THE PDF
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    #SETTINGS FOR THE FONT SIZE AND THE PADDING
pdf.set_font('arial', '', 14.0)
pdf.set_xy(145.0, 25.0)
    #WRITE THE SUBTITLE INTO THE PDF
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    #DRAW A LINE BELOW THE SUBTITLE
pdf.line(10.0, 30.0, 287.0, 30.0)
pdf.set_font('times', '', 10.0)
pdf.set_xy(17.0, 37.0)
    #SETTINGS FOR THE FONT SIZE AND THE PADDING
pdf.set_font('Times','',10.0)
    #GET THE PDF HEADER DATA THAT WAS DEFINED ABOVE
datahead=list(dataheader.values)
pdf.set_font('Times','B',12.0)
pdf.ln(0.5)
th1 = pdf.font_size
    #BUILD THE PDF TABLE AND DISPLAY THE DATA THAT WAS PASSED IN
pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
pdf.ln(2*th1)
    #PADDING SETTINGS
pdf.set_xy(17.0, 75.0)
    #SETTINGS FOR THE FONT SIZE AND THE PADDING
pdf.set_font('Times','B',11.0)
data=list(datarow)
epw = pdf.w - 2*pdf.l_margin
col_width = epw/(lengthPDF+1)
    #PADDING SETTINGS
pdf.ln(0.5)
th = pdf.font_size
    #WRITE THE HEADER DATA PASSED IN ABOVE INTO THE PDF
pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
for row in data:
pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
pdf.ln(2*th)
    #WRITE THE BODY DATA PASSED IN ABOVE INTO THE PDF
pdf.set_font('Times','B',10.0)
pdf.set_font('Arial','',9)
pdf.cell(50, 2*th, negara, border=1, align='C')
for row in data:
pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
pdf.ln(2*th)
    #DECODE THE CHART DATA TO PNG AND SAVE IT IN THE DIRECTORY BELOW
#BAR CHART
bardata = base64.b64decode(bar)
barname = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-bar.png'
with open(barname, 'wb') as f:
f.write(bardata)
#LINE CHART
linedata = base64.b64decode(line)
linename = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-line.png'
with open(linename, 'wb') as f:
f.write(linedata)
#PIE CHART
piedata = base64.b64decode(pie)
piename = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-pie.png'
with open(piename, 'wb') as f:
f.write(piedata)
    #SETTINGS FOR THE FONT SIZE AND THE PADDING
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
widthcol = col/3
    #LOAD THE CHART IMAGES FROM THE DIRECTORY ABOVE
pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)
pdf.ln(2*th)
    #WRITE THE PDF FILE
pdf.output(basePath+'jupyter/BLOOMBERG/SektorIndustri/pdf/'+name+'.pdf', 'F')
#THIS IS WHERE THE VARIABLES ARE DEFINED BEFORE THEY ARE PASSED TO THE FUNCTIONS
#FIRST CALL UPLOADTOPSQL; IF IT SUCCEEDS, CALL MAKECHART
#MAKECHART IN TURN CALLS MAKEEXCEL AND MAKEPDF
#DEFINE THE COLUMNS BASED ON THE CSV FIELDS
columns = [
"kategori",
"jenis",
"tanggal",
"total",
"pengiriman",
"satuan",
]
# Output file name
name = "SektorIndustri4_2"
# Database connection settings
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_SektorIndustri"
table = name.lower()
# Title and subtitle shown on the PDF and Excel output
judul = "Data Sektor Industri"
subjudul = "Badan Perencanaan Pembangunan Nasional"
# Row limit for the database SELECT
limitdata = 8
# Country name shown in the Excel and PDF output
negara = "Indonesia"
# Base directory path
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# CSV file path
filePath = basePath + 'data mentah/BLOOMBERG/SektorIndustri/' + name + '.csv'
# Connect to the database
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
# Upload the CSV to PostgreSQL
checkUpload = uploadToPSQL(columns, table, filePath, engine)
# On success, build the charts; otherwise report the error
if checkUpload:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error when uploading CSV")
```
```
%matplotlib inline
```
Saving and Loading Models
=========================
**Author:** `Matthew Inkawhich <https://github.com/MatthewInkawhich>`_
This document provides solutions to a variety of use cases regarding the
saving and loading of PyTorch models. Feel free to read the whole
document, or just skip to the code you need for a desired use case.
When it comes to saving and loading models, there are three core
functions to be familiar with:
1) `torch.save <https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save>`__:
Saves a serialized object to disk. This function uses Python’s
`pickle <https://docs.python.org/3/library/pickle.html>`__ utility
for serialization. Models, tensors, and dictionaries of all kinds of
objects can be saved using this function.
2) `torch.load <https://pytorch.org/docs/stable/torch.html?highlight=torch%20load#torch.load>`__:
Uses `pickle <https://docs.python.org/3/library/pickle.html>`__\ ’s
unpickling facilities to deserialize pickled object files to memory.
This function also lets you specify the device to load the data onto
(see `Saving & Loading Model Across
Devices <#saving-loading-model-across-devices>`__).
3) `torch.nn.Module.load_state_dict <https://pytorch.org/docs/stable/nn.html?highlight=load_state_dict#torch.nn.Module.load_state_dict>`__:
Loads a model’s parameter dictionary using a deserialized
*state_dict*. For more information on *state_dict*, see `What is a
state_dict? <#what-is-a-state-dict>`__.
**Contents:**
- `What is a state_dict? <#what-is-a-state-dict>`__
- `Saving & Loading Model for
Inference <#saving-loading-model-for-inference>`__
- `Saving & Loading a General
Checkpoint <#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training>`__
- `Saving Multiple Models in One
File <#saving-multiple-models-in-one-file>`__
- `Warmstarting Model Using Parameters from a Different
Model <#warmstarting-model-using-parameters-from-a-different-model>`__
- `Saving & Loading Model Across
Devices <#saving-loading-model-across-devices>`__
What is a ``state_dict``?
-------------------------
In PyTorch, the learnable parameters (i.e. weights and biases) of a
``torch.nn.Module`` model are contained in the model’s *parameters*
(accessed with ``model.parameters()``). A *state_dict* is simply a
Python dictionary object that maps each layer to its parameter tensor.
Note that only layers with learnable parameters (convolutional layers,
linear layers, etc.) have entries in the model’s *state_dict*. Optimizer
objects (``torch.optim``) also have a *state_dict*, which contains
information about the optimizer’s state, as well as the hyperparameters
used.
Because *state_dict* objects are Python dictionaries, they can be easily
saved, updated, altered, and restored, adding a great deal of modularity
to PyTorch models and optimizers.
Example:
^^^^^^^^
Let’s take a look at the *state_dict* from the simple model used in the
`Training a
classifier <https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py>`__
tutorial.
.. code:: python
# Define model
class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
# Initialize model
model = TheModelClass()
# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())

# Print optimizer's state_dict
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name, "\t", optimizer.state_dict()[var_name])
**Output:**
::
Model's state_dict:
conv1.weight torch.Size([6, 3, 5, 5])
conv1.bias torch.Size([6])
conv2.weight torch.Size([16, 6, 5, 5])
conv2.bias torch.Size([16])
fc1.weight torch.Size([120, 400])
fc1.bias torch.Size([120])
fc2.weight torch.Size([84, 120])
fc2.bias torch.Size([84])
fc3.weight torch.Size([10, 84])
fc3.bias torch.Size([10])
Optimizer's state_dict:
state {}
param_groups [{'lr': 0.001, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0, 'nesterov': False, 'params': [4675713712, 4675713784, 4675714000, 4675714072, 4675714216, 4675714288, 4675714432, 4675714504, 4675714648, 4675714720]}]
Saving & Loading Model for Inference
------------------------------------
Save/Load ``state_dict`` (Recommended)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
When saving a model for inference, it is only necessary to save the
trained model’s learned parameters. Saving the model’s *state_dict* with
the ``torch.save()`` function will give you the most flexibility for
restoring the model later, which is why it is the recommended method for
saving models.
A common PyTorch convention is to save models using either a ``.pt`` or
``.pth`` file extension.
Remember that you must call ``model.eval()`` to set dropout and batch
normalization layers to evaluation mode before running inference.
Failing to do this will yield inconsistent inference results.
.. Note ::
Notice that the ``load_state_dict()`` function takes a dictionary
object, NOT a path to a saved object. This means that you must
deserialize the saved *state_dict* before you pass it to the
``load_state_dict()`` function. For example, you CANNOT load using
``model.load_state_dict(PATH)``.
Save/Load Entire Model
^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model, PATH)
**Load:**
.. code:: python
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
This save/load process uses the most intuitive syntax and involves the
least amount of code. Saving a model in this way will save the entire
module using Python’s
`pickle <https://docs.python.org/3/library/pickle.html>`__ module. The
disadvantage of this approach is that the serialized data is bound to
the specific classes and the exact directory structure used when the
model is saved. The reason for this is because pickle does not save the
model class itself. Rather, it saves a path to the file containing the
class, which is used during load time. Because of this, your code can
break in various ways when used in other projects or after refactors.
A common PyTorch convention is to save models using either a ``.pt`` or
``.pth`` file extension.
Remember that you must call ``model.eval()`` to set dropout and batch
normalization layers to evaluation mode before running inference.
Failing to do this will yield inconsistent inference results.
Saving & Loading a General Checkpoint for Inference and/or Resuming Training
----------------------------------------------------------------------------
Save:
^^^^^
.. code:: python
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
Load:
^^^^^
.. code:: python
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
When saving a general checkpoint, to be used for either inference or
resuming training, you must save more than just the model’s
*state_dict*. It is important to also save the optimizer’s *state_dict*,
as this contains buffers and parameters that are updated as the model
trains. Other items that you may want to save are the epoch you left off
on, the latest recorded training loss, external ``torch.nn.Embedding``
layers, etc.
To save multiple components, organize them in a dictionary and use
``torch.save()`` to serialize the dictionary. A common PyTorch
convention is to save these checkpoints using the ``.tar`` file
extension.
To load the items, first initialize the model and optimizer, then load
the dictionary locally using ``torch.load()``. From here, you can easily
access the saved items by simply querying the dictionary as you would
expect.
Remember that you must call ``model.eval()`` to set dropout and batch
normalization layers to evaluation mode before running inference.
Failing to do this will yield inconsistent inference results. If you
wish to resume training, call ``model.train()`` to ensure these layers
are in training mode.
Saving Multiple Models in One File
----------------------------------
Save:
^^^^^
.. code:: python
torch.save({
'modelA_state_dict': modelA.state_dict(),
'modelB_state_dict': modelB.state_dict(),
'optimizerA_state_dict': optimizerA.state_dict(),
'optimizerB_state_dict': optimizerB.state_dict(),
...
}, PATH)
Load:
^^^^^
.. code:: python
modelA = TheModelAClass(*args, **kwargs)
modelB = TheModelBClass(*args, **kwargs)
optimizerA = TheOptimizerAClass(*args, **kwargs)
optimizerB = TheOptimizerBClass(*args, **kwargs)
checkpoint = torch.load(PATH)
modelA.load_state_dict(checkpoint['modelA_state_dict'])
modelB.load_state_dict(checkpoint['modelB_state_dict'])
optimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])
optimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])
modelA.eval()
modelB.eval()
# - or -
modelA.train()
modelB.train()
When saving a model comprised of multiple ``torch.nn.Modules``, such as
a GAN, a sequence-to-sequence model, or an ensemble of models, you
follow the same approach as when you are saving a general checkpoint. In
other words, save a dictionary of each model’s *state_dict* and
corresponding optimizer. As mentioned before, you can save any other
items that may aid you in resuming training by simply appending them to
the dictionary.
A common PyTorch convention is to save these checkpoints using the
``.tar`` file extension.
To load the models, first initialize the models and optimizers, then
load the dictionary locally using ``torch.load()``. From here, you can
easily access the saved items by simply querying the dictionary as you
would expect.
Remember that you must call ``model.eval()`` to set dropout and batch
normalization layers to evaluation mode before running inference.
Failing to do this will yield inconsistent inference results. If you
wish to resume training, call ``model.train()`` to set these layers to
training mode.
Warmstarting Model Using Parameters from a Different Model
----------------------------------------------------------
Save:
^^^^^
.. code:: python
torch.save(modelA.state_dict(), PATH)
Load:
^^^^^
.. code:: python
modelB = TheModelBClass(*args, **kwargs)
modelB.load_state_dict(torch.load(PATH), strict=False)
Partially loading a model or loading a partial model are common
scenarios when transfer learning or training a new complex model.
Leveraging trained parameters, even if only a few are usable, will help
to warmstart the training process and hopefully help your model converge
much faster than training from scratch.
Whether you are loading from a partial *state_dict*, which is missing
some keys, or loading a *state_dict* with more keys than the model that
you are loading into, you can set the ``strict`` argument to **False**
in the ``load_state_dict()`` function to ignore non-matching keys.
If you want to load parameters from one layer to another, but some keys
do not match, simply change the name of the parameter keys in the
*state_dict* that you are loading to match the keys in the model that
you are loading into.
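Renaming keys is ordinary dictionary manipulation. A minimal sketch of the idea, using a plain dictionary in place of a real *state_dict* and hypothetical layer names ``features`` and ``encoder``:

```python
# Stand-in for torch.load(PATH): keys follow the source model's layer names
saved_state = {"features.weight": "W", "features.bias": "b"}

# Rename the keys so they match the destination model's layer names
renamed = {k.replace("features.", "encoder."): v for k, v in saved_state.items()}
print(renamed)  # {'encoder.weight': 'W', 'encoder.bias': 'b'}

# modelB.load_state_dict(renamed, strict=False) would then accept these keys
```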
Saving & Loading Model Across Devices
-------------------------------------
Save on GPU, Load on CPU
^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
When loading a model on a CPU that was trained with a GPU, pass
``torch.device('cpu')`` to the ``map_location`` argument in the
``torch.load()`` function. In this case, the storages underlying the
tensors are dynamically remapped to the CPU device using the
``map_location`` argument.
Save on GPU, Load on GPU
^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device("cuda")
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.to(device)
# Make sure to call input = input.to(device) on any input tensors that you feed to the model
When loading a model on a GPU that was trained and saved on GPU, simply
convert the initialized ``model`` to a CUDA optimized model using
``model.to(torch.device('cuda'))``. Also, be sure to use the
``.to(torch.device('cuda'))`` function on all model inputs to prepare
the data for the model. Note that calling ``my_tensor.to(device)``
returns a new copy of ``my_tensor`` on GPU. It does NOT overwrite
``my_tensor``. Therefore, remember to manually overwrite tensors:
``my_tensor = my_tensor.to(torch.device('cuda'))``.
Save on CPU, Load on GPU
^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.state_dict(), PATH)
**Load:**
.. code:: python
device = torch.device("cuda")
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location="cuda:0")) # Choose whatever GPU device number you want
model.to(device)
# Make sure to call input = input.to(device) on any input tensors that you feed to the model
When loading a model on a GPU that was trained and saved on CPU, set the
``map_location`` argument in the ``torch.load()`` function to
*cuda:device_id*. This loads the model to a given GPU device. Next, be
sure to call ``model.to(torch.device('cuda'))`` to convert the model’s
parameter tensors to CUDA tensors. Finally, be sure to use the
``.to(torch.device('cuda'))`` function on all model inputs to prepare
the data for the CUDA optimized model. Note that calling
``my_tensor.to(device)`` returns a new copy of ``my_tensor`` on GPU. It
does NOT overwrite ``my_tensor``. Therefore, remember to manually
overwrite tensors: ``my_tensor = my_tensor.to(torch.device('cuda'))``.
Saving ``torch.nn.DataParallel`` Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Save:**
.. code:: python
torch.save(model.module.state_dict(), PATH)
**Load:**
.. code:: python
# Load to whatever device you want
``torch.nn.DataParallel`` is a model wrapper that enables parallel GPU
utilization. To save a ``DataParallel`` model generically, save the
``model.module.state_dict()``. This way, you have the flexibility to
load the model any way you want to any device you want.
Notebook prepared by Mathieu Blondel.
# Welcome
Welcome to the first practical work of the week! In this practical, we will learn about the programming language Python as well as NumPy and Matplotlib, two fundamental tools for data science and machine learning in Python.
# Notebooks
This week, we will use Jupyter notebooks and Google Colab as the primary way to practice machine learning. Notebooks are a great way to mix executable code with rich content (HTML, images, equations written in LaTeX). Colab allows you to run notebooks in the cloud for free without any prior installation, while leveraging the power of [GPUs](https://en.wikipedia.org/wiki/Graphics_processing_unit).
The document that you are reading is not a static web page, but an interactive environment called a notebook, that lets you write and execute code. Notebooks consist of so-called code cells, blocks of one or more Python instructions. For example, here is a code cell that stores the result of a computation (the number of seconds in a day) in a variable and prints its value:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
Click on the "play" button to execute the cell. You should be able to see the result. Alternatively, you can also execute the cell by pressing Ctrl + Enter if you are on Windows / Linux or Command + Enter if you are on a Mac.
Variables that you defined in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Note that the order of execution is important. For instance, if we do not run the cell storing *seconds_in_a_day* beforehand, the above cell will raise an error, as it depends on this variable. To make sure that you run all the cells in the correct order, you can also click on "Runtime" in the top-level menu, then "Run all".
**Exercise.** Add a cell below this cell: click on this cell then click on "+ Code". In the new cell, compute the number of seconds in a year by reusing the variable *seconds_in_a_day*. Run the new cell.
```
365 * seconds_in_a_day
```
# Python
Python is one of the most popular programming languages for machine learning, both in academia and in industry. As such, it is essential to learn this language for anyone interested in machine learning. In this section, we will review Python basics.
## Arithmetic operations
Python supports the usual arithmetic operators: + (addition), * (multiplication), / (division), ** (power), // (integer division).
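A few examples (note that `/` always performs true division in Python 3, returning a float):

```python
print(7 + 3)    # 10
print(7 * 3)    # 21
print(7 / 2)    # 3.5
print(7 // 2)   # 3
print(2 ** 10)  # 1024
```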
## Lists
Lists are a container type for ordered sequences of elements. Lists can be initialized empty
```
my_list = []
```
or with some initial elements
```
my_list = [1, 2, 3]
```
Lists have a dynamic size and elements can be added (appended) to them
```
my_list.append(4)
my_list
```
We can access individual elements of a list (indexing starts from 0)
```
my_list[2]
```
We can access "slices" of a list using `my_list[i:j]` where `i` is the start of the slice (again, indexing starts from 0) and `j` the end of the slice. For instance:
```
my_list[1:3]
```
Omitting the second index means that the slice should run until the end of the list
```
my_list[1:]
```
We can check if an element is in the list using `in`
```
5 in my_list
```
The length of a list can be obtained using the `len` function
```
len(my_list)
```
## Strings
Strings are used to store text. They can be delimited using either single quotes or double quotes
```
string1 = "some text"
string2 = 'some other text'
```
Strings behave similarly to lists. As such we can access individual elements in exactly the same way
```
string1[3]
```
and similarly for slices
```
string1[5:]
```
String concatenation is performed using the `+` operator
```
string1 + " " + string2
```
## Conditionals
As their name indicates, conditionals are a way to execute code depending on whether a condition is True or False. As in other languages, Python supports `if` and `else` but `else if` is contracted into `elif`, as the example below demonstrates.
```
my_variable = 5
if my_variable < 0:
    print("negative")
elif my_variable == 0:
    print("null")
else:  # my_variable > 0
    print("positive")
```
Here `<` and `>` are the strict `less` and `greater than` operators, while `==` is the equality operator (not to be confused with `=`, the variable assignment operator). The operators `<=` and `>=` can be used for less (resp. greater) than or equal comparisons.
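For instance:

```python
print(3 < 5)     # True
print(3 == 3.0)  # True: == compares values
print(4 >= 5)    # False
print(4 <= 4)    # True
```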
Contrary to other languages, blocks of code are delimited using indentation. Here, we use 2-space indentation but many programmers also use 4-space indentation. Any one is fine as long as you are consistent throughout your code.
## Loops
Loops are a way to execute a block of code multiple times. There are two main types of loops: while loops and for loops.
While loop
```
i = 0
while i < len(my_list):
    print(my_list[i])
    i += 1  # equivalent to i = i + 1
```
For loop
```
for i in range(len(my_list)):
    print(my_list[i])
```
If the goal is simply to iterate over a list, we can do so directly as follows
```
for element in my_list:
    print(element)
```
## Functions
To improve code readability, it is common to separate the code into different blocks, each responsible for performing a precise action: functions. A function takes some inputs, processes them, and returns some outputs.
```
def square(x):
    return x ** 2

def multiply(a, b):
    return a * b

# Functions can be composed.
square(multiply(3, 2))
```
To improve code readability, it is sometimes useful to explicitly name the arguments
```
square(multiply(a=3, b=2))
```
## Exercises
**Exercise 1.** Using a conditional, write the [relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function defined as follows
$\text{relu}(x) = \left\{
\begin{array}{rl}
x, & \text{if } x \ge 0 \\
0, & \text{otherwise }.
\end{array}\right.$
```
def relu(x):
    if x >= 0:
        return x
    else:
        return 0

relu(-3)
```
**Exercise 2.** Using a for loop, write a function that computes the [Euclidean norm](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) of a vector, represented as a list.
```
import numpy as np

def euclidean_norm(vector):
    s = 0
    for i in range(len(vector)):
        s += vector[i] ** 2
    return np.sqrt(s)

my_vector = [0.5, -1.2, 3.3, 4.5]
# The result should be roughly 5.729746940310715
euclidean_norm(my_vector)
```
**Exercise 3.** Using a for loop and a conditional, write a function that returns the maximum value in a vector.
```
def vector_maximum(vector):
    max_val = -np.inf
    for i in range(len(vector)):
        max_val = max(max_val, vector[i])
    return max_val

vector_maximum([3, -1.5, 2.3])
```
**Bonus exercise.** if time permits, write a function that sorts a list in ascending order (from smaller to bigger) using the [bubble sort](https://en.wikipedia.org/wiki/Bubble_sort) algorithm.
```
def bubble_sort(arr):
    for i in range(len(arr) - 1):
        # We subtract i because the last i elements are already sorted
        for j in range(len(arr) - i - 1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr

my_array = np.array([4, 1, -3, 3, 2])
# We pass a copy, as bubble_sort modifies the array in place.
bubble_sort(my_array.copy())
```
## Going further
Clearly, it is impossible to cover all the language features in this short introduction. To go further, we recommend the following resources:
* List of Python [tutorials](https://wiki.python.org/moin/BeginnersGuide/Programmers)
* Four-hour [course](https://www.youtube.com/watch?v=rfscVS0vtbw) on Youtube
# NumPy
NumPy is a popular library for storing arrays of numbers and performing computations on them. Not only does this enable you to write code that is often more succinct, it also makes the code faster, since most NumPy routines are implemented in C for speed.
To use NumPy in your program, you need to import it as follows
```
import numpy as np
```
## Array creation
NumPy arrays can be created from Python lists
```
my_array = np.array([1, 2, 3])
my_array
```
NumPy supports arrays of arbitrary dimension. For example, we can create two-dimensional arrays (e.g. to store a matrix) as follows
```
my_2d_array = np.array([[1, 2, 3], [4, 5, 6]])
my_2d_array
```
We can access individual elements of a 2d-array using two indices
```
my_2d_array[1, 2]
```
We can also access rows
```
my_2d_array[1]
```
and columns
```
my_2d_array[:, 2]
```
Arrays have a `shape` attribute
```
print(my_array.shape)
print(my_2d_array.shape)
```
Contrary to Python lists, NumPy arrays must have a type and all elements of the array must have the same type.
```
my_array.dtype
```
The main types are `int32` (32-bit integers), `int64` (64-bit integers), `float32` (32-bit real values) and `float64` (64-bit real values).
The `dtype` can be specified when creating the array
```
my_array = np.array([1, 2, 3], dtype=np.float64)
my_array.dtype
```
We can create arrays of all zeros using
```
zero_array = np.zeros((2, 3))
zero_array
```
and similarly for all ones using `ones` instead of `zeros`.
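For instance, a 2-by-3 array of ones:

```python
import numpy as np

ones_array = np.ones((2, 3))
print(ones_array)
# [[1. 1. 1.]
#  [1. 1. 1.]]
```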
We can create a range of values using
```
np.arange(5)
```
or specifying the starting point
```
np.arange(3, 5)
```
Another useful routine is `linspace` for creating linearly spaced values in an interval. For instance, to create 10 values in `[0, 1]`, we can use
```
np.linspace(0, 1, 10)
```
Another important operation is `reshape`, for changing the shape of an array
```
my_array = np.array([1, 2, 3, 4, 5, 6])
my_array.reshape(3, 2)
```
Play with these operations and make sure you understand them well.
## Basic operations
In NumPy, we express computations directly over arrays. This makes the code much more succinct.
Arithmetic operations can be performed directly over arrays. For instance, assuming two arrays have a compatible shape, we can add them as follows
```
array_a = np.array([1, 2, 3])
array_b = np.array([4, 5, 6])
array_a + array_b
```
Compare this with the equivalent computation using a for loop
```
array_out = np.zeros_like(array_a)
for i in range(len(array_a)):
    array_out[i] = array_a[i] + array_b[i]
array_out
```
Not only is this code more verbose, it will also run much more slowly.
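To see the difference, we can time both versions. Exact numbers depend on the machine, but the vectorized version is typically orders of magnitude faster:

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Element-wise addition with an explicit Python loop
start = time.perf_counter()
out_loop = np.zeros_like(a)
for i in range(len(a)):
    out_loop[i] = a[i] + b[i]
loop_time = time.perf_counter() - start

# The same computation, vectorized
start = time.perf_counter()
out_vec = a + b
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```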
In NumPy, functions that operate on arrays in an element-wise fashion are called [universal functions](https://numpy.org/doc/stable/reference/ufuncs.html). For instance, this is the case for `np.sin`
```
np.sin(array_a)
```
Vector inner product can be performed using `np.dot`
```
np.dot(array_a, array_b)
```
When the two arguments to `np.dot` are both 2d arrays, `np.dot` becomes matrix multiplication
```
array_A = np.random.rand(5, 3)
array_B = np.random.randn(3, 4)
np.dot(array_A, array_B)
```
Matrix transpose can be done using `.transpose()` or `.T` for short
```
array_A.T
```
## Slicing and masking
Like Python lists, NumPy arrays support slicing
```
np.arange(10)[5:]
```
We can also select only certain elements from the array
```
x = np.arange(10)
mask = x >= 5
x[mask]
```
## Exercises
**Exercise 1.** Create a 3d array of shape (2, 2, 2), containing 8 values. Access individual elements and slices.
```
array1 = [[1, 2],
          [3, 4]]
array2 = [[5, 6],
          [7, 8]]
my_array = np.array([array1, array2])
my_array
my_array.shape
my_array[0, 1, 1]
my_array[1]
my_array[:, 1]
```
**Exercise 2.** Rewrite the relu function (see Python section) using [np.maximum](https://numpy.org/doc/stable/reference/generated/numpy.maximum.html). Check that it works on both a single value and on an array of values.
```
def relu_numpy(x):
    return np.maximum(x, 0)

relu_numpy(np.array([1, -3, 2.5]))
```
**Exercise 3.** Rewrite the Euclidean norm of a vector (1d array) using NumPy (without for loop)
```
def euclidean_norm_numpy(x):
    return np.sqrt(np.sum(x ** 2))

my_vector = np.array([0.5, -1.2, 3.3, 4.5])
euclidean_norm_numpy(my_vector)
```
**Exercise 4.** Write a function that computes the Euclidean norms of a matrix (2d array) in a row-wise fashion. Hint: use the `axis` argument of [np.sum](https://numpy.org/doc/stable/reference/generated/numpy.sum.html).
```
def euclidean_norm_2d(X):
    return np.sqrt(np.sum(X ** 2, axis=1))

my_matrix = np.array([[0.5, -1.2, 4.5],
                      [-3.2, 1.9, 2.7]])
# Should return an array of size 2.
euclidean_norm_2d(my_matrix)
```
**Exercise 5.** Compute the mean value of the features in the [iris dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html). Hint: use the `axis` argument on [np.mean](https://numpy.org/doc/stable/reference/generated/numpy.mean.html).
```
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
# Result should be an array of size 4.
np.mean(X, axis=0)
```
## Going further
* NumPy [reference](https://numpy.org/doc/stable/reference/)
* SciPy [lectures](https://scipy-lectures.org/)
* One-hour [tutorial](https://www.youtube.com/watch?v=QUT1VHiLmmI) on Youtube
# Matplotlib
## Basic plots
Matplotlib is a plotting library for Python.
We start with a rudimentary plotting example.
```
from matplotlib import pyplot as plt
x_values = np.linspace(-3, 3, 100)
plt.figure()
plt.plot(x_values, np.sin(x_values), label="Sinusoid")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.title("Matplotlib example")
plt.legend(loc="upper left")
plt.show()
```
We continue with a rudimentary scatter plot example. This example displays samples from the [iris dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) using the first two features. Colors indicate class membership (there are 3 classes).
```
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
X_class0 = X[y == 0]
X_class1 = X[y == 1]
X_class2 = X[y == 2]
plt.figure()
plt.scatter(X_class0[:, 0], X_class0[:, 1], label="Class 0", color="C0")
plt.scatter(X_class1[:, 0], X_class1[:, 1], label="Class 1", color="C1")
plt.scatter(X_class2[:, 0], X_class2[:, 1], label="Class 2", color="C2")
plt.show()
```
We see that samples belonging to class 0 can be linearly separated from the rest using only the first two features.
## Exercises
**Exercise 1.** Plot the relu and the [softplus](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Softplus) functions on the same graph.
```
from matplotlib import pyplot as plt
def softplus(x):
    return np.log(1 + np.exp(x))
x_values = np.linspace(-3, 3, 100)
plt.figure()
plt.plot(x_values, relu_numpy(x_values), label="Relu")
plt.plot(x_values, softplus(x_values), label="Softplus")
plt.xlabel("x")
plt.title("Comparison of relu and softplus")
plt.legend(loc="upper left")
plt.show()
```
What is the main difference between the two functions? **Answer:** softplus is a smooth approximation of relu.
**Exercise 2.** Repeat the same scatter plot but using the [digits dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) instead.
```
from sklearn.datasets import load_digits
X, y = load_digits(return_X_y=True)
plt.figure()
for i in range(10):
    X_class_i = X[y == i]
    plt.scatter(X_class_i[:, 0],
                X_class_i[:, 1],
                label="Class %d" % i,
                color="C%d" % i)
plt.legend(loc="upper left")
plt.show()
```
Clearly, using only the first two features is insufficient to separate the classes.
## Going further
* Official [tutorial](https://matplotlib.org/tutorials/introductory/pyplot.html)
* [Tutorial](https://www.youtube.com/watch?v=qErBw-R2Ybk) on Youtube
<p><font size="6"><b>07 - Pandas: Reshaping data</b></font></p>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
# Pivoting data
## Cfr. excel
People who know Excel, probably know the **Pivot** functionality:

The data of the table:
```
excelample = pd.DataFrame({'Month': ["January", "January", "January", "January",
                                     "February", "February", "February", "February",
                                     "March", "March", "March", "March"],
                           'Category': ["Transportation", "Grocery", "Household", "Entertainment",
                                        "Transportation", "Grocery", "Household", "Entertainment",
                                        "Transportation", "Grocery", "Household", "Entertainment"],
                           'Amount': [74., 235., 175., 100., 115., 240., 225., 125., 90., 260., 200., 120.]})
excelample
excelample_pivot = excelample.pivot(index="Category", columns="Month", values="Amount")
excelample_pivot
```
Interested in *Grand totals*?
```
# sum columns
excelample_pivot.sum(axis=1)
# sum rows
excelample_pivot.sum(axis=0)
```
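As a side note, `pivot_table` can compute these grand totals directly via the `margins` keyword; a small sketch on the same kind of expense data:

```python
import pandas as pd

expenses = pd.DataFrame({
    'Month': ["January", "February", "January", "February"],
    'Category': ["Grocery", "Grocery", "Household", "Household"],
    'Amount': [235., 240., 175., 225.]})

# margins=True adds an 'All' row and column holding the aggregated totals.
totals = expenses.pivot_table(index="Category", columns="Month",
                              values="Amount", aggfunc="sum", margins=True)
print(totals)
```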
## Pivot is just reordering your data:
Small subsample of the titanic dataset:
```
df = pd.DataFrame({'Fare': [7.25, 71.2833, 51.8625, 30.0708, 7.8542, 13.0],
'Pclass': [3, 1, 1, 2, 3, 2],
'Sex': ['male', 'female', 'male', 'female', 'female', 'male'],
'Survived': [0, 1, 0, 1, 0, 1]})
df
df.pivot(index='Pclass', columns='Sex', values='Fare')
df.pivot(index='Pclass', columns='Sex', values='Survived')
```
So far, so good...
Let's now use the full titanic dataset:
```
df = pd.read_csv("data/titanic.csv")
df.head()
```
And try the same pivot (*no worries about the try-except, this is here just used to catch a loooong error*):
```
try:
df.pivot(index='Sex', columns='Pclass', values='Fare')
except Exception as e:
print("Exception!", e)
```
This does not work, because we would end up with multiple values for one cell of the resulting frame, as the error says: `duplicated` values for the columns in the selection. As an example, consider the following rows of our three columns of interest:
```
df.loc[[1, 3], ["Sex", 'Pclass', 'Fare']]
```
Since `pivot` is just restructuring data, where would both values of `Fare` for the same combination of `Sex` and `Pclass` need to go?
Well, they need to be combined according to an aggregation function, which is exactly what `pivot_table` supports.
<div class="alert alert-danger">
<b>NOTE</b>:
<ul>
<li><b>Pivot</b> is purely restructuring: a single value for each index/column combination is required.</li>
</ul>
</div>
## Pivot tables - aggregating while pivoting
```
df = pd.read_csv("data/titanic.csv")
df.pivot_table(index='Sex', columns='Pclass', values='Fare')
```
<div class="alert alert-info">
<b>REMEMBER</b>:
* By default, `pivot_table` takes the **mean** of all values that would end up into one cell. However, you can also specify other aggregation functions using the `aggfunc` keyword.
</div>
```
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='max')
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='count')
```
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>There is a shortcut function for a <code>pivot_table</code> with a <code>aggfunc='count'</code> as aggregation: <code>crosstab</code></li>
</ul>
</div>
```
pd.crosstab(index=df['Sex'], columns=df['Pclass'])
```
## Exercises
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a pivot table with the survival rates for Pclass vs Sex.</li>
</ul>
</div>
```
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
fig, ax1 = plt.subplots()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean').plot(kind='bar',
rot=0,
ax=ax1)
ax1.set_ylabel('Survival ratio')
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a table of the median Fare paid by aged/underaged vs Sex.</li>
</ul>
</div>
```
df['Underaged'] = df['Age'] <= 18
df.pivot_table(index='Underaged', columns='Sex',
values='Fare', aggfunc='median')
```
# Melt - from pivot table to long or tidy format
The `melt` function performs the inverse operation of a `pivot`.
```
pivoted = df.pivot_table(index='Sex', columns='Pclass', values='Fare').reset_index()
pivoted.columns.name = None
pivoted
```
Assume we have a DataFrame like the above. The observations (the average Fare people paid) are spread over different columns. To make sure each value is in its own row, we can use the `melt` function:
```
pd.melt(pivoted)
```
As you can see above, the `melt` function puts all column labels in one column, and all values in a second column.
In this case, this is not fully what we want. We would like to keep the 'Sex' column separately:
```
pd.melt(pivoted, id_vars=['Sex']) #, var_name='Pclass', value_name='Fare')
```
## Tidy data
`melt` can be used to make a dataframe longer, i.e. to make a *tidy* version of your data. In a [tidy dataset](https://vita.had.co.nz/papers/tidy-data.pdf) (also sometimes called 'long-form' data or 'denormalized' data) each observation is stored in its own row and each column contains a single variable:

Consider the following example with measurements in different Waste Water Treatment Plants (WWTP):
```
data = pd.DataFrame({
'WWTP': ['Destelbergen', 'Landegem', 'Dendermonde', 'Eeklo'],
'Treatment A': [8.0, 7.5, 8.3, 6.5],
'Treatment B': [6.3, 5.2, 6.2, 7.2]
})
data
```
This data representation is not "tidy":
- Each row contains two observations of pH (each from a different treatment)
- 'Treatment' (A or B) is a variable not in its own column, but used as column headers
We can `melt` the data set to tidy the data:
```
data_long = pd.melt(data, id_vars=["WWTP"],
value_name="pH", var_name="Treatment")
data_long
```
The usage of the tidy data representation has some important benefits when working with `groupby` or data visualization libraries such as Seaborn:
```
data_long.groupby("Treatment")["pH"].mean() # switch to `WWTP`
sns.catplot(data=data, x="WWTP", y="...", hue="...", kind="bar") # this doesn't work that easily
sns.catplot(data=data_long, x="WWTP", y="pH",
hue="Treatment", kind="bar") # switch `WWTP` and `Treatment`
```
# Reshaping with `stack` and `unstack`
The docs say:
> Pivot a level of the (possibly hierarchical) column labels, returning a
DataFrame (or Series in the case of an object with a single level of
column labels) having a hierarchical index with a new inner-most level
of row labels.
Indeed...
<img src="../img/pandas/schema-stack.svg" width=50%>
Before we speak about `hierarchical index`, first check it in practice on the following dummy example:
```
df = pd.DataFrame({'A':['one', 'one', 'two', 'two'],
'B':['a', 'b', 'a', 'b'],
'C':range(4)})
df
```
To use `stack`/`unstack`, the values we want to shift from rows to columns (or the other way around) need to be in the index:
```
df = df.set_index(['A', 'B']) # Indeed, you can combine two indices
df
result = df['C'].unstack()
result
df = result.stack().reset_index(name='C')
df
```
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li><b>stack</b>: make your data <i>longer</i> and <i>smaller</i> </li>
<li><b>unstack</b>: make your data <i>shorter</i> and <i>wider</i> </li>
</ul>
</div>
## Mimic pivot table
To better understand and reason about pivot tables, we can express this method as a combination of more basic steps. In short, a pivot table is a convenient way of expressing the combination of a `groupby` and a `stack`/`unstack`.
```
df = pd.read_csv("data/titanic.csv")
df.head()
```
## Exercises
```
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above based on a combination of `groupby` and `unstack`</li>
<li>First use `groupby` to calculate the survival ratio for all groups</li>
<li>Then, use `unstack` to reshape the output of the groupby operation</li>
</ul>
</div>
```
df.groupby(['Pclass', 'Sex'])['Survived'].mean().unstack()
```
# [OPTIONAL] Exercises: use the reshaping methods with the movie data
These exercises are based on the [PyCon tutorial of Brandon Rhodes](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so credit to him!) and the datasets he prepared for that. You can download these data from here: [`titles.csv`](https://course-python-data.s3.eu-central-1.amazonaws.com/titles.csv) and [`cast.csv`](https://course-python-data.s3.eu-central-1.amazonaws.com/cast.csv) and put them in the `/notebooks/data` folder.
```
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year over the whole period of available movie data.</li>
</ul>
</div>
```
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
table.plot()
cast.pivot_table(index='year', columns='type', values="character", aggfunc='count').plot()
# for `values`, take a column with no NaN values in order to count all rows effectively -> at this stage: aha moment about the crosstab function(!)
pd.crosstab(index=cast['year'], columns=cast['type']).plot()
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year. Use kind='area' as plot type</li>
</ul>
</div>
```
pd.crosstab(index=cast['year'], columns=cast['type']).plot(kind='area')
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the fraction of roles that have been 'actor' roles each year over the whole period of available movie data.</li>
</ul>
</div>
```
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type').fillna(0)
(table['actor'] / (table['actor'] + table['actress'])).plot(ylim=[0, 1])
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Define a year as a "Superman year" when films of that year feature more Superman characters than Batman characters. How many years in film history have been Superman years?</li>
</ul>
</div>
```
c = cast
c = c[(c.character == 'Superman') | (c.character == 'Batman')]
c = c.groupby(['year', 'character']).size()
c = c.unstack()
c = c.fillna(0)
c.head()
d = c.Superman - c.Batman
print('Superman years:')
print(len(d[d > 0.0]))
```
# Ray RLlib - Application Example - FrozenLake-v0
© 2019-2021, Anyscale. All Rights Reserved

This example uses [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) to train a policy with the `FrozenLake-v0` environment ([gym.openai.com/envs/FrozenLake-v0/](https://gym.openai.com/envs/FrozenLake-v0/)).
For more background about this problem, see:
* ["Introduction to Reinforcement Learning: the Frozen Lake Example"](https://reinforcementlearning4.fun/2019/06/09/introduction-reinforcement-learning-frozen-lake-example/), [Rodolfo Mendes](https://twitter.com/rodmsmendes)
* ["Gym Tutorial: The Frozen Lake"](https://reinforcementlearning4.fun/2019/06/16/gym-tutorial-frozen-lake/), [Rodolfo Mendes](https://twitter.com/rodmsmendes)
```
import pandas as pd
import json
import os
import shutil
import sys
import ray
import ray.rllib.agents.ppo as ppo
info = ray.init(ignore_reinit_error=True)
print("Dashboard URL: http://{}".format(info["webui_url"]))
```
Set up the checkpoint location:
```
checkpoint_root = "tmp/ppo/frozen-lake"
shutil.rmtree(checkpoint_root, ignore_errors=True, onerror=None)
```
Next we'll train an RLlib policy with the `FrozenLake-v0` environment.
By default, training runs for `10` iterations. Increase the `n_iter` setting if you want to see the resulting rewards improve.
Also note that *checkpoints* get saved after each iteration into the `tmp/ppo/frozen-lake` directory.
> **Note:** If you prefer to use a different directory root than `/tmp`, change it in the next cell **and** in the `rllib rollout` command below.
```
SELECT_ENV = "FrozenLake-v0"
N_ITER = 10
config = ppo.DEFAULT_CONFIG.copy()
config["log_level"] = "WARN"
agent = ppo.PPOTrainer(config, env=SELECT_ENV)
results = []
episode_data = []
episode_json = []
for n in range(N_ITER):
result = agent.train()
results.append(result)
episode = {
"n": n,
"episode_reward_min": result["episode_reward_min"],
"episode_reward_mean": result["episode_reward_mean"],
"episode_reward_max": result["episode_reward_max"],
"episode_len_mean": result["episode_len_mean"],
}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
file_name = agent.save(checkpoint_root)
print(f'{n+1:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}, len mean: {result["episode_len_mean"]:8.4f}. Checkpoint saved to {file_name}')
import pprint
policy = agent.get_policy()
model = policy.model
pprint.pprint(model.variables())
pprint.pprint(model.value_function())
print(model.base_model.summary())
ray.shutdown()
```
## Rollout
Next we'll use the [`rollout` script](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) to evaluate the trained policy.
The following rollout visualizes the "character" agent operating within its simulation: trying to find a walkable path to a goal tile.
The [FrozenLake-v0 environment](https://gym.openai.com/envs/FrozenLake-v0/) documentation provides a detailed explanation of the text encoding. The *observation space* is defined as a 4x4 grid:
* `S` -- starting point, safe
* `F` -- frozen surface, safe
* `H` -- hole, fall to your doom
* `G` -- goal, where the frisbee is located
* orange rectangle shows where the character/agent is currently located
Note that for each action, the grid gets printed first followed by the action.
The *action space* is defined by four possible movements across the grid on the frozen lake:
* Up
* Down
* Left
* Right
The *rewards* at the end of each episode are structured as:
* 1 if you reach the goal
* 0 otherwise
An episode ends when you reach the goal or fall in a hole.
```
!rllib rollout \
tmp/ppo/frozen-lake/checkpoint_10/checkpoint-10 \
--config "{\"env\": \"FrozenLake-v0\"}" \
--run PPO \
--steps 2000
```
The rollout uses the last saved checkpoint (`checkpoint_10`), evaluated through `2000` steps.
Modify the path to view other checkpoints.
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Recommending movies: retrieval
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/basic_retrieval"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/basic_retrieval.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Real-world recommender systems are often composed of two stages:
1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.
2. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.
In this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our [ranking](basic_ranking) tutorial.
Retrieval models are often composed of two sub-models:
1. A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.
2. A candidate model computing the candidate representation (an equally-sized vector) using the candidate features.
The outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.
In this tutorial, we're going to build and train such a two-tower model using the Movielens dataset.
We're going to:
1. Get our data and split it into a training and test set.
2. Implement a retrieval model.
3. Fit and evaluate it.
4. Export it for efficient serving by building an approximate nearest neighbours (ANN) index.
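The dot-product scoring that makes this work can be illustrated with plain NumPy (the embeddings below are made up for illustration, not produced by any model):

```python
import numpy as np

# One query embedding and three candidate embeddings of the same width.
query = np.array([0.5, -0.2, 0.1, 0.7])
candidates = np.array([
    [0.4, -0.1, 0.0, 0.8],   # similar direction -> high score
    [-0.5, 0.3, 0.2, -0.6],  # opposite direction -> low score
    [0.0, 0.9, -0.4, 0.1],   # mostly orthogonal -> middling score
])

scores = candidates @ query        # affinity = dot product per candidate
best = int(np.argmax(scores))
print(scores, "-> best candidate:", best)
```

Retrieval then amounts to returning the candidates with the highest scores — exactly what the ANN index at the end of this tutorial speeds up.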
## The dataset
The Movielens dataset is a classic dataset from the [GroupLens](https://grouplens.org/datasets/movielens/) research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.
The data can be treated in two ways:
1. It can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.
2. It can also be seen as expressing how much the users liked the movies they did watch. This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they have given.
In this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.
## Imports
Let's first get our imports out of the way.
```
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
```
## Preparing the dataset
Let's first have a look at the data.
We use the MovieLens dataset from [Tensorflow Datasets](https://www.tensorflow.org/datasets). Loading `movie_lens/100k_ratings` yields a `tf.data.Dataset` object containing the ratings data and loading `movie_lens/100k_movies` yields a `tf.data.Dataset` object containing only the movies data.
Note that since the MovieLens dataset does not have predefined splits, all data are under `train` split.
```
# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")
```
The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:
```
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
```
The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.
```
for x in movies.take(1).as_numpy_iterator():
pprint.pprint(x)
```
In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality.
We keep only the `user_id` and `movie_title` fields in the dataset.
```
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
```
To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.
In this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.
```
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
```
Let's also figure out unique user ids and movie titles present in the data.
This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.
```
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
unique_movie_titles[:10]
```
## Implementing a model
Choosing the architecture of our model is a key part of modelling.
Because we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.
### The query tower
Let's start with the query tower.
The first step is to decide on the dimensionality of the query and candidate representations:
```
embedding_dimension = 32
```
Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.
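To see why larger dimensions are slower to fit, note that the embedding tables grow linearly with the dimension. A back-of-the-envelope sketch using the MovieLens 100K vocabulary sizes (943 users, 1682 movies):

```python
# Each tower's table has one row per vocabulary entry plus one OOV row,
# and one parameter per (row, dimension) cell.
n_users, n_movies = 943, 1682
for dim in (16, 32, 64):
    params = (n_users + 1) * dim + (n_movies + 1) * dim
    print(f"dim={dim}: {params} embedding parameters")
```

Doubling `embedding_dimension` doubles the parameter count (and hence memory and per-step compute), which is the capacity/overfitting trade-off described above.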
The second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an `Embedding` layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:
```
user_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add an additional embedding to account for unknown tokens.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
```
A simple model like this corresponds exactly to a classic [matrix factorization](https://ieeexplore.ieee.org/abstract/document/4781121) approach. While defining a subclass of `tf.keras.Model` for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an `embedding_dimension`-wide output at the end.
### The candidate tower
We can do the same with the candidate tower.
```
movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
```
### Metrics
In our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.
To do this, we can use the `tfrs.metrics.FactorizedTopK` metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.
In our case, that's the `movies` dataset, converted into embeddings via our movie model:
```
metrics = tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(movie_model)
)
```
### Loss
The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.
In this instance, we'll make use of the `Retrieval` task object: a convenience wrapper that bundles together the loss function and metric computation:
```
task = tfrs.tasks.Retrieval(
metrics=metrics
)
```
The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.
### The full model
We can now put it all together into a model. TFRS exposes a base model class (`tfrs.models.Model`) which streamlines building models: all we need to do is to set up the components in the `__init__` method, and implement the `compute_loss` method, taking in the raw features and returning a loss value.
The base model will then take care of creating the appropriate training loop to fit our model.
```
class MovielensModel(tfrs.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model,
# getting embeddings back.
positive_movie_embeddings = self.movie_model(features["movie_title"])
# The task computes the loss and the metrics.
return self.task(user_embeddings, positive_movie_embeddings)
```
The `tfrs.Model` base class is simply a convenience class: it allows us to compute both training and test losses using the same method.
Under the hood, it's still a plain Keras model. You could achieve the same functionality by inheriting from `tf.keras.Model` and overriding the `train_step` and `test_step` functions (see [the guide](https://keras.io/guides/customizing_what_happens_in_fit/) for details):
```
class NoBaseClassMovielensModel(tf.keras.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Set up a gradient tape to record gradients.
with tf.GradientTape() as tape:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
gradients = tape.gradient(total_loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
```
In these tutorials, however, we stick to using the `tfrs.Model` base class to keep our focus on modelling and abstract away some of the boilerplate.
## Fitting and evaluating
After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.
Let's first instantiate the model.
```
model = MovielensModel(user_model, movie_model)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
```
Then shuffle, batch, and cache the training and evaluation data.
```
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
```
Then train the model:
```
model.fit(cached_train, epochs=3)
```
As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.
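The metric itself is easy to sketch with NumPy (toy scores, not the model's actual output): count how often the true item ranks inside the k best-scoring candidates.

```python
import numpy as np

def top_k_accuracy(scores, true_idx, k):
    """scores: (n_queries, n_candidates); true_idx: (n_queries,)."""
    # Indices of the k highest-scoring candidates for each query.
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = (top_k == true_idx[:, None]).any(axis=1)
    return hits.mean()

scores = np.array([[0.9, 0.1, 0.5],
                   [0.2, 0.8, 0.3],
                   [0.4, 0.6, 0.1]])
true_idx = np.array([0, 0, 1])
print(top_k_accuracy(scores, true_idx, k=2))  # hit for 2 of the 3 queries
```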
Note that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation.
Finally, we can evaluate our model on the test set:
```
model.evaluate(cached_test, return_dict=True)
```
Test set performance is much worse than training performance. This is due to two factors:
1. Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. It can be mitigated by model regularization and by using user and movie features that help the model generalize better to unseen data.
2. The model is re-recommending some of users' already watched movies. These known-positive watches can crowd test movies out of the top-K recommendations.
The second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).
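If filtering past watches is nevertheless desired, a simple post-processing sketch (plain Python, not a TFRS feature) drops already-seen titles before taking the top k:

```python
def filter_seen(recommendations, seen_titles, k):
    """Keep the first k recommended titles the user has not watched yet."""
    seen = set(seen_titles)
    return [title for title in recommendations if title not in seen][:k]

recs = ["Movie A", "Movie B", "Movie C", "Movie D"]
print(filter_seen(recs, seen_titles={"Movie B"}, k=2))  # ['Movie A', 'Movie C']
```

In practice you would retrieve more than k candidates from the index so that enough survive the filter.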
## Making predictions
Now that we have a model, we would like to be able to make predictions. We can use the `tfrs.layers.factorized_top_k.BruteForce` layer to do this.
```
# Create a model that takes in raw query features and recommends
# movies out of the entire movies dataset.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
```
Of course, the `BruteForce` layer is going to be too slow to serve a model with many possible candidates. The following section shows how to speed this up by using an approximate retrieval index.
## Model serving
After the model is trained, we need a way to deploy it.
In a two-tower retrieval model, serving has two components:
- a serving query model, taking in features of the query and transforming them into a query embedding, and
- a serving candidate model. This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.
### Exporting a query model to serving
Exporting the query model is easy: we can either serialize the Keras model directly, or export it to a `SavedModel` format to make it possible to serve using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving).
To export to a `SavedModel` format, we can do the following:
```
# Export the query model.
with tempfile.TemporaryDirectory() as tmp:
path = os.path.join(tmp, "query_model")
model.user_model.save(path)
loaded = tf.keras.models.load_model(path)
query_embedding = loaded(tf.constant(["10"]))
print(f"Query embedding: {query_embedding[0, :3]}")
```
### Building a candidate ANN index
Exporting candidate representations is more involved. Firstly, we want to pre-compute them to make sure serving is fast; this is especially important if the candidate model is computationally intensive (for example, if it has many or wide layers; or uses complex representations for text or images). Secondly, we would like to take the precomputed representations and use them to construct a fast approximate retrieval index.
We can use [Annoy](https://github.com/spotify/annoy) to build such an index.
Annoy isn't included in the base TFRS package. To install it, run:
```
!pip install -q annoy
```
We can now create the index object.
```
from annoy import AnnoyIndex
index = AnnoyIndex(embedding_dimension, "dot")
```
Then take the candidate dataset and transform its raw features into embeddings using the movie model:
```
movie_embeddings = movies.enumerate().map(lambda idx, title: (idx, title, model.movie_model(title)))
```
And then index the movie_id, movie embedding pairs into our Annoy index:
```
movie_id_to_title = dict((idx, title) for idx, title, _ in movie_embeddings.as_numpy_iterator())
# We unbatch the dataset because Annoy accepts only scalar (id, embedding) pairs.
for movie_id, _, movie_embedding in movie_embeddings.as_numpy_iterator():
    index.add_item(movie_id, movie_embedding)
# Build a 10-tree ANN index.
index.build(10)
```
We can then retrieve nearest neighbours:
```
for row in test.batch(1).take(3):
    query_embedding = model.user_model(row["user_id"])[0]
    candidates = index.get_nns_by_vector(query_embedding, 3)
    print(f"Candidates: {[movie_id_to_title[x] for x in candidates]}.")
```
## Next steps
This concludes the retrieval tutorial.
To expand on what is presented here, have a look at:
1. Learning multi-task models: jointly optimizing for ratings and clicks.
2. Using movie metadata: building a more complex movie model to alleviate cold-start.
## Plotting boundaries of HxC planes with different numbers of bins.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
lag = 512 # {32, 64, 128, 256, 512} Choose one of them
base = pd.read_pickle('./pkl_datasets/mamiraua_dataset_ACF_' + str(lag) + '.gzip')
plt.figure(figsize=(24,10))
plt.rc('font', size=22)
plt.rc('axes', titlesize=22)
plt.subplot(1,2,1)
lags = [32, 64, 128, 256, 512]
for lag in lags:
    cotas = pd.read_csv('./boundary_files/Cotas_HxC_bins_' + str(int(lag)) + '.csv')
    noise = pd.read_csv('./coloredNoises/coloredNoises_' + str(int(lag)) + '.csv')
    if lag == 32:
        plt.plot(cotas['Entropy'], cotas['Complexity'], '--k', label='HxC boundaries')
        plt.plot(noise['Entropy'], noise['Complexity'], '--b', label='Colored noises')
    else:
        plt.plot(cotas['Entropy'], cotas['Complexity'], '--k', label='')
        plt.plot(noise['Entropy'], noise['Complexity'], '--b', label='')
plt.text(0.7, 0.475, '512', fontsize= 18)
plt.text(0.7, 0.445, '256', fontsize= 18)
plt.text(0.7, 0.415, '128', fontsize= 18)
plt.text(0.7, 0.376, '64', fontsize= 18)
plt.text(0.7, 0.34, '32', fontsize= 18)
plt.text(0.58, 0.27, '512', fontsize= 16, color='blue', backgroundcolor='0.99')
plt.text(0.6, 0.254, '256', fontsize= 16, color='blue', backgroundcolor='0.99')
plt.text(0.62, 0.238, '128', fontsize= 16, color='blue', backgroundcolor='0.99')
plt.text(0.64, 0.223, '64', fontsize= 16, color='blue', backgroundcolor='0.99')
plt.text(0.66, 0.207, '32', fontsize= 16, color='blue', backgroundcolor='0.99')
plt.xlim([0, 1])
plt.ylim([0, np.max(cotas['Complexity'])+0.01])
plt.ylabel('Complexity [C]')
plt.xlabel('Entropy [H]')
plt.legend(loc = 'upper left', frameon=False)
plt.title('a)')
plt.text(0.66, 0.015, 'disorder', fontsize= 22)
plt.arrow(0.15, 0.01, 0.705, 0, head_width=0.015, head_length=0.015, linewidth=1, length_includes_head=True)
plt.text(0.2, 0.055, 'order', fontsize= 22)
plt.arrow(0.85, 0.05, -0.70, 0, head_width=0.015, head_length=0.015, linewidth=1, length_includes_head=True)
plt.subplot(1,2,2)
plt.plot(cotas['Entropy'],cotas['Complexity'], '--k', label = 'HxC boundaries')
plt.plot(noise['Entropy'],noise['Complexity'], '--b', label = 'Colored noises')
plt.xlim([0, 1])
plt.ylim([0, np.max(cotas['Complexity'])+0.01])
plt.ylabel('Complexity [C]')
plt.xlabel('Entropy [H]')
plt.legend(loc = 'upper left', frameon=False)
plt.scatter(base['H'], base['C'], marker='.', s=15, c=base['JSD'], norm=plt.Normalize(vmax=1, vmin=0),
cmap = 'tab20c')
plt.axvline(x=0.7, ymin=0, linewidth=1.2, color='r', ls='-.')
plt.axhline(y=.40, xmin=0, xmax=0.7, linewidth=1.2, color='r', ls='-.')
plt.axhline(y=.37, xmin=0, xmax=0.7, linewidth=1.2, color='r', ls='-.')
plt.axhline(y=.34, xmin=0, xmax=0.7, linewidth=1.2, color='r', ls='-.')
plt.plot(.7, .40, 'o', color='r', linewidth=1)
plt.annotate('$p_1$', xy=(.71, .40))
plt.plot(.7, .37, 'o', color='r', linewidth=1)
plt.annotate('$p_2$', xy=(.71, .37))
plt.plot(.7, .34, 'o', color='r', linewidth=1)
plt.annotate('$p_3$', xy=(.71, .34))
plt.title('b)')
# plt.savefig('./figures/Fig1.eps', format="eps", bbox_inches='tight')
plt.show()
```
# Clustering and the k-means Algorithm
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Introduction
This notebook introduces [cluster analysis](https://en.wikipedia.org/wiki/Cluster_analysis) and one of the most common algorithms for it, [k-means](https://en.wikipedia.org/wiki/K-means_clustering).
It also introduces
* Jupyter, which is a tool for creating notebooks like this one;
* NumPy, which we'll use to perform array operations;
* Pandas, which we'll use to read and clean the data; and
* scikit-learn, which provides an implementation of k-means.
We'll proceed "top-down"; that is, we'll use scikit-learn first, then we'll open the hood and see how it works.
If you want to follow along:
* Use this link to run this notebook and do the exercises: [tinyurl.com/DowPen20](https://tinyurl.com/DowPen20)
* Use this link to run the same notebook with solutions: [tinyurl.com/DowPenSoln20](https://tinyurl.com/DowPenSoln20)
### Bio
I am a professor at [Olin College](http://www.olin.edu/), which is a small engineering school near Boston, Massachusetts, USA.
Olin was created in 1999 with the mission to transform engineering education.
```
%%html
<iframe src="https://www.google.com/maps/embed?pb=!1m14!1m8!1m3!1d1512.1750667940496!2d-71.26457056946273!3d42.29270982134376!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x0%3A0xa038229eeed8c35b!2sOlin%20College%20of%20Engineering!5e1!3m2!1sen!2sus!4v1594232142090!5m2!1sen!2sus" width="600" height="450" frameborder="0" style="border:0;" allowfullscreen="" aria-hidden="false" tabindex="0"></iframe>
```
## Classes and books
I have been at Olin since 2003. I teach classes related to software, data science, Bayesian statistics, and physical modeling.
I have written several books on these topics, including *Think Python* and *Think Stats*. Most are published by O'Reilly Media, which is famous for putting animals on their covers:
<img src="https://greenteapress.com/covers/think_python_cover_small.jpeg">
But all of them are freely available from [Green Tea Press](https://greenteapress.com/wp/).
Finally, I write a blog about Data Science and related topics, called [Probably Overthinking It](https://www.allendowney.com/blog/).
## Jupyter and Colab
Jupyter is a tool for writing notebooks that contain text, code, and results.
You can install Jupyter on your own computer, but can also use services like Colab that run the notebook for you.
In that case, you don't have to install anything; you just need a browser.
A notebook contains:
* Text cells, which contain text in Markdown or HTML, and
* Code cells, which contain code in Python or one of about 100 other languages.
This is a text cell; the one below is a code cell.
```
print('Hello, Jupyter')
```
On Colab, code cells have a triangle "Play" icon on the left side. You can press it to run the code in the cell.
Or if you select a cell by clicking on it, you can run it by pressing Shift-Enter.
As an exercise:
1. Run the `print` statement in the previous cell.
2. Modify the code in that cell and run it again.
3. Run the next cell, which imports the Python modules we'll use later.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
Also run the following cell, which defines a function we'll use.
```
def decorate(**options):
    """Decorate the current axes.

    Call decorate with keyword arguments like

        decorate(title='Title',
                 xlabel='x',
                 ylabel='y')

    The keyword arguments can be any of the axis properties
    https://matplotlib.org/api/axes_api.html
    """
    ax = plt.gca()
    ax.set(**options)

    handles, labels = ax.get_legend_handles_labels()
    if handles:
        ax.legend(handles, labels)

    plt.tight_layout()
```
## Clustering
Cluster analysis is a set of tools for looking at data and
* Discovering groups, species, or categories,
* Defining boundaries between groups.
It is a form of "unsupervised" learning, which means that the only input is the dataset itself; the algorithm is not given any correct examples to learn from.
As an example, I'll use data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica.
This dataset was published to support this article: Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014.
The following cell downloads the raw data.
```
# Load the data files from https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
import os
if not os.path.exists('penguins_raw.csv'):
    !wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv
```
The dataset is stored in a CSV file, which contains one row for each penguin and one column for each variable.
I'll use Pandas to read the CSV file and put the results in a `DataFrame`.
```
df = pd.read_csv('penguins_raw.csv')
df.shape
```
A `DataFrame` is like a 2-D array, but it also contains names for the columns and labels for the rows.
The `shape` of the `DataFrame` is the number of rows and columns.
The `head` method displays the first few rows.
```
df.head()
```
Three species of penguins are represented in the dataset: Adelie, Chinstrap and Gentoo, as shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license):
<img width="400" src="https://pbs.twimg.com/media/EaAWkZ0U4AA1CQf?format=jpg&name=4096x4096">
In this dataset we are told that there are three species, and we are told which species each penguin belongs to.
But for purposes of clustering, we'll pretend we don't have this information and we'll see whether the algorithm "discovers" the different species.
The measurements we'll use are:
* Body Mass in grams (g).
* Flipper Length in millimeters (mm).
* Culmen Length in millimeters.
* Culmen Depth in millimeters.
If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurement#Culmen), as shown in the following illustration (also by Allison Horst):
<img width="400" src="https://pbs.twimg.com/media/EaAXQn8U4AAoKUj?format=jpg&name=4096x4096">
This might seem like an artificial exercise. If we already know that there are three species, why are we trying to discover them?
For now, I'll just say that it's a learning example. But let's come back to this question: what is unsupervised clustering good for?
## Distributions of measurements
The measurements we have will be most useful for clustering if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I will plot distributions of measurements for each species.
For convenience, I'll create a new column, called `Species2`, that contains a shorter version of the species names.
```
def shorten(species):
    """Select the first word from a string."""
    return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
```
I'll use the `groupby` method to divide the dataset by species.
```
grouped = df.groupby('Species2')
type(grouped)
```
The result is a `GroupBy` object that contains the three groups and their names. The following loop prints the group names and the number of penguins in each group.
```
for name, group in grouped:
    print(name, len(group))
```
We can use the `GroupBy` object to extract a column, like flipper length, from each group and compute its mean.
```
varname = 'Flipper Length (mm)'
for name, group in grouped:
    print(name, group[varname].mean())
```
We can also use it to display the distribution of values in each group.
```
for name, group in grouped:
    sns.kdeplot(group[varname], label=name)

decorate(xlabel=varname,
         ylabel='PDF',
         title='Distributions of features')
```
`kdeplot` uses [kernel density estimation](https://en.wikipedia.org/wiki/Kernel_density_estimation) to make a smooth histogram of the values.
It looks like we can use flipper length to identify Gentoo penguins, but not to distinguish the other two species.
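Under the hood, kernel density estimation places a smooth bump (usually a Gaussian) on each data point and averages them. Here is a minimal NumPy sketch of the idea, using a small hypothetical sample of flipper lengths and a hand-picked bandwidth (seaborn chooses the bandwidth automatically):

```python
import numpy as np

# A small hypothetical sample of flipper lengths in mm.
sample = np.array([181., 186., 190., 195., 210., 215., 220., 225., 230.])

def kde(xs, sample, bandwidth=10.0):
    """Average a normalized Gaussian bump centered on each observation."""
    # One Gaussian per data point, evaluated on the whole grid.
    bumps = np.exp(-0.5 * ((xs[:, None] - sample[None, :]) / bandwidth) ** 2)
    bumps /= bandwidth * np.sqrt(2 * np.pi)
    # Averaging keeps the total area equal to one.
    return bumps.mean(axis=1)

xs = np.linspace(120, 290, 500)
density = kde(xs, sample)
```

Plotting `xs` against `density` yields the same kind of smooth curve that `sns.kdeplot` draws.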
To make these steps easier to reuse, I'll wrap them in a function.
```
def make_kdeplots(df, varname):
    """Make a KDE plot for each species.

    df: DataFrame
    varname: string column name
    """
    grouped = df.groupby('Species2')
    for name, group in grouped:
        sns.kdeplot(group[varname], label=name)

    decorate(xlabel=varname,
             ylabel='PDF',
             title='Distributions of features')
```
Now we can use it to explore other features, like culmen length.
```
make_kdeplots(df, 'Culmen Length (mm)')
```
It looks like we can use culmen length to identify Adelie penguins.
**Exercise:** Use `make_kdeplots` to display the distributions of one of the other two features:
* `'Body Mass (g)'`
* `'Culmen Depth (mm)'`
```
# Solution goes here
```
## Scatter plot
If we can identify Gentoo penguins by flipper length and Adelie penguins by culmen length, maybe we can combine these variables to identify all three species.
I'll start by making a scatter plot of the data.
```
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
var3 = 'Culmen Depth (mm)'
var4 = 'Body Mass (g)'
for name, group in grouped:
    plt.plot(group[var1], group[var2],
             'o', alpha=0.4, label=name)

decorate(xlabel=var1, ylabel=var2)
```
Using those two features, we can divide the penguins into clusters with not much overlap.
We're going to make lots of scatter plots, so let's wrap that code in a function.
And we'll generalize it to take `by` as a parameter, so we can group by any column, not just `Species2`.
```
def scatterplot(df, var1, var2, by):
    """Make a scatter plot.

    df: DataFrame
    var1: string column name, x-axis
    var2: string column name, y-axis
    by: string column name, groupby
    """
    grouped = df.groupby(by)
    for name, group in grouped:
        plt.plot(group[var1], group[var2],
                 'o', alpha=0.4, label=name)

    decorate(xlabel=var1, ylabel=var2)
```
Here's a scatter plot of flipper and culmen length for the three species.
```
scatterplot(df, var1, var2, 'Species2')
```
**Exercise:** Make a scatter plot using any other pair of variables.
```
# Solution goes here
```
We can think of these scatter plots as 2-D views of a 4-D feature space.
## Clear the labels
Now, let's pretend we don't know anything about the different species, and we'll see whether we can rediscover these clusters.
To see what the problem looks like, I'll add a column of labels to the `DataFrame` and set it to 0 for all penguins.
```
df['labels'] = 0
```
Now if we group by label, there's only one big cluster.
```
scatterplot(df, var1, var2, 'labels')
```
Let's see what happens if we run k-means clustering on this data.
## Clustering
First I'll use the implementation of k-means in scikit-learn; then we'll write our own.
In the dataset, we have 344 penguins and 19 variables.
```
df.shape
```
But some of the variables are NaN, which indicates missing data.
So I'll use `dropna` to drop any rows that have missing data for the two features we're going to use, flipper length and culmen length.
```
features = [var1, var2]
data = df.dropna(subset=features).copy()
data.shape
```
I'll extract just those two columns as a NumPy array.
```
M = data[features].to_numpy()
```
Now we can use `KMeans` to identify the clusters.
`n_clusters` indicates how many clusters we want; this parameter is the $k$ the algorithm is named for.
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3).fit(M)
type(kmeans)
```
The result is an object that contains
* Labels that indicate which cluster each penguin is assigned to, and
* The centers of the clusters.
I'll store the labels as a column in `data`.
```
data['labels'] = kmeans.labels_
data['labels']
```
That way we can use `scatterplot` to show the clusters.
```
scatterplot(data, var1, var2, 'labels')
```
The `KMeans` object also contains the centers of the clusters as coordinate pairs in a NumPy array.
```
kmeans.cluster_centers_
```
To plot the centers, I'll transpose the array and assign the columns to `x` and `y`:
```
xs, ys = np.transpose(kmeans.cluster_centers_)
```
I'll plot the centers with x's and o's.
```
options = dict(color='C3', ls='none', mfc='none')
plt.plot(xs, ys, marker='o', ms=15, **options)
plt.plot(xs, ys, marker='x', ms=10, **options);
```
As usual, let's wrap that up in a function.
```
def plot_centers(centers, color='C3'):
    """Plot cluster centers.

    centers: array with x and y columns
    color: string color specification
    """
    xs, ys = np.transpose(centers)
    options = dict(color=color, ls='none', mfc='none')
    plt.plot(xs, ys, marker='o', ms=15, **options)
    plt.plot(xs, ys, marker='x', ms=10, **options)
```
Now let's pull it all together.
```
scatterplot(data, var1, var2, 'labels')
plot_centers(kmeans.cluster_centers_)
```
This figure shows the data, color-coded by assigned label, and the centers of the clusters.
It looks like k-means does a reasonable job of rediscovering the species, but with some confusion between Adelie (lower left) and Chinstrap (top center).
As a reminder, here are the right answers:
```
scatterplot(data, var1, var2, 'Species2')
```
Note that the color coding for the clusters is not consistent because the centers we get from k-means are in a random order.
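One way to make the color coding comparable across runs (a pandas sketch, not a scikit-learn feature) is to relabel each cluster with the most common species among its members:

```python
import pandas as pd

# Hypothetical cluster labels and the known species of the same penguins.
labels = pd.Series([0, 0, 1, 1, 2, 2, 2])
species = pd.Series(['Gentoo', 'Gentoo', 'Adelie', 'Adelie',
                     'Chinstrap', 'Chinstrap', 'Adelie'])

# Map each cluster id to the most common species inside that cluster.
mapping = species.groupby(labels).agg(lambda s: s.mode().iloc[0])

# Replace the arbitrary cluster ids with species names.
aligned = labels.map(mapping)
```

With real data, `labels` would come from `kmeans.labels_` and `species` from the `Species2` column.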
**Exercise:** Here's the code from this section all in one place. Modify it to use any two features and see what the results look like.
```
features2 = [var1, var2]
data2 = df.dropna(subset=features2).copy()
M2 = data2[features2].to_numpy()
kmeans2 = KMeans(n_clusters=3).fit(M2)
data2['labels'] = kmeans2.labels_
scatterplot(data2, var1, var2, 'labels')
```
## Implementing k-means
Now let's see how the algorithm works. At a high level, there are three steps:
1. Choose $k$ random points in the dataset as initial centers.
2. Assign each point in the dataset to the closest center.
3. Compute new centers by calculating the "center of mass" in each cluster.
Then you repeat steps 2 and 3 until the centers stop moving.
To select random points from the dataset, I'll use `np.random.choice` to select three indices.
```
index = np.random.choice(len(M), size=3)
index
```
And then use the indices to select rows from the dataset.
```
centers = M[index]
centers
```
I'll wrap that in a function:
```
def choose_random_start(M, k):
    """Choose k random elements of M.

    M: NumPy array with rows of coordinates
    k: number of centers to choose

    returns: NumPy array
    """
    index = np.random.choice(len(M), size=k)
    centers = M[index]
    return centers
```
Here's how we use it.
```
centers = choose_random_start(M, 3)
centers
```
And here's what the centers look like on the scatterplot.
```
data['labels'] = 0
scatterplot(data, var1, var2, 'labels')
plot_centers(centers)
```
The next step is to assign each point to the closest center. So we need to compute the distance between each point and each of the centers.
## Compute distances
To demonstrate the process, I'll pick just one of the centers.
```
center_x, center_y = centers[0]
center_x, center_y
```
Now it will be convenient to have the `x` and `y` coordinates in separate arrays. I can do that with `np.transpose`, which turns the columns into rows; then I can assign the rows to `x` and `y`.
```
x, y = np.transpose(M)
x.shape
```
Along the x-axis, the distance from each point to this center is `x-center_x`.
Along the y-axis, the distance is `y-center_y`.
The distance from each point to the center is the hypotenuse of the triangle, which I can compute with `np.hypot`:
```
distances = np.hypot(x-center_x, y-center_y)
distances.shape
```
The result is an array that contains the distance from each point to the chosen center.
To see if we got it right, I'll plot the center and the points, with the size of the points proportional to distance.
```
plt.plot(center_x, center_y, 'rx', markersize=10)
plt.scatter(x, y, s=distances)
decorate(xlabel=var1, ylabel=var2)
```
At least visually, it seems like the size of the points is proportional to their distance from the center.
So let's put those steps into a function:
```
def compute_distances(M, center):
    """Compute distances to the given center.

    M: NumPy array of coordinates
    center: x, y coordinates of the center

    returns: NumPy array of float distances
    """
    x, y = np.transpose(M)
    center_x, center_y = center
    distances = np.hypot(x-center_x, y-center_y)
    return distances
```
We can use the function to make a list of distance arrays, one for each center.
```
distance_arrays = [compute_distances(M, center)
                   for center in centers]
len(distance_arrays)
```
## Labeling the points
The next step is to label each point with the index of the center it is closest to.
`distance_arrays` is a list of arrays, but we can convert it to a 2-D array like this:
```
A = np.array(distance_arrays)
A.shape
```
`A` has one row for each center and one column for each point.
Now we can use `np.argmin` to find the shortest distance in each column and return its index.
```
data['labels'] = np.argmin(A, axis=0)
data['labels']
```
The result is an array of indices in the range `0..2`, which we assign to a column in `data`.
Let's put these steps in a function.
```
def compute_labels(M, centers):
    """Label each point with the index of the closest center.

    M: NumPy array of coordinates
    centers: array of coordinates for the centers

    returns: array of labels, 0..k-1
    """
    distance_arrays = [compute_distances(M, center)
                       for center in centers]
    A = np.array(distance_arrays)
    labels = np.argmin(A, axis=0)
    return labels
```
We can call it like this:
```
data['labels'] = compute_labels(M, centers)
```
And here are the results.
```
scatterplot(data, var1, var2, 'labels')
plot_centers(centers)
```
If we get lucky, we might start with one point near the center of each cluster.
But even if we are unlucky, we can improve the results by recentering.
## Find new centers
The last step is to use the labels from the previous step to compute the center of each cluster.
I'll start by using `groupby` to group the points by label.
```
grouped = data.groupby('labels')
for name, group in grouped:
    print(name, len(group))
```
We can use the `GroupBy` object to select the columns we're using and compute their means.
```
data.groupby('labels')[features].mean()
```
The result is a `DataFrame` that contains the central coordinates of each cluster.
I'll put these steps in a function.
```
def compute_new_centers(data, features):
    """Compute the center of each cluster.

    data: DataFrame
    features: list of string column names

    returns: NumPy array of center coordinates
    """
    means = data.groupby('labels')[features].mean()
    return means.to_numpy()
```
The return value is a NumPy array that contains the new centers.
```
new_centers = compute_new_centers(data, features)
new_centers
```
Here's what it looks like with the old centers in gray and the new centers in red.
```
scatterplot(data, var1, var2, 'labels')
plot_centers(centers, color='gray')
plot_centers(new_centers, color='C3')
```
## The k-means algorithm
Now here's the whole algorithm in one function.
```
def k_means(data, features, k):
    """Cluster by k means.

    data: DataFrame
    features: list of string column names
    k: number of clusters

    returns: array of centers
    """
    M = data[features].to_numpy()
    centers = choose_random_start(M, k)

    for i in range(15):
        data['labels'] = compute_labels(M, centers)
        centers = compute_new_centers(data, features)

    return centers
```
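The function above stops after a fixed 15 iterations, but step 4 of the algorithm says to repeat until the centers stop moving, which we can express as a convergence check. Here is a self-contained sketch using NumPy directly and hypothetical blob data (not the penguin measurements):

```python
import numpy as np

def k_means_until_converged(M, k, max_iter=100, tol=1e-6, seed=None):
    """Run k-means until the centers stop moving (or max_iter is reached)."""
    rng = np.random.default_rng(seed)
    centers = M[rng.choice(len(M), size=k, replace=False)]
    for _ in range(max_iter):
        # Distance from every center to every point, shape (k, n).
        dists = np.linalg.norm(centers[:, None, :] - M[None, :, :], axis=2)
        labels = np.argmin(dists, axis=0)
        # New center = mean of the assigned points (keep old center if empty).
        new_centers = np.array([M[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(centers, new_centers, atol=tol):
            break  # the centers stopped moving
        centers = new_centers
    return centers, labels

# Hypothetical data: three well-separated 2-D blobs.
rng = np.random.default_rng(0)
M_demo = np.vstack([rng.normal(loc, 1.0, size=(50, 2))
                    for loc in [(0, 0), (10, 10), (20, 0)]])
centers, labels = k_means_until_converged(M_demo, k=3, seed=0)
```

On well-separated data like this, the loop usually stops long before `max_iter`.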
And here's what the results look like after 15 iterations.
```
centers = k_means(data, features, 3)
scatterplot(data, var1, var2, 'labels')
plot_centers(centers, color='C3')
```
The results are (as far as I can see) identical to what we got from the scikit-learn implementation.
```
kmeans = KMeans(n_clusters=3).fit(M)
data['labels'] = kmeans.labels_
scatterplot(data, var1, var2, 'labels')
plot_centers(kmeans.cluster_centers_)
```
## Animation
Here's an animation that shows the algorithm in action.
```
from time import sleep
from IPython.display import clear_output
interval = 1
centers = choose_random_start(M, k=3)
plt.figure()
for i in range(10):
    # label and scatter plot
    data['labels'] = compute_labels(M, centers)
    scatterplot(data, var1, var2, 'labels')
    plot_centers(centers, color='gray')

    # compute new centers and plot them
    new_centers = compute_new_centers(data, features)
    plot_centers(new_centers)
    centers = new_centers

    # show the plot, wait, and clear
    plt.show()
    sleep(interval)
    clear_output(wait=True)
```
**Exercise:** Run the previous cell a few times. Do you always get the same clusters?
## Number of clusters
All of this is based on the assumption that you know how many clusters you are looking for, which is true for some applications, but not always.
Let's see what goes wrong if you ask for too many clusters, or too few.
**Exercise:** Run the following code with different values of `n_clusters` and see what the results look like.
```
kmeans = KMeans(n_clusters=3).fit(M)
data['labels'] = kmeans.labels_
scatterplot(data, var1, var2, 'labels')
plot_centers(kmeans.cluster_centers_)
```
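One common heuristic for choosing the number of clusters (not used elsewhere in this notebook) is the "elbow method": run `KMeans` for a range of `k` and plot the total within-cluster squared distance, which scikit-learn stores in `inertia_`. The curve drops steeply until `k` matches the number of real clusters, then flattens. A sketch with hypothetical blob data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: three well-separated 2-D blobs.
rng = np.random.default_rng(0)
M_demo = np.vstack([rng.normal(loc, 1.0, size=(50, 2))
                    for loc in [(0, 0), (10, 10), (20, 0)]])

# Total within-cluster squared distance for k = 1..7.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(M_demo).inertia_
            for k in range(1, 8)]
```

Plotting `range(1, 8)` against `inertias` for this data should show a sharp elbow at `k=3`.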
## Standardization
One of the problems with the results we have seen so far is that the lines between the clusters are mostly vertical.
That's because the range of values is wider for flipper length than culmen length, about 60 mm compared to 28 mm.
```
M.max(axis=0) - M.min(axis=0)
M.std(axis=0)
```
When we compute the distance from each point to each center, the distances in the $x$ direction tend to dominate.
This is a common problem with algorithms that are based on distance in multidimensional space.
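To see the problem in miniature, consider two hypothetical penguins whose culmen lengths differ by 1 mm but whose flipper lengths differ by 30 mm; the raw Euclidean distance between them is almost entirely the flipper difference:

```python
import numpy as np

# Hypothetical (flipper length, culmen length) pairs in mm.
a = np.array([190.0, 45.0])
b = np.array([220.0, 46.0])

# Euclidean distance: sqrt(30**2 + 1**2), dominated by the flipper axis.
d = np.hypot(*(a - b))  # about 30.02, barely different from the flipper gap alone
```

Scaling both features to comparable ranges lets the culmen difference contribute meaningfully to the distance.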
It is such a common problem that there is a common solution: [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling).
The goal of feature scaling is to transform the features so the distances along each axis are comparable.
One version of feature scaling is "standardization", which consists of
1. Subtracting the mean from each feature, and
2. Dividing through by the standard deviation.
Here's how we can do it with the features in `M`:
```
means = M.mean(axis=0)
means
stds = M.std(axis=0)
stds
M_std = (M - means) / stds
```
Let's see what happens if we run the algorithm again with standardized features.
Notice that I have to transform the centers back before plotting them.
```
kmeans = KMeans(n_clusters=3).fit(M_std)
data['labels'] = kmeans.labels_
scatterplot(data, var1, var2, 'labels')
centers = kmeans.cluster_centers_ * stds + means
plot_centers(centers)
```
That looks a lot better! Again, here are the actual species for comparison.
```
scatterplot(data, var1, var2, 'Species2')
```
scikit-learn provides `StandardScaler`, which does the same thing.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(M)
M_std = scaler.transform(M)
```
And `scaler` provides `inverse_transform`, which we can use to transform the centers back.
```
kmeans = KMeans(n_clusters=3).fit(M_std)
data['labels'] = kmeans.labels_
scatterplot(data, var1, var2, 'labels')
centers = scaler.inverse_transform(kmeans.cluster_centers_)
plot_centers(centers)
```
## Summary
The k-means algorithm does unsupervised clustering, which means that we don't tell it where the clusters are; we just provide the data and ask it to find a given number of clusters.
In this notebook, we asked it to find clusters in a group of penguins based on two features, flipper length and culmen length. The clusters it finds reflect the species in the dataset, especially if we standardize the data.
In this example we used only two features, because that makes it easy to visualize the results. But k-means extends easily to any number of dimensions (see the exercise below).
So, what is this good for?
Well, [Wikipedia provides this list of applications](https://en.wikipedia.org/wiki/Cluster_analysis#Applications). Reading over these applications, I see a few general ideas:
* From an engineering point of view, clustering can be used to automate some kinds of analysis people do, which might be faster, more accurate, or less expensive. And it can work with large datasets and high numbers of dimensions that people can't handle.
* From a scientific point of view, clustering provides a way to test whether the patterns we see are in the data or in our minds.
This second point is related to old philosophical questions about the [nature of categories](https://plato.stanford.edu/entries/natural-kinds/). Putting things into categories seems to be a natural part of how humans think, but we have to wonder whether the categories we find truly "carve nature at its joints", as [Plato put it](https://mitpress.mit.edu/books/carving-nature-its-joints).
If a clustering algorithm finds the same "joints" we do, we might have more confidence they are not entirely in our minds.
**Exercise:** Use the scikit-learn implementation of k-means to find clusters using all four features (flipper length, culmen length and depth, body mass). How do the results compare to what we got with just two features?
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
# Sample grouping
We are going to dig into the concept of sample groups. As in the previous
section, we will give an example to highlight some surprising results. This
time, we will use the handwritten digits dataset.
```
from sklearn.datasets import load_digits
digits = load_digits()
data, target = digits.data, digits.target
```
We will recreate the same model used in the previous exercise:
a logistic regression classifier with preprocessor to scale the data.
```
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
```
We will use the same baseline model. We will use a `KFold` cross-validation
without shuffling the data at first.
```
from sklearn.model_selection import cross_val_score, KFold
cv = KFold(shuffle=False)
test_score_no_shuffling = cross_val_score(model, data, target, cv=cv,
n_jobs=-1)
print(f"The average accuracy is "
f"{test_score_no_shuffling.mean():.3f} +/- "
f"{test_score_no_shuffling.std():.3f}")
```
Now, let's repeat the experiment by shuffling the data within the
cross-validation.
```
cv = KFold(shuffle=True)
test_score_with_shuffling = cross_val_score(model, data, target, cv=cv,
n_jobs=-1)
print(f"The average accuracy is "
f"{test_score_with_shuffling.mean():.3f} +/- "
f"{test_score_with_shuffling.std():.3f}")
```
We observe that shuffling the data improves the mean accuracy.
We could go a little further and plot the distribution of the testing
score. We can first concatenate the test scores.
```
import pandas as pd
all_scores = pd.DataFrame(
[test_score_no_shuffling, test_score_with_shuffling],
index=["KFold without shuffling", "KFold with shuffling"],
).T
```
Let's plot the distribution now.
```
import matplotlib.pyplot as plt
import seaborn as sns
all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7)
plt.xlim([0.8, 1.0])
plt.xlabel("Accuracy score")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Distribution of the test scores")
```
The cross-validation test scores obtained with shuffling have less variance than
those obtained without shuffling. This means that, without shuffling, some
specific folds lead to a low score.
```
print(test_score_no_shuffling)
```
Thus, there is an underlying structure in the data that shuffling breaks,
which leads to better results. To get a better understanding, we should read
the documentation shipped with the dataset.
```
print(digits.DESCR)
```
If we read carefully, 13 writers wrote the digits of our dataset, accounting
for a total of 1797 samples. Thus, each writer wrote the same numbers several
times. Let's suppose that the writers' samples are grouped.
Subsequently, not shuffling the data will keep all writer samples together
either in the training or the testing sets. Mixing the data will break this
structure, and therefore digits written by the same writer will be available
in both the training and testing sets.
Besides, a writer will usually tend to write digits in the same manner. Thus,
our model will learn to identify a writer's pattern for each digit instead of
recognizing the digit itself.
We can solve this problem by ensuring that all the data associated with a
given writer belong either to the training set or to the testing set. Thus, we
want to group samples by writer.
Here, we will manually define the group for the 13 writers.
```
from itertools import count
import numpy as np
# defines the lower and upper bounds of sample indices
# for each writer
writer_boundaries = [0, 130, 256, 386, 516, 646, 776, 915, 1029,
1157, 1287, 1415, 1545, 1667, 1797]
groups = np.zeros_like(target)
lower_bounds = writer_boundaries[:-1]
upper_bounds = writer_boundaries[1:]
for group_id, lb, up in zip(count(), lower_bounds, upper_bounds):
groups[lb:up] = group_id
```
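As an aside, the loop above can also be written in vectorized form with `np.searchsorted`; a small sketch using hypothetical boundaries for three writers (not the real ones from the dataset):

```python
import numpy as np

# Hypothetical boundaries for three writers
boundaries = [0, 130, 256, 386]
n_samples = boundaries[-1]

# Loop version, as in the cell above
groups_loop = np.zeros(n_samples, dtype=int)
for group_id, (lb, ub) in enumerate(zip(boundaries[:-1], boundaries[1:])):
    groups_loop[lb:ub] = group_id

# Vectorized version: for each sample index, find which interval it falls into
groups_vec = np.searchsorted(boundaries, np.arange(n_samples),
                             side="right") - 1

assert np.array_equal(groups_loop, groups_vec)
```

Both versions assign group `0` to indices `[0, 130)`, group `1` to `[130, 256)`, and so on.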
We can check the grouping by plotting the indices linked to writer ids.
```
plt.plot(groups)
plt.yticks(np.unique(groups))
plt.xticks(writer_boundaries, rotation=90)
plt.xlabel("Target index")
plt.ylabel("Writer index")
_ = plt.title("Underlying writer groups existing in the target")
```
Once we group the digits by writer, we can use cross-validation to take this
information into account: one of the `Group*` cross-validation classes, such as
`GroupKFold`, should be used.
```
from sklearn.model_selection import GroupKFold
cv = GroupKFold()
test_score = cross_val_score(model, data, target, groups=groups, cv=cv,
n_jobs=-1)
print(f"The average accuracy is "
f"{test_score.mean():.3f} +/- "
f"{test_score.std():.3f}")
```
We see that this strategy is less optimistic regarding the model's
generalization performance. However, it is the most reliable choice if our
goal is to make handwritten digit recognition independent of the writer.
Besides, we can also see that the standard deviation was reduced.
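To see why the grouped strategy is more reliable, we can check that `GroupKFold` never places samples from the same group on both sides of a split; a quick sketch with synthetic data (5 hypothetical writers, 4 samples each):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Synthetic data: 20 samples from 5 hypothetical writers
X = np.arange(20).reshape(-1, 1)
y = np.zeros(20)
groups = np.repeat(np.arange(5), 4)

cv = GroupKFold(n_splits=5)
for train_idx, test_idx in cv.split(X, y, groups):
    # No writer ever appears in both the training and the testing set
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```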
```
all_scores = pd.DataFrame(
[test_score_no_shuffling, test_score_with_shuffling, test_score],
index=["KFold without shuffling", "KFold with shuffling",
"KFold with groups"],
).T
all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7)
plt.xlim([0.8, 1.0])
plt.xlabel("Accuracy score")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Distribution of the test scores")
```
In conclusion, it is really important to take any sample grouping pattern
into account when evaluating a model. Otherwise, the results obtained will
be overly optimistic compared with reality.
```
import requests
import sqlalchemy
import xmltodict
from sqlalchemy import create_engine, MetaData
from collections import defaultdict
import datetime
class Capture(object):
def __init__(self,
schema,
database='projetocurio'
):
self.schema = schema
self.database = database
self.engine = self.connect_to_db()
self.meta = self.load_db_schema()
self.url = None
self.data = None
def connect_to_db(self):
return create_engine('postgresql://uploaddata:VgyBhu876%%%@104.155.150.247:5432/projetocurio')
def load_db_schema(self):
metadata = MetaData()
metadata.reflect(self.engine, schema='camara_v1')
return metadata
def request(self, url):
data = requests.get(url)
if data.status_code == 200:
self.data = data.text
else:
self.data = None
def xml_to_dict(self):
self.data = xmltodict.parse(self.data)
def to_default_dict(self, list_of_dic):
return [defaultdict(lambda: None, dic) for dic in list(list_of_dic)]
def capture_data(self, url):
self.request(url)
self.xml_to_dict()
def insert_data(self, list_of_dic, table):
table_string = self.schema + '.' + table
with self.engine.connect() as conn:
print('inserting data')
for dic in list_of_dic:
conn.execute(self.meta.tables[table_string].insert(), dic)
print('closing connection')
def update_data(self, list_of_dic, table, key, force_insert=True):
table_string = self.schema + '.' + table
table = self.meta.tables[table_string]
with self.engine.connect() as conn:
print('updating data')
for dic in list_of_dic:
if self.check_existing_date(table, key, dic[key]):
conn.execute(table.update(whereclause=table.c[key]==dic[key]),
dic)
else:
conn.execute(table.insert(),
dic)
print('closing connection')
    def check_existing_date(self, table, key, value):
        # 'table' is already a Table object when called from update_data
        with self.engine.connect() as conn:
            res = conn.execute(table.select().where(table.c[key] == value))
            return len(res.fetchall()) > 0
a = capture.meta.tables['camara_v1.proposicoes']
with capture.engine.connect() as conn:
table = capture.meta.tables['camara_v1.proposicoes']
a = conn.execute(table.update(
whereclause=table.c.id_proposicao==23435), {'id_proposicao':23435})
b = table.c.id_proposicao==2343
table.c['id_proposicao']
with capture.engine.connect() as conn:
res = conn.execute(table.select().where(table.c.id_proposicao==2180))
list(res)
a.update()
def urls_generator(capture, base_url):
with capture.engine.connect() as conn:
result = list(conn.execute("select MAX(data_captura) from camara_v1.proposicoes_tramitadas_periodo"))
if result[0][0] is None:
dtInicio = datetime.datetime.strftime(datetime.datetime.now() - datetime.timedelta(days=1), '%d/%m/%Y')
dtFim = datetime.datetime.strftime(datetime.datetime.now(), '%d/%m/%Y')
else:
dtInicio = datetime.datetime.strftime(result[0][0], '%d/%m/%Y')
dtFim = datetime.datetime.strftime(datetime.datetime.now(), '%d/%m/%Y')
return base_url.format(dtInicio=dtInicio, dtFim=dtFim)
urls_generator(capture, base_url)
capture = Capture(schema='camara_v1',)
base_url = 'http://www.camara.leg.br/SitCamaraWS/Proposicoes.asmx/ObterVotacaoProposicao?tipo=PL&numero=1992&ano=2007'
capture.capture_data(base_url)
a = capture.data['proposicao']
b = a['Votacoes']['Votacao']
a.keys()
b[0].keys()
def from_api_to_db_votacao_orientacao(data_list, url, data_proposicao, data_votacao, id_proposicao, numero_captura):
func = lambda datum: dict(
id_proposicao= id_proposicao,
tipo_proposicao_sigla= data_proposicao['Sigla'],
numero_proposicao= data_proposicao['Numero'],
ano_proposicao= data_proposicao['Ano'],
resumo_votacao= data_votacao['@Resumo'],
data_votacao= data_votacao['@Data'],
hora_votacao= data_votacao['@Hora'],
objeto_votacao= data_votacao['@ObjVotacao'],
cod_sessao= data_votacao['@codSessao'],
        sigla_partido= datum['@Sigla'],
orientacao_partido= datum['@orientacao'].strip(),
data_captura= datetime.datetime.now(),
url_captura= url,
numero_captura= numero_captura
)
return map(func, data_list)
def from_api_to_db_votacao_deputado(data_list, url, data_proposicao, data_votacao, id_proposicao, numero_captura):
func = lambda datum: dict(
id_proposicao= id_proposicao,
tipo_proposicao_sigla= data_proposicao['Sigla'],
numero_proposicao= data_proposicao['Numero'],
ano_proposicao= data_proposicao['Ano'],
resumo_votacao= data_votacao['@Resumo'],
data_votacao= data_votacao['@Data'],
hora_votacao= data_votacao['@Hora'],
objeto_votacao= data_votacao['@ObjVotacao'],
cod_sessao= data_votacao['@codSessao'],
nome= datum['@Nome'],
ide_cadastro= datum['@ideCadastro'],
sigla_partido= datum['@Partido'],
uf= datum['@UF'],
voto= datum['@Voto'],
data_captura= datetime.datetime.now(),
url_captura= url,
numero_captura= numero_captura
)
return map(func, data_list)
b[0]['@Resumo']
b[0]['@Data']
b[0]['@Hora']
b[0]['@ObjVotacao']
b[0]['@codSessao']
b[0]['orientacaoBancada']['bancada']
b[0]['votos']['Deputado']
tipo_proposicao_sigla= capture.data['proposicao']['@tipo'].strip(),
numero_proposicao= capture.data['proposicao']['@numero'],
ano_proposicao= capture.data['proposicao']['@ano'],
ultimo_despacho_data= to_date(capture.data['proposicao']['UltimoDespacho']['@Data'], '%d/%m/%Y '),
ultimo_despacho= capture.data['proposicao']['UltimoDespacho']['#text'],
capture.data['proposicoes']['proposicao']
def force_list(obj):
if not isinstance(obj, list):
return [obj]
else:
return obj
def to_date(string, formats):
if string is not None:
return datetime.datetime.strptime(string, formats)
else:
return None
def from_api_to_db_obter_deputados_detalhes(data_list, url):
func = lambda datum: dict(
num_legislatura= datum['numLegislatura'],
email= datum['email'],
nome_profissao= datum['nomeProfissao'],
data_nascimento= to_date(datum['dataNascimento'], '%d/%m/%Y'),
data_falecimento= to_date(datum['dataFalecimento'],'%d/%m/%Y'),
uf_representacao_atual= datum['ufRepresentacaoAtual'],
situacao_na_legislatura_atual= datum['situacaoNaLegislaturaAtual'],
ide_cadastro= datum['ideCadastro'],
id_parlamentar_deprecated= datum['idParlamentarDeprecated'],
nome_civil= datum['nomeCivil'],
sexo= datum['sexo'],
partido_atual_id_partido= datum['partidoAtual']['idPartido'],
partido_atual_sigla= datum['partidoAtual']['sigla'],
partido_atual_nome= datum['partidoAtual']['nome'],
gabinete_numero= datum['gabinete']['numero'],
ganinete_anexo= datum['gabinete']['anexo'],
gabinete_telefone= datum['gabinete']['telefone'],
data_captura= datetime.datetime.now(),
url_captura= url
)
return map(func, data_list)
def from_api_to_db_obter_deputados_detalhes_comissoes(data_list, url, data_generic):
func = lambda datum: dict(
ide_cadastro= data_generic['ideCadastro'],
nome_parlamentar_atual= data_generic['nomeParlamentarAtual'],
partido_atual_nome= data_generic['partidoAtual']['nome'],
partido_atual_id_partido= data_generic['partidoAtual']['idPartido'],
uf_representacao_atual= data_generic['ufRepresentacaoAtual'],
id_orgao_legislativo_cd= datum['idOrgaoLegislativoCD'],
sigla_comissao= datum['siglaComissao'],
nome_comissao= datum['nomeComissao'],
condicao_membro= datum['condicaoMembro'],
data_entrada= to_date(datum['dataEntrada'], '%d/%m/%Y'),
data_saida= to_date(datum['dataSaida'], '%d/%m/%Y'),
data_captura= datetime.datetime.now(),
url_captura= url
)
return map(func, data_list)
def from_api_to_db_obter_deputados_detalhes_cargo_comissoes(data_list, url, data_generic):
func = lambda datum: dict(
ide_cadastro= data_generic['ideCadastro'],
nome_parlamentar_atual= data_generic['nomeParlamentarAtual'],
partido_atual_nome= data_generic['partidoAtual']['nome'],
partido_atual_id_partido= data_generic['partidoAtual']['idPartido'],
uf_representacao_atual= data_generic['ufRepresentacaoAtual'],
id_orgao_legislativo_cd= datum['idOrgaoLegislativoCD'],
sigla_comissao= datum['siglaComissao'],
nome_comissao= datum['nomeComissao'],
id_cargo= datum['idCargo'],
nome_cargo= datum['nomeCargo'],
data_entrada= to_date(datum['dataEntrada'], '%d/%m/%Y'),
data_saida= to_date(datum['dataSaida'], '%d/%m/%Y'),
data_captura= datetime.datetime.now(),
url_captura= url
)
return map(func, data_list)
def from_api_to_db_obter_deputados_detalhes_periodo_exercicio(data_list, url, data_generic):
func = lambda datum: dict(
ide_cadastro= data_generic['ideCadastro'],
nome_parlamentar_atual= data_generic['nomeParlamentarAtual'],
partido_atual_nome= data_generic['partidoAtual']['nome'],
partido_atual_id_partido= data_generic['partidoAtual']['idPartido'],
uf_representacao_atual= data_generic['ufRepresentacaoAtual'],
sigla_uf_representacao= datum['siglaUFRepresentacao'],
situacao_exercicio= datum['situacaoExercicio'],
data_inicio= to_date(datum['dataInicio'], '%d/%m/%Y'),
data_fim= to_date(datum['dataFim'], '%d/%m/%Y'),
id_causa_fim_exercicio= datum['idCausaFimExercicio'],
descricao_causa_fim_exercicio= datum['descricaoCausaFimExercicio'],
id_cadastro_parlamentar_anterior= datum['idCadastroParlamentarAnterior'],
data_captura= datetime.datetime.now(),
url_captura= url
)
return map(func, data_list)
def from_api_to_db_obter_deputados_detalhes_filiacao_partidaria(data_list, url, data_generic):
func = lambda datum: dict(
ide_cadastro= data_generic['ideCadastro'],
nome_parlamentar_atual= data_generic['nomeParlamentarAtual'],
partido_atual_nome= data_generic['partidoAtual']['nome'],
partido_atual_id_partido= data_generic['partidoAtual']['idPartido'],
uf_representacao_atual= data_generic['ufRepresentacaoAtual'],
id_partido_anterior= datum['idPartidoAnterior'],
sigla_partido_anterior= datum['siglaPartidoAnterior'],
nome_partido_anterior= datum['nomePartidoAnterior'],
id_partido_posterior= datum['idPartidoPosterior'],
sigla_partido_posterior= datum['siglaPartidoPosterior'],
nome_partido_posterior= datum['nomePartidoPosterior'],
data_filiacao_partido_posterior= to_date(datum['dataFiliacaoPartidoPosterior'],'%d/%m/%Y'),
hora_filiacao_partido_posterior= datum['horaFiliacaoPartidoPosterior'],
)
return map(func, data_list)
def from_api_to_db_obter_deputados_detalhes_historico_lider(data_list, url, data_generic):
func = lambda datum: dict(
ide_cadastro= data_generic['ideCadastro'],
nome_parlamentar_atual= data_generic['nomeParlamentarAtual'],
partido_atual_nome= data_generic['partidoAtual']['nome'],
partido_atual_id_partido= data_generic['partidoAtual']['idPartido'],
uf_representacao_atual= data_generic['ufRepresentacaoAtual'],
id_historico_lider= datum['idHistoricoLider'],
id_cargo_lideranca= datum['idCargoLideranca'],
descricao_cargo_lideranca= datum['descricaoCargoLideranca'],
num_ordem_cargo= datum['numOrdemCargo'],
data_designacao= to_date(datum['dataDesignacao'], '%d/%m/%Y'),
data_termino= to_date(datum['dataTermino'], '%d/%m/%Y'),
codigo_unidade_lideranca= datum['codigoUnidadeLideranca'],
sigla_unidade_lideranca= datum['siglaUnidadeLideranca'],
id_bloco_partido= datum['idBlocoPartido'],
data_captura= datetime.datetime.now(),
url_captura= url
)
return map(func, data_list)
capture.data['Deputados']['Deputado'].keys()
capture.data['Deputados']['Deputado'][0].keys()
capture.data['Deputados']['Deputado']['periodosExercicio']['periodoExercicio']
capture.data['Deputados']['Deputado']['historicoLider']['itemHistoricoLider']
capture.data['Deputados']['Deputado']['filiacoesPartidarias']['filiacaoPartidaria']
capture.data['Deputados']['Deputado']['periodosExercicio']['periodoExercicio']
# get the list of dict for this table
data_list = capture.data['deputados']['deputado']
#
data_list = capture.to_default_dict(data_list)
# make it right
data_list = from_api_to_db_obter_deputados(data_list, capture.url)
# insert it!
capture.insert_data(data_list, table='obter_deputados')
```
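The `to_default_dict` helper above wraps each parsed record in a `defaultdict` so that missing XML attributes read as `None` instead of raising `KeyError`; a minimal sketch of that pattern (the record and party code below are made up):

```python
from collections import defaultdict

# A record parsed from XML where the '@orientacao' attribute is missing
record = {"@Sigla": "XX"}
safe = defaultdict(lambda: None, record)

assert safe["@Sigla"] == "XX"
assert safe["@orientacao"] is None  # missing key yields None, not KeyError
```

This keeps the `from_api_to_db_*` mapping functions simple: they can index every expected attribute without guarding each lookup.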
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Class 8: Deterministic Time Series Models II
## Nonlinear First-Order Difference Equations
Recall the Solow growth model with no exogenous productivity growth (written in "per worker" terms):
\begin{align}
y_t & = Ak_t^{\alpha}\\
k_{t+1} & = i_t + (1-\delta)k_{t}\\
y_t & = c_t + i_t\\
c_t & = (1-s)y_t,
\end{align}
where $y_t$ is output per worker, $k_t$ is capital per worker, $c_t$ is consumption per worker, and $i_t$ is investment per worker. We've seen that by treating investment as an exogenous quantity, then the capital accumulation equation could be viewed as a linear first-order difference equation. But in the Solow model, investment is *not* exogenous and is equal to the savings rate times output per worker:
\begin{align}
i_t & = sy_t = sAk_t^{\alpha}
\end{align}
Therefore the equilibrium law of motion for capital per worker is:
\begin{align}
k_{t+1} & = sAk_t^{\alpha} + (1-\delta)k_{t}, \label{eqn:capital_solution}
\end{align}
which is a *nonlinear* first-order difference equation. In equilibrium capital in period $t+1$ is a *concave* (i.e., increasing at a decreasing rate) function of capital in period $t$. We can iterate on the nonlinear law of motion just like we iterated on the linear difference equation.
Let's suppose the following values for the simulation:
| $k_0$ | $s$ | $A$ | $\alpha$ | $\delta$ | $T$ |
|-------|-----|------|----------|-----------|------|
| 10 | 0.1 | 10 | 0.35 | 0.1 | 101 |
Where $T$ is the total number of simulation periods (i.e., $t$ will range from $0$ to $100$).
```
# Create variables 'k0', 's', 'A', 'alpha','delta','T' to store parameter values for the simulation
# Initialize capital per worker variable 'capital' as an array of zeros with length T
# Set first value of capital equal to k0
# Iterate over t in range(T-1) to update subsequent values in the capital array
# Construct a plot of simulated capital per worker
```
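One possible way to fill in the scaffold above (a sketch using the parameter values from the table; the plotting step is omitted):

```python
import numpy as np

# Parameter values from the table above
k0, s, A, alpha, delta, T = 10, 0.1, 10, 0.35, 0.1, 101

# Initialize capital per worker and set the initial value
capital = np.zeros(T)
capital[0] = k0

# Iterate on the nonlinear law of motion
for t in range(T - 1):
    capital[t + 1] = s * A * capital[t]**alpha + (1 - delta) * capital[t]

# Analytical steady state: k* = (s*A/delta)**(1/(1-alpha)) ≈ 34.55
k_star = (s * A / delta)**(1 / (1 - alpha))
```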
It looks like capital per worker is converging toward a steady state value near $35$. You can verify that the steady state is in fact $34.55$.
So computing nonlinear difference equations is essentially the same as computing linear difference equations. However, establishing the stability properties of nonlinear difference equations is more involved and we'll skip that. But in case you're interested, I've included an optional discussion at the bottom of this notebook.
## Systems of Difference Equations
Most macroeconomic models are dynamic *systems* of equations. They are collections of several equations that simultaneously determine several variables. Working with them can be a little intimidating, but in principle, it's similar to working with a single dynamic equation.
### Example: Solow Growth Model Without Exogenous Growth
The basic Solow growth model is a system of four equations that determine values of four variables $k_t$, $y_t$, $c_t$, and $i_t$. In the Solow model, capital per worker is a *state variable* because the value of $k_t$ is actually determined in period $t-1$. In fact, in the Solow model without exogenous population or productivity growth, capital is the *only* state variable. That means that if you know the value of capital per worker, then you can compute all of the other quantities. The non-state variables are called *control* variables.
To make this point clear, here is the Solow growth model *solved* in terms of $k_t$:
\begin{align}
y_t & = Ak_t^{\alpha}\\
i_t & = sAk_t^{\alpha}\\
c_t & = (1-s)Ak_t^{\alpha}\\
k_{t+1} & = sAk_t^{\alpha} + (1-\delta)k_{t}.
\end{align}
To simulate all of the endogenous variables, do these two steps:
1. Simulate an array of values for $k_t$
2. Compute the other variables using the array of $k_t$ values.
Since we've already simulated capital, we can just compute simulated values for $y_t$, $c_t$, and $i_t$ right away.
```
# Create variables 'output', 'consumption', and 'investment' equal to the respective variables in the model
# Construct a plot of simulated output per worker, consumption per worker, investment per worker, capital per worker.
# Create legend. Use ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) to put legend outside of axis
# Add grid if you want
```
Of course, if we want to compute multiple simulations, then we will want to write a function to compute the simulated values so that we don't have to repeat ourselves. The function in the following cell returns a `DataFrame` with columns containing simulated values of capital per worker, output per worker, consumption per worker, and investment per worker.
```
# Define a function that returns a DataFrame of simulated value from the Solow model with no exogenous growth. CELL PROVIDED
def solow_no_exo_growth(s,A,alpha,delta,T,k0):
'''Function for computing a simulation of the Solow growth model without exogenous growth.
    The model is assumed to be in "per-worker" quantities:
y[t] = A*k[t]^alpha
k[t+1] = i[t] + (1-delta)*k[t]
y[t] = c[t] + i[t]
c[t] = (1-s)*y[t]
Args:
s (float): Saving rate
A (float): TFP
        alpha (float): Capital share in Cobb-Douglas production function
delta (float): Capital depreciation rate
T (int): Number of periods to simulate
k0 (float): Initial value of capital per worker
Returns:
Pandas DataFrame
'''
# Initialize capital per worker values
capital = np.zeros(T)
# Set first value of capital per worker equal to k0
capital[0] = k0
# Iterate over t in range(T-1) to update subsequent values in the capital array
for t in range(T-1):
capital[t+1] = s*A*capital[t]**alpha + (1-delta)*capital[t]
# Compute the values of the other variables
output = A*capital**alpha
consumption = (1-s)*output
investment = s*output
# Put simulated data into a DataFrame
df = pd.DataFrame({'output':output,'consumption':consumption,'investment':investment,'capital':capital})
# Return the simulated data
return df
```
Use the function `solow_no_exo_growth()` to simulate the Solow model with the same parameters that we've been using.
```
# Simulate the model and store results in a variable called 'df'
# Print the first five rows of df
```
Now, use the function `solow_no_exo_growth()` to simulate the model for five different initial values of capital per worker: $k_0 = 10, 20, 30, 40, 50$. Plot the trajectories for $y_t$ together.
```
# Create variable 'initial_ks' that store the five initial values of k[t]
# Create a figure and axis
# Iterate over k0 in initial_ks and plot
# Add title
# Create legend. Use ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) to put legend outside of axis
# Add grid if you want
```
### Solow Model with Exogenous Labor
Now consider the Solow growth model with exogenous labor growth. That is, let $L_t$ denote the quantity of labor and suppose that the population (and therefore the labor supply) grows at the constant rate $n$. This means:
\begin{align}
L_{t+1} & = (1+n)L_t,
\end{align}
where the initial value of labor $L_0$ must be given. The rest of the model in aggregate terms is:
\begin{align}
Y_t & = AK_t^{\alpha}L_t^{1-\alpha}\\
K_{t+1} & = I_t + (1-\delta)K_{t}\\
Y_t & = C_t + I_t\\
C_t & = (1-s)Y_t,
\end{align}
where the initial value of capital $K_0$ is also given.
Since capital *and* labor are both determined in the previous period by their respective laws of motion, the Solow model with exogenous labor growth has *two* state variables: $K_t$ and $L_t$. We can solve the capital law of motion for capital in terms of only the variables capital and labor and so the two equations that determine how the state of the economy evolves are:
\begin{align}
K_{t+1} & = sAK_t^{\alpha}L_t^{1-\alpha} + (1-\delta)K_{t}\\
L_{t+1} & = (1+n)L_t
\end{align}
If we iterate on these two to compute simulations of $K_t$ and $L_t$, then we can compute $Y_t$, $C_t$, and $I_t$ easily.
Let's suppose the following values for a simulation:
| $L_0$ | $n$ | $K_0$ | $s$ | $A$ | $\alpha$ | $\delta $ | $T$ |
|-------|------|-------|------|------|----------|-----------|-----|
| 1 | 0.01 | 10 | 0.1 | 10 | 0.35 | 0.1 | 101 |
Compute simulated values for $K_t$ and $L_t$ and use those values to compute and plot simulated series for output $Y_t$ and output *per worker* $Y_t/L_t$ side-by-side in the same figure.
```
# Create variables 'K0', 'L0', 'n', 's', 'A', 'alpha','delta','T' to store parameter values for the simulation
# Initialize capital variable 'capital' as an array of zeros with length T
# Set first value of capital equal to K0
# Initialize the labor variable 'labor' as an array of zeros with length T
# Set first value of labor equal to L0
# Iterate over t in range(T-1) to update subsequent values in the capital and labor arrays
# Create variables 'output' and 'output_pw' that store output and output per worker
# Create figure
# Construct a plot of simulated output
# Construct a plot of simulated output per worker
```
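A sketch of how the scaffold above could be filled in (parameter values from the table; plotting omitted):

```python
import numpy as np

# Parameter values from the table above
K0, L0, n = 10, 1, 0.01
s, A, alpha, delta, T = 0.1, 10, 0.35, 0.1, 101

# Initialize capital and labor and set their initial values
capital = np.zeros(T)
labor = np.zeros(T)
capital[0], labor[0] = K0, L0

# Iterate on the two laws of motion for the state variables
for t in range(T - 1):
    capital[t + 1] = (s * A * capital[t]**alpha * labor[t]**(1 - alpha)
                      + (1 - delta) * capital[t])
    labor[t + 1] = (1 + n) * labor[t]

# Output grows in the long run; output per worker levels off
output = A * capital**alpha * labor**(1 - alpha)
output_pw = output / labor
```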
Because of exogenous labor growth, there is long-run growth in output but not output per worker. The function in the cell below simulates the Solow model with exogenous labor growth.
```
# Define a function that returns a DataFrame of simulated value from the Solow model with exogenous labor growth. CELL PROVIDED
def solow_no_exo_growth(s,A,alpha,delta,n,T,K0,L0):
    '''Function for computing a simulation of the Solow growth model with exogenous labor growth.
Y[t] = A*K[t]^alpha*L[t]^(1-alpha)
K[t+1] = I[t] + (1-delta)*K[t]
Y[t] = C[t] + I[t]
C[t] = (1-s)*Y[t]
L[t+1] = (1+n)*L[t]
Args:
s (float): Saving rate
A (float): TFP
alpha (float): Capital share in Cobb-Douglas production function
delta (float): Capital depreciation rate
n (float): Labor growth rate
T (int): Number of periods to simulate
        K0 (float): Initial capital
L0 (float): Initial labor
Returns:
Pandas DataFrame
'''
# Initialize capital values
capital = np.zeros(T)
# Set first value of capital equal to K0
capital[0] = K0
# Initialize labor values
labor = np.zeros(T)
# Set first value of labor equal to L0
labor[0] = L0
# Iterate over t in range(T-1) to update subsequent values in the capital and labor arrays
for t in range(T-1):
capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
labor[t+1] = (1+n)*labor[t]
# Compute the values of the other aggregate variables
output = A*capital**alpha*labor**(1-alpha)
consumption = (1-s)*output
investment = s*output
# Compute the values of the "per worker" variables
capital_pw = capital/labor
output_pw = output/labor
consumption_pw = consumption/labor
investment_pw = investment/labor
# Put simulated data into a DataFrame
df = pd.DataFrame({'output':output,
'consumption':consumption,
'investment':investment,
'capital':capital,
'output_pw':output_pw,
'consumption_pw':consumption_pw,
'investment_pw':investment_pw,
'capital_pw':capital_pw})
# Return the simulated data
return df
```
Use the function `solow_no_exo_growth()` to replicate the previous simulation and plots.
```
# Replicate the previous simulation exercise using the solow_no_exo_growth() function. CELL PROVIDED
# Simulate the model
df = solow_no_exo_growth(s,A,alpha,delta,n,T,K0,L0)
# Create a figure
fig = plt.figure(figsize=(12,4))
# Construct a plot of simulated output
ax = fig.add_subplot(1,2,1)
ax.plot(df['output'],lw=3,alpha=0.75)
ax.set_title('Output')
ax.grid()
# Construct a plot of simulated output per worker
ax = fig.add_subplot(1,2,2)
ax.plot(df['output_pw'],lw=3,alpha=0.75)
ax.set_title('Output per Worker')
ax.grid()
```
## Stability of Nonlinear First-Order Difference Equations (Optional)
Unlike the linear first-order difference equation, stability (i.e., whether the process remains finite rather than diverging to infinity in absolute value) of nonlinear first-order difference equations is less straightforward to establish. In fact, proving or disproving *global stability* -- Will the process remain finite for *any* initial condition? -- is particularly challenging. An easier task is to establish the existence of *local stability*: that the process is stable in a *neighborhood* of a given point. The idea is to use the first-order Taylor series approximation (https://en.wikipedia.org/wiki/Taylor_series) to approximate the nonlinear equation with a linear one. An example using the Solow model will make this easier to explain.
Let $k^*$ denote the steady state of capital per worker in the Solow model with no exogenous growth. Then:
\begin{align}
k^* & = \left(\frac{sA}{\delta}\right)^{\frac{1}{1-\alpha}}
\end{align}
The first-order Taylor series approximation to the capital law of motion around steady state capital per worker is:
\begin{align}
k_{t+1} & \approx \left[ sA\left(k^*\right)^{\alpha} + (1-\delta)k^*\right] + \left[\alpha sA\left(k^*\right)^{\alpha-1} + 1-\delta\right]\left(k_t - k^*\right). \label{eqn:capital_approx}
\end{align}
Equation (\ref{eqn:capital_approx}) is a linear first-order difference equation. As long as the coefficient on $k_t$,
\begin{align}
\alpha sA\left(k^*\right)^{\alpha-1} + 1-\delta,
\end{align}
is less than 1 in absolute value, then the approximated model is stable and we say that $k_t$ is stable in a neighborhood of the steady state $k^*$. Let's compute this coefficient for the Solow model using the same parameterization we used earlier:
| $s$ | $A$ | $\alpha$ | $\delta $ |
|-----|------|----------|-----------|
| 0.1 | 10 | 0.35 | 0.1 |
```
# Parameters
# Compute the steady state
# Compute coefficient
```
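A sketch of the computation outlined in the cell above:

```python
# Parameter values from the table above
s, A, alpha, delta = 0.1, 10, 0.35, 0.1

# Steady state capital per worker: k* = (s*A/delta)**(1/(1-alpha))
k_star = (s * A / delta)**(1 / (1 - alpha))

# Coefficient on k[t] in the linear approximation
coeff = alpha * s * A * k_star**(alpha - 1) + 1 - delta

# Since k_star**(alpha-1) = delta/(s*A), this simplifies to
# alpha*delta + 1 - delta: k_star ≈ 34.55, coeff = 0.935
```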
So steady state capital per worker is about 35 and the coefficient on $k_t$ in the approximation is 0.935. So the Solow model as we have parameterized it is stable in the neighborhood of $k^*$. Can you find the upper bound on $\alpha$ for which the model will *not* be stable given $A = 10$, $s = 0.1$ and $\delta = 0.1$?
An obvious question is: How good is the linear approximation? It turns out that for the capital law of motion in the Solow model, the linear approximation is really good. Let's plot the exact law of motion and the approximated law of motion for $k_t \in[0,100]$.
```
# Create array of values for k[t]
# Compute k[t+1] exactly using the law of motion for capital per worker
# Compute k[t+1] approximately using the approximation to the law of motion for capital per worker
# Create figure
# Plot exact and approximate k[t+1] against for k[t]
```
The exact law of motion and the approximated law of motion intersect at the steady state and are hardly distinguishable elsewhere. You can see the difference between them only by zooming in on the extreme values.
```
# Create figure
# Plot exact and approximate k[t+1] against for k[t] between 0 and 10
# Plot exact and approximate k[t+1] against for k[t] between 80 and 100
```
For $k_t$ near zero, the approximated law of motion is greater than the exact one but not by much. For $k_t$ near 100, the two laws of motion are still really close to each other. The point is that we can be confident that in the Solow growth model, capital will tend to converge toward the steady state regardless of its initial value.
<a href="https://colab.research.google.com/github/hc07180011/testing-cv/blob/main/colab/model1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Setup
```
!git clone https://github.com/hc07180011/testing-cv.git
%cd testing-cv/flicker_detection/flicker_detection/
%pip install -r requirements.txt
%pip install -r requirements_dev.txt
%pip install tensorflow-addons
%pip install coloredlogs rich
!pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110
```
## Mount drive
```
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
from google.colab import drive
drive.mount('/content/drive',force_remount=True)
```
## Seed everything first
```
import tensorflow as tf
tf.keras.utils.set_random_seed(1)
tf.config.experimental.enable_op_determinism()
```
## logging.py
```
import sys
import logging
from rich.logging import RichHandler
def init_logger() -> None:
logger = logging.getLogger("rich")
FORMAT = "%(name)s[%(process)d] " + \
"%(processName)s(%(threadName)s) " + \
"%(module)s:%(lineno)d %(message)s"
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(
FORMAT,
datefmt="%Y%m%d %H:%M:%S"
)
logging.basicConfig(
level="NOTSET", format=FORMAT, handlers=[RichHandler()]
)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(formatter)
logger.addHandler(ch)
logging.info("Initializing ok.")
```
## keras.py
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, precision_recall_curve, roc_curve, auc, roc_auc_score
from tensorflow.keras import backend as K
logging.getLogger("matplotlib").setLevel(logging.WARNING)
logging.getLogger("tensorflow").setLevel(logging.WARNING)
plots_folder = "/content/drive/MyDrive/google_cv/flicker_detection/plots/"
"""
keras metrics api:
https://keras.io/api/metrics/
custom sensitivity specificity:
https://stackoverflow.com/questions/55640149/error-in-keras-when-i-want-to-calculate-the-sensitivity-and-specificity
custom auc:
https://stackoverflow.com/questions/41032551/how-to-compute-receiving-operating-characteristic-roc-and-auc-in-keras
"""
def recall(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall_keras = true_positives / (possible_positives + K.epsilon())
return recall_keras
def precision(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision_keras = true_positives / (predicted_positives + K.epsilon())
return precision_keras
def specificity(y_true, y_pred):
tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
return tn / (tn + fp + K.epsilon())
def negative_predictive_value(y_true, y_pred):
tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))
return tn / (tn + fn + K.epsilon())
def f1(y_true, y_pred):
p = precision(y_true, y_pred)
r = recall(y_true, y_pred)
return 2 * ((p * r) / (p + r + K.epsilon()))
# TODO
def auroc(y_true, y_pred):
"""
https://www.codegrepper.com/code-examples/python/auc+callback+keras
"""
if tf.math.count_nonzero(y_true) == 0:
return tf.cast(0.0, tf.float32)
return tf.numpy_function(roc_auc_score, (y_true, y_pred), tf.float32)
def fbeta(y_true, y_pred, beta=2):
y_pred = K.clip(y_pred, 0, 1)
tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)), axis=1)
fp = K.sum(K.round(K.clip(y_pred - y_true, 0, 1)), axis=1)
fn = K.sum(K.round(K.clip(y_true - y_pred, 0, 1)), axis=1)
p = tp / (tp + fp + K.epsilon())
r = tp / (tp + fn + K.epsilon())
num = (1 + beta ** 2) * (p * r)
den = (beta ** 2 * p + r + K.epsilon())
return K.mean(num / den)
def matthews_correlation_coefficient(y_true, y_pred):
tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))
num = tp * tn - fp * fn
den = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
return num / K.sqrt(den + K.epsilon())
def equal_error_rate(y_true, y_pred):
n_imp = tf.math.count_nonzero(tf.equal(y_true, 0), dtype=tf.float32) + tf.constant(K.epsilon())
n_gen = tf.math.count_nonzero(tf.equal(y_true, 1), dtype=tf.float32) + tf.constant(K.epsilon())
scores_imp = tf.boolean_mask(y_pred, tf.equal(y_true, 0))
scores_gen = tf.boolean_mask(y_pred, tf.equal(y_true, 1))
loop_vars = (tf.constant(0.0), tf.constant(1.0), tf.constant(0.0))
cond = lambda t, fpr, fnr: tf.greater_equal(fpr, fnr)
body = lambda t, fpr, fnr: (
t + 0.001,
tf.divide(tf.math.count_nonzero(tf.greater_equal(scores_imp, t), dtype=tf.float32), n_imp),
tf.divide(tf.math.count_nonzero(tf.less(scores_gen, t), dtype=tf.float32), n_gen)
)
t, fpr, fnr = tf.while_loop(cond, body, loop_vars, back_prop=False)
eer = (fpr + fnr) / 2
return eer
class Model:
"""
callbacks:
https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
"""
def __init__(
self,
model: tf.keras.models.Sequential,
loss: str,
optimizer: tf.keras.optimizers,
metrics = tuple((
# "accuracy",
# precision,
# recall,
f1,
# auroc,
# fbeta,
# specificity,
# negative_predictive_value,
# matthews_correlation_coefficient,
# equal_error_rate
)),
summary=True
) -> None:
self.model = model
self.model.compile(
loss=loss,
optimizer=optimizer,
metrics=metrics
)
if summary:
print(self.model.summary())
def train(
self,
X_train: np.array,
y_train: np.array,
epochs: int,
validation_split: float,
batch_size: int,
model_path: str = "/content/drive/MyDrive/google_cv/flicker_detection/models/model1.h5",
monitor: str = "val_f1",
mode: str = "max"
) -> None:
self.history = self.model.fit(
X_train, y_train,
epochs=epochs,
validation_split=validation_split,
batch_size=batch_size,
callbacks=[
tf.keras.callbacks.ModelCheckpoint(
model_path,
save_best_only=True,
monitor=monitor,
mode=mode
)
]
)
def plot_history(self, key: str, title=None) -> None:
plt.figure(figsize=(16, 4), dpi=200)
plt.plot(self.history.history["{}".format(key)])
plt.plot(self.history.history["val_{}".format(key)])
plt.legend(["{}".format(key), "val_{}".format(key)])
plt.xlabel("# Epochs")
plt.ylabel("{}".format(key))
if title:
plt.title("{}".format(title))
plt.savefig(plots_folder+"{}.png".format(key))
plt.close()
class InferenceModel:
def __init__(
self,
model_path: str,
custom_objects: dict = {
# "precision":precision,
# "recall":recall,
"f1":f1,
# "auroc":auroc,
# "fbeta":fbeta,
# "specificity":specificity,
# "negative_predictive_value":negative_predictive_value,
# "matthews_correlation_coefficient":matthews_correlation_coefficient,
# "equal_error_rate":equal_error_rate
}
) -> None:
self.model = tf.keras.models.load_model(
model_path,
custom_objects=custom_objects
)
def predict(self, X_test: np.array) -> np.array:
y_pred = self.model.predict(X_test)
return y_pred.flatten()
def evaluate(self, y_true: np.array, y_pred: np.array) -> None:
threshold_range = np.arange(0.1, 1.0, 0.001)
f1_scores = list()
for lambda_ in threshold_range:
f1_scores.append(f1_score(y_true, (y_pred > lambda_).astype(int)))
# print("Max f1: {:.4f}, at thres = {:.4f}".format(
# np.max(f1_scores), threshold_range[np.argmax(f1_scores)]
# ))
logging.info("Max f1: {:.4f}, at thres = {:.4f}".format(
np.max(f1_scores), threshold_range[np.argmax(f1_scores)]
))
fpr, tpr, thresholds = roc_curve(y_true, y_pred)
plt.plot([0, 1], [0, 1], linestyle="dashed")
plt.plot(fpr, tpr, marker="o")
plt.plot([0, 0, 1], [0, 1, 1], linestyle="dashed", c="red")
plt.legend([
"No Skill",
"ROC curve (area = {:.2f})".format(auc(fpr, tpr)),
"Perfect"
])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve")
plt.savefig(os.path.join(plots_folder,"roc_curve.png"))
plt.close()
precision, recall, thresholds = precision_recall_curve(y_true, y_pred)
plt.plot([0, 1], [0, 0], linestyle="dashed")
plt.plot(recall, precision, marker="o")
plt.legend([
"No Skill",
"Model"
])
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-recall Curve")
plt.savefig(os.path.join(plots_folder,"pc_curve.png"))
print(confusion_matrix(
y_true,
(y_pred > threshold_range[np.argmax(f1_scores)]).astype(int)
))
```
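The custom metrics above all reduce to confusion-matrix counts smoothed with `K.epsilon()`. A minimal pure-Python sketch of the same F1 computation, with a stand-in `EPS` constant and made-up toy labels, shows the formula in isolation:

```python
# Pure-Python sketch of the epsilon-smoothed F1 used above.
# EPS stands in for K.epsilon(); the labels below are hypothetical.
EPS = 1e-7

def f1_from_counts(y_true, y_pred, eps=EPS):
    # Threshold predictions to {0, 1}, as K.round(K.clip(...)) does above.
    y_hat = [1 if p >= 0.5 else 0 for p in y_pred]
    tp = sum(1 for t, p in zip(y_true, y_hat) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_hat) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_hat) if t == 1 and p == 0)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]  # 2 TP, 1 FN, 1 FP
print(round(f1_from_counts(y_true, y_pred), 4))  # precision and recall are both 2/3
```

The epsilon terms only guard against division by zero; for non-degenerate counts the result matches the textbook F1.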
## facenet.py
```
import os
import cv2
import json
import numpy as np
import tensorflow as tf
from tensorflow.keras import metrics
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from tensorflow.keras import Model
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.applications import resnet, mobilenet
from tensorflow_addons.layers import AdaptiveMaxPooling3D
model_info = "/content/drive/MyDrive/google_cv/flicker_detection/model_overview/"
class Facenet:
"""
adaptive pooling sample:
https://ideone.com/cJoN3x
"""
def __init__(self) -> None:
super().__init__()
self.__target_shape = (200, 200)
np.random.seed(0)
base_cnn = mobilenet.MobileNet(
weights="imagenet",
input_shape=self.__target_shape + (3,),
include_top=False
)
adaptive_1 = AdaptiveMaxPooling3D(
output_size=(6, 6, 1024))(base_cnn.output)
output = layers.Dense(256)(adaptive_1)
adaptive_m = AdaptiveMaxPooling3D(
output_size=(6, 6, 256))(output)
self.__embedding = Model(base_cnn.input, adaptive_m,name='Embedding')
# with open(os.path.join(model_info,'basecnn_summary.txt'), 'w') as fh:
# self.__embedding.summary(print_fn=lambda x: fh.write(x + '\n'))
for layer in base_cnn.layers[:-23]:
layer.trainable = False
anchor_input = layers.Input(
name="anchor", shape=self.__target_shape + (3,)
)
adapt_anchor = AdaptiveMaxPooling3D(
output_size=self.__target_shape + (3,))(anchor_input)
adapted_anchor = layers.Input(
name="adapted_anchor", shape=adapt_anchor.shape, tensor=adapt_anchor)
positive_input = layers.Input(
name="positive", shape=self.__target_shape + (3,)
)
adapt_positive = AdaptiveMaxPooling3D(
output_size=self.__target_shape + (3,))(positive_input)
adapted_positive = layers.Input(
name="adapted_positive", shape=adapt_positive.shape, tensor=adapt_positive)
negative_input = layers.Input(
name="negative", shape=self.__target_shape + (3,)
)
adapt_negative = AdaptiveMaxPooling3D(
output_size=self.__target_shape + (3,))(negative_input)
adapted_negative = layers.Input(
name="adapted_negative", shape=adapt_negative.shape, tensor=adapt_negative)
distances = DistanceLayer()(
self.__embedding(resnet.preprocess_input(anchor_input)),
self.__embedding(resnet.preprocess_input(positive_input)),
self.__embedding(resnet.preprocess_input(negative_input)),
)
siamese_network = Model(
inputs=[
adapted_anchor,
adapted_positive,
adapted_negative,
anchor_input,
positive_input,
negative_input,
],
outputs=distances
)
# with open(os.path.join(model_info,'resnet_preprocess_summary.txt'), 'w') as fh:
# siamese_network.summary(print_fn=lambda x: fh.write(x + '\n'))
adaptive_0 = AdaptiveMaxPooling3D(
output_size=(1024, 6, 6))(siamese_network.output)
adaptive_siamese_network = Model(siamese_network.input, adaptive_0)
self.__siamese_model = SiameseModel(adaptive_siamese_network)
self.__siamese_model.built = True
# with open(os.path.join(model_info,'adaptive_siamese_summary.txt'), 'w') as fh:
# self.__siamese_model.summary(print_fn=lambda x: fh.write(x + '\n'))
model_base_dir = os.path.join(model_info)
model_settings = json.load(
open(os.path.join(model_base_dir, "model.json"), "r")
)
model_path = os.path.join(model_base_dir, model_settings["name"])
if os.path.exists(model_path):
self.__siamese_model.load_weights(model_path)
else:
raise NotImplementedError
def get_embedding(self, images: np.ndarray, batched=True) -> np.ndarray:
assert (not batched) or len(
images.shape) == 4, "images should be an array of image with shape (width, height, 3)"
if not batched:
images = np.array([images, ])
resized_images = np.array([cv2.resize(image, dsize=self.__target_shape,
interpolation=cv2.INTER_CUBIC) for image in images])
image_tensor = tf.convert_to_tensor(resized_images, np.float32)
return self.__embedding(resnet.preprocess_input(image_tensor)).numpy()
class DistanceLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def call(self, anchor, positive, negative):
ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)
an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)
return (ap_distance, an_distance)
class SiameseModel(Model):
def __init__(self, siamese_network, margin=0.5):
super(SiameseModel, self).__init__()
self.siamese_network = siamese_network
self.margin = margin
self.loss_tracker = metrics.Mean(name="loss")
def call(self, inputs):
return self.siamese_network(inputs)
def train_step(self, data):
with tf.GradientTape() as tape:
loss = self._compute_loss(data)
gradients = tape.gradient(
loss, self.siamese_network.trainable_weights)
self.optimizer.apply_gradients(
zip(gradients, self.siamese_network.trainable_weights)
)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, data):
loss = self._compute_loss(data)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def _compute_loss(self, data):
ap_distance, an_distance = self.siamese_network(data)
loss = ap_distance - an_distance
loss = tf.maximum(loss + self.margin, 0.0)
return loss
@property
def metrics(self):
return [self.loss_tracker]
```
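`SiameseModel._compute_loss` above is the standard triplet margin loss. A NumPy sketch of the same computation, with tiny made-up 2-D embeddings rather than the model's actual outputs:

```python
# NumPy sketch of the triplet margin loss from SiameseModel._compute_loss:
# loss = max(||a - p||^2 - ||a - n||^2 + margin, 0), per embedding.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    ap = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    an = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return np.maximum(ap - an + margin, 0.0)

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # close to the anchor
n = np.array([[2.0, 0.0]])   # far from the anchor
print(triplet_loss(a, p, n))  # ap=0.01, an=4.0 -> max(0.01 - 4.0 + 0.5, 0) = [0.]
```

When the negative is already far enough (by at least the margin), the loss is zero and contributes no gradient.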
## Transformers
```
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, output_dim, **kwargs):
super().__init__(**kwargs)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim
)
self.sequence_length = sequence_length
self.output_dim = output_dim
def call(self, inputs):
# The inputs are of shape: `(batch_size, frames, num_features)`
length = tf.shape(inputs)[1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_positions = self.position_embeddings(positions)
return inputs + embedded_positions
def compute_mask(self, inputs, mask=None):
mask = tf.reduce_any(tf.cast(inputs, "bool"), axis=-1)
return mask
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim, dropout=0.3
)
self.dense_proj = tf.keras.Sequential(
[layers.Dense(dense_dim, activation=tf.nn.gelu), layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
if mask is not None:
mask = mask[:, tf.newaxis, :]
attention_output = self.attention(inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
def transformers(X_train: np.array)->Model:
sequence_length = 20
embed_dim = 9216
classes = 1
dense_dim = 4
num_heads = 1
inputs = tf.keras.Input(shape=X_train.shape[1:])
x = PositionalEmbedding(
sequence_length, embed_dim, name="frame_position_embedding"
)(inputs)
x = TransformerEncoder(embed_dim, dense_dim, num_heads, name="transformer_layer")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(classes, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
return model
```
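`PositionalEmbedding.compute_mask` treats a frame as padding only when every one of its features is zero. A NumPy sketch of that masking rule, using a made-up `(1, 3, 2)` input:

```python
# NumPy sketch of the padding mask computed by PositionalEmbedding.compute_mask:
# a frame counts as "real" if any of its features is nonzero.
import numpy as np

def frame_mask(inputs):
    # inputs: (batch, frames, features) -> (batch, frames) boolean mask
    return np.any(inputs != 0, axis=-1)

x = np.array([[[1.0, 0.0], [0.0, 0.0], [0.5, 0.2]]])  # frame 1 is all-zero padding
print(frame_mask(x))  # frame 1 (all zeros) is masked out
```

The encoder then broadcasts this mask to the attention scores so that padded frames are ignored.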
## train.py
```
from typing import Tuple
from argparse import ArgumentParser
import torch
import tqdm
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from keras.models import Sequential
from keras.layers import LSTM, Dense, Flatten, Bidirectional, Conv1D  # keras.layers.convolutional was removed in newer Keras
data_base_dir = "/content/drive/MyDrive/google_cv/flicker_detection/"
os.makedirs(data_base_dir, exist_ok=True)
cache_base_dir = "/content/drive/MyDrive/google_cv/flicker_detection/.cache/"
os.makedirs(cache_base_dir, exist_ok=True)
def _embed(
video_data_dir: str,
output_dir: str
) -> None:
os.makedirs(output_dir, exist_ok=True)
facenet = Facenet()
for path in tqdm.tqdm(os.listdir(video_data_dir)):
if os.path.exists(os.path.join(output_dir, "{}.npy".format(path))):
continue
vidcap = cv2.VideoCapture(os.path.join(video_data_dir, path))
success, image = vidcap.read()
embeddings = ()
while success:
embeddings = embeddings + tuple(facenet.get_embedding(cv2.resize(
image, (200, 200)), batched=False)[0].flatten())
success, image = vidcap.read()
embeddings = np.array(embeddings)
np.save(os.path.join(output_dir, path), embeddings)
def _get_chunk_array(input_arr: np.array, chunk_size: int) -> np.array:
usable_vec = input_arr[:(
np.floor(len(input_arr)/chunk_size)*chunk_size).astype(int)]
i_pad = np.concatenate((usable_vec, np.array(
[input_arr[-1]]*(chunk_size-len(usable_vec) % chunk_size))))
asymmetric_chunks = np.split(
i_pad,
list(range(
chunk_size,
input_arr.shape[0] + 1,
chunk_size
))
)
return tuple(asymmetric_chunks)
def _preprocess(
label_path: str,
mapping_path: str,
data_dir: str,
cache_path: str
) -> Tuple[np.array]:
"""
can consider reducing precision of np.float32 to np.float16 to reduce memory consumption
abstract:
https://towardsdatascience.com/overcoming-data-preprocessing-bottlenecks-with-tensorflow-data-service-nvidia-dali-and-other-d6321917f851
cuda solution:
https://stackoverflow.com/questions/60996756/how-do-i-assign-a-numpy-int64-to-a-torch-cuda-floattensor
static memory allocation solution:
https://pytorch.org/docs/stable/generated/torch.zeros.html
"""
if os.path.exists("{}.npz".format(cache_path)):
__cache__ = np.load("{}.npz".format(cache_path), allow_pickle=True)
return tuple((__cache__[k] for k in __cache__))
pass_videos = list([
"0096.mp4", "0097.mp4", "0098.mp4",
"0125.mp4", "0126.mp4", "0127.mp4",
"0145.mp4", "0146.mp4", "0147.mp4",
"0178.mp4", "0179.mp4", "0180.mp4"
])
raw_labels = json.load(open(label_path, "r"))
encoding_filename_mapping = json.load(open(mapping_path, "r"))
embedding_path_list = sorted([
x for x in os.listdir(data_dir)
if x.split(".npy")[0] not in pass_videos
and encoding_filename_mapping[x.replace(".npy", "")] in raw_labels
])
embedding_list_train, embedding_list_test, _, _ = train_test_split(
embedding_path_list,
list(range(len(embedding_path_list))),
test_size=0.1,
random_state=42
)
chunk_size = 30
video_embeddings_list_train = ()
video_labels_list_train = ()
# logging.debug(
# "taking training chunks, length = {}".format(len(embedding_list_train))
# )
for path in tqdm.tqdm(embedding_list_train):
real_filename = encoding_filename_mapping[path.replace(".npy", "")]
buf_embedding = np.load(os.path.join(data_dir, path))
if buf_embedding.shape[0] == 0:
continue
video_embeddings_list_train = video_embeddings_list_train + \
(*_get_chunk_array(buf_embedding, chunk_size),)
flicker_idxs = np.array(raw_labels[real_filename]) - 1
buf_label = np.zeros(buf_embedding.shape[0], dtype=np.uint8)  # one label per frame; the old else-branch was unreachable after the empty-embedding check above
buf_label[flicker_idxs] = 1
video_labels_list_train = video_labels_list_train + tuple(
1 if sum(x) else 0
for x in _get_chunk_array(buf_label, chunk_size)
)
video_embeddings_list_test = ()
video_labels_list_test = ()
# logging.debug(
# "taking testing chunks, length = {}".format(len(embedding_list_test))
# )
for path in tqdm.tqdm(embedding_list_test):
real_filename = encoding_filename_mapping[path.replace(".npy", "")]
buf_embedding = np.load(os.path.join(data_dir, path))
if buf_embedding.shape[0] == 0:
continue
video_embeddings_list_test = video_embeddings_list_test + \
(*_get_chunk_array(buf_embedding, chunk_size),)
flicker_idxs = np.array(raw_labels[real_filename]) - 1
buf_label = np.zeros(buf_embedding.shape[0]).astype(np.uint8)
buf_label[flicker_idxs] = 1
video_labels_list_test = video_labels_list_test + tuple(
1 if sum(x) else 0
for x in _get_chunk_array(buf_label, chunk_size)
)
X_train = np.array(video_embeddings_list_train)
X_test = np.array(video_embeddings_list_test)
y_train = np.array(video_labels_list_train)
y_test = np.array(video_labels_list_test)
# logging.debug("ok. got training: {}/{}, testing: {}/{}".format(
# X_train.shape, y_train.shape,
# X_test.shape, y_test.shape
# ))
np.savez(cache_path, X_train, X_test, y_train, y_test)
return (X_train, X_test, y_train, y_test)
def _oversampling(
X_train: np.array,
y_train: np.array,
) -> Tuple[np.array]:
"""
batched alternative:
https://imbalanced-learn.org/stable/references/generated/imblearn.keras.BalancedBatchGenerator.html
"""
sm = SMOTE(random_state=42)
original_X_shape = X_train.shape
X_train, y_train = sm.fit_resample(
np.reshape(X_train, (-1, np.prod(original_X_shape[1:]))),
y_train
)
X_train = np.reshape(X_train, (-1,) + original_X_shape[1:])
return (X_train, y_train)
def _train(X_train: np.array, y_train: np.array) -> object:
buf = Sequential()
buf.add(Bidirectional(LSTM(units=256, activation='relu'),
input_shape=(X_train.shape[1:])))
buf.add(Dense(units=128, activation="relu"))
buf.add(Flatten())
buf.add(Dense(units=1, activation="sigmoid"))
model = Model(
model=transformers(X_train),  # or pass `buf` to train the BiLSTM baseline instead
loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
)
y_train = y_train.astype('float32').reshape((-1,1))
model.train(X_train, y_train, 1000, 0.1, 1024)
for k in ("loss", "f1"):  # only keys for compiled metrics (here f1) exist in the history
model.plot_history(k, title="{} - LSTM, Chunk, Oversampling".format(k))
return model
def _test(model_path: str, X_test: np.array, y_test: np.array) -> None:
model = InferenceModel(model_path)
y_pred = model.predict(X_test)
model.evaluate(y_test, y_pred)
```
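`_get_chunk_array` pads a sequence with its last element up to a multiple of `chunk_size` and then splits it into fixed-size chunks. A simplified pure-Python sketch of that idea (it omits the NumPy-specific slicing of the original):

```python
# Simplified pure-Python sketch of the chunking done by _get_chunk_array:
# pad the sequence with its last element up to a multiple of chunk_size,
# then split into fixed-size chunks.
def chunk(seq, chunk_size):
    pad = (-len(seq)) % chunk_size          # elements needed to reach a multiple
    padded = list(seq) + [seq[-1]] * pad    # repeat the last element as padding
    return [padded[i:i + chunk_size] for i in range(0, len(padded), chunk_size)]

print(chunk([1, 2, 3, 4, 5], 3))  # [[1, 2, 3], [4, 5, 5]]
```

Each chunk of frame labels is then collapsed to a single binary label (`1 if sum(x) else 0` in `_preprocess`), so a chunk is positive if it contains any flicker frame.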
## Pipeline
```
import gc
gc.collect()
def pipeline()-> Tuple[np.array]:
# init_logger()
# logging.info("[Embedding] Start ...")
_embed(
os.path.join(data_base_dir, "Confidential_Videos"),
os.path.join(data_base_dir, "embedding")
)
# logging.info("[Embedding] done.")
# logging.info("[Preprocessing] Start ...")
X_train,X_test,y_train,y_test = _preprocess(
os.path.join(data_base_dir, "model_overview/label.json"),
os.path.join(data_base_dir, "model_overview/mapping.json"),
os.path.join(data_base_dir, "embedding"),
os.path.join(cache_base_dir, "train_test")
)
# logging.info("[Preprocessing] done.")
# def load_batches(cache_base_dir):
# buf = ()
# for path in os.listdir(os.path.join(cache_base_dir,".cache/")):
# buf = buf + (np.load(os.path.join(os.path.join(cache_base_dir,".cache/"),path)).tolist(),)
# return np.array(buf)
# X_train = load_batches(cache_base_dir)
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = pipeline()
gc.collect()
```
## Oversampling
```
def oversampling(X_train,y_train):
# logging.info("[Oversampling] Start ...")
X_train, y_train = _oversampling(
X_train,
y_train
)
# logging.info("[Oversampling] done.")
return X_train,y_train
X_train,y_train = oversampling(X_train,y_train)
```
## Training
```
def training(X_train:np.array,y_train:np.array) -> None:
logging.info("[Training] Start ...")
model = _train(
X_train,
y_train
)
logging.info("[Training] done.")
return model
model = training(X_train,y_train)
"""
use gpu:
https://www.tensorflow.org/guide/gpu
"""
```
## Testing
```
def testing(X_test,y_test):
# logging.info("[Testing] Start ...")
_test("/content/drive/MyDrive/google_cv/flicker_detection/models/model1.h5", X_test, y_test)
# logging.info("[Testing] done.")
testing(X_test,y_test)
"""
Max f1: 0.6857, at thres = 0.6580
[[134 3]
[ 8 12]]
"""
```
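The reported max F1 can be sanity-checked directly from the printed confusion matrix:

```python
# Sanity check: the reported max F1 (0.6857) follows from the printed
# confusion matrix [[134, 3], [8, 12]] (rows: true 0/1, cols: predicted 0/1).
tn, fp, fn, tp = 134, 3, 8, 12
precision = tp / (tp + fp)   # 12/15 = 0.8
recall = tp / (tp + fn)      # 12/20 = 0.6
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6857
```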
```
import os
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
import numpy as np
# from matplotlib import pyplot as plt
# from crawlab_toolbox import plotting as genplt
from sklearn.pipeline import Pipeline
import tensorflow.keras as keras
from scipy.stats import probplot
from scipy.stats import normaltest
import sys
# insert at 1, 0 is the script path (or '' in REPL)
sys.path.insert(1, '../dependencies/')
from plotting import *
import matplotlib as mpl
print(mpl.__version__)
inferenceLocations = ['Amazon-EC2','Desktop','Beaglebone']
inferenceLocationsBold = [r'\textbf{Amazon-EC2}',r'\textbf{Desktop}',r'\textbf{Beaglebone}']
basePath = 'data/Latency Tests/Results/'
numSamples = 700
numModels = 8
latencyVals = np.zeros((numSamples,numModels,len(inferenceLocations)))
scoreVals = np.zeros((numSamples,numModels,len(inferenceLocations)))
columns = None
for i in range(len(inferenceLocations)):
thisDF = pd.read_csv(basePath + inferenceLocations[i] + '_Loaded_Train_Healthy.csv')
print(thisDF.values.shape)
if thisDF.values.shape[1] > 8:
latencyVals[...,i] = thisDF.values[:,1:]
else:
latencyVals[...,i] = thisDF.values
thisDF = pd.read_csv(basePath + inferenceLocations[i] + '_Loaded_Train_Healthy_values.csv')
if thisDF.values.shape[1] > 8:
scoreVals[...,i] = thisDF.values[:,1:]
else:
scoreVals[...,i] = thisDF.values
if i == 0:
columns = thisDF.columns[1:]
latencyVals *= 1e3
print(np.mean(latencyVals[:,2,1]) - np.mean(latencyVals[:,3,1]))
print(np.amax(latencyVals[:,2,1]) - np.amax(latencyVals[:,3,1]))
print(np.mean(latencyVals[:,6,1]) - np.mean(latencyVals[:,7,1]))
print(np.amax(latencyVals[:,6,1]) - np.amax(latencyVals[:,7,1]))
print(np.mean(scoreVals[:,2,1]) - np.mean(scoreVals[:,3,1]))
print(np.amax(np.abs(scoreVals[:,2,1] - scoreVals[:,3,1])))
print(np.mean(scoreVals[:,6,1]) - np.mean(scoreVals[:,7,1]))
print(np.amax(np.abs(scoreVals[:,6,1] - scoreVals[:,7,1])))
```
Mean latency, max latency, max score difference, MSE
```
[colors[i] for i in np.arange(3)]
box_plot(scoreVals[:,np.array([2,3]),1],columns[np.array([2,3])],
ylabel='Mean Squared Error',
savefig=True,filename='CNN-AE_CompareScores_Boxplot',
template='publication')
box_plot(scoreVals[:,np.array([6,7]),1],columns[np.array([6,7])],
ylabel='Probability Healthy',
savefig=True,filename='CNN-MLP_CompareScores_Boxplot',
template='publication'
)
box_plot(latencyVals[:,np.array([2,3]),1],columns[np.array([2,3])],
savefig=True,filename='CNN-AE_CompareLatency_Boxplot',
template='publication')
box_plot(latencyVals[:,np.array([6,7]),1],columns[np.array([6,7])],
savefig=True,filename='CNN-MLP_CompareLatency_Boxplot',
template='publication')
cloudLatencyND = latencyVals[:,np.array([0,2,3]),0]
cloudLatencyCLF = latencyVals[:,np.array([4,6,7]),0]
desktopLatencyND = latencyVals[:,np.array([0,2,3]),1]
desktopLatencyCLF = latencyVals[:,np.array([4,6,7]),1]
cloudLatencyCLF = np.expand_dims(cloudLatencyCLF,axis=1)
cloudLatencyND = np.expand_dims(cloudLatencyND,axis=1)
desktopLatencyCLF = np.expand_dims(desktopLatencyCLF,axis=1)
desktopLatencyND = np.expand_dims(desktopLatencyND,axis=1)
cloudLatency = np.hstack((cloudLatencyCLF,cloudLatencyND))
desktopLatency = np.hstack((desktopLatencyCLF,desktopLatencyND))
cloudLatency.shape
box_plot_compare((cloudLatency),['SK-Learn','Tensorflow','TF-Lite'],savefig=True,filename='Amazon-CompareLatency',
template='presentation',xlabel='Model Type',color_order=np.zeros(6).astype(int),ylabel='Latency (ms)',
showfliers=False,legend_loc='upper left',max_cutoff=2,plot_type='box',
log_y=True,extension='svg',inferenceLocations=[r'\textbf{Classifier}',r'\textbf{Novelty Detector}'])
# box_plot_compare((desktopLatency),['SK-Learn','Tensorflow','TF-Lite'],savefig=True,filename='Desktop-CompareLatency',
# template='presentation',xlabel='Model Type',color_order=np.zeros(6).astype(int),ylabel='Latency (ms)',
# showfliers=False,legend_loc='upper left',max_cutoff=2,plot_type='box',
# log_y=True,extension='svg',inferenceLocations=[r'\textbf{Classifier}',r'\textbf{Novelty Detector}'])
bar_chart_compare((desktopLatency),['SK-Learn','Tensorflow','TF-Lite'],[r'\textbf{Classifier}',r'\textbf{Novelty Detector}'],savefig=True,filename='Desktop-CompareLatency',
template='presentation',xlabel='Sample Points',color_order=np.zeros(6).astype(int),ylabel='Mean Latency (ms)',
showfliers=False,legend_loc='upper left',
log_y=False,extension='svg',)
box_plot(latencyVals[:,:4,1],columns[:4],savefig=True,
filename='Desktop_Latency_Anomaly_Boxplot',template='Wide',log_y=True,
title='Novelty Detection'
)
print([np.mean(latencyVals[:,i,1]) for i in range(4)])
box_plot(latencyVals[:,4:,1],columns[4:],savefig=True,filename='Desktop_Latency_Classification_Boxplot',
template='wide',log_y=True,title='Classification')
box_plot(latencyVals[:,0,:],inferenceLocationsBold,savefig=True,filename='PCA-GMM_Boxplot',template='Presentation')
box_plot(latencyVals[:,3,:],inferenceLocationsBold,savefig=True,filename='CNN-AE-Lite_Boxplot',template='Presentation')
box_plot_compare((latencyVals[:,np.array([4,7]),:]),inferenceLocationsBold,savefig=True,filename='Classifier-CompareLatency',
template='presentation',xlabel='',color_order=np.zeros(6).astype(int),ylabel='Latency (ms)',
showfliers=False,legend_loc='upper left',max_cutoff=2,plot_type='box',
log_y=False,extension='svg',inferenceLocations=[r'\textbf{SK-Learn}',r'\textbf{TF-Lite}'])
bar_chart_compare((latencyVals[:,np.array([4,7]),:]),inferenceLocationsBold,[r'\textbf{SK-Learn}',r'\textbf{TF-Lite}'],savefig=True,filename='Classifier-CompareLatency',
template='presentation',xlabel='',color_order=np.zeros(6).astype(int),ylabel='Mean Latency (ms)',
showfliers=False,legend_loc='upper left',
log_y=False,extension='svg',)
bar_chart_compare((latencyVals[:,np.array([0,3]),:]),inferenceLocationsBold,[r'\textbf{SK-Learn}',r'\textbf{TF-Lite}'],savefig=True,filename='ND-CompareLatency',
template='presentation',xlabel='',color_order=np.zeros(6).astype(int),ylabel='Mean Latency (ms)',
showfliers=False,legend_loc='upper left',
log_y=False,extension='svg',)
```
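The `expand_dims`/`hstack` calls above insert a group axis and then concatenate along it, turning two `(samples, models)` arrays into one `(samples, 2, models)` array for the grouped comparison plots. A shape-only NumPy sketch with placeholder data:

```python
# Shape-only NumPy sketch of the grouping above: expand_dims inserts a group
# axis, and hstack (concatenation along axis 1) yields (samples, groups, models).
import numpy as np

clf = np.zeros((700, 3))            # placeholder: classifier latencies, 3 models
nd = np.ones((700, 3))              # placeholder: novelty-detector latencies
clf = np.expand_dims(clf, axis=1)   # (700, 1, 3)
nd = np.expand_dims(nd, axis=1)     # (700, 1, 3)
stacked = np.hstack((clf, nd))      # (700, 2, 3)
print(stacked.shape)  # (700, 2, 3)
```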
# Character based RNN language model
(c) Deniz Yuret, 2019. Based on http://karpathy.github.io/2015/05/21/rnn-effectiveness.
* Objectives: Learn to define and train a character based language model and generate text from it. Minibatch blocks of text. Keep a persistent RNN state between updates. Train a Shakespeare generator and a Julia programmer using the same type of model.
* Prerequisites: [RNN basics](60.rnn.ipynb), [Iterators](25.iterators.ipynb)
* New functions:
[converge](http://denizyuret.github.io/Knet.jl/latest/reference/#Knet.converge)
```
# Set display width, load packages, import symbols
ENV["COLUMNS"]=72
using Pkg; haskey(Pkg.installed(),"Knet") || Pkg.add("Knet")
using Statistics: mean
using Base.Iterators: cycle
using Knet: Knet, AutoGrad, Data, param, param0, mat, RNN, dropout, value, nll, adam, minibatch, progress!, converge
```
## Define the model
```
struct Embed; w; end
Embed(vocab::Int,embed::Int)=Embed(param(embed,vocab))
(e::Embed)(x) = e.w[:,x] # (B,T)->(X,B,T)->rnn->(H,B,T)
struct Linear; w; b; end
Linear(input::Int, output::Int)=Linear(param(output,input), param0(output))
(l::Linear)(x) = l.w * mat(x,dims=1) .+ l.b # (H,B,T)->(H,B*T)->(V,B*T)
# Let's define a chain of layers
struct Chain
layers
Chain(layers...) = new(layers)
end
(c::Chain)(x) = (for l in c.layers; x = l(x); end; x)
(c::Chain)(x,y) = nll(c(x),y)
(c::Chain)(d::Data) = mean(c(x,y) for (x,y) in d)
# The h=0,c=0 options to RNN enable a persistent state between iterations
CharLM(vocab::Int,embed::Int,hidden::Int; o...) =
Chain(Embed(vocab,embed), RNN(embed,hidden;h=0,c=0,o...), Linear(hidden,vocab))
```
## Train and test utilities
```
# For running experiments
function trainresults(file,maker,chars)
if (print("Train from scratch? "); readline()[1]=='y')
model = maker()
a = adam(model,cycle(dtrn))
b = (exp(model(dtst)) for _ in every(100,a))
c = converge(b, alpha=0.1)
progress!(c, alpha=1)
Knet.save(file,"model",model,"chars",chars)
else
isfile(file) || download("http://people.csail.mit.edu/deniz/models/tutorial/$file",file)
model,chars = Knet.load(file,"model","chars")
end
Knet.gc() # To save gpu memory
return model,chars
end
every(n,itr) = (x for (i,x) in enumerate(itr) if i%n == 0);
# To generate text from trained models
function generate(model,chars,n)
function sample(y)
p = Array(exp.(y)); r = rand()*sum(p)
for j=1:length(p); (r -= p[j]) < 0 && return j; end
end
x = 1
reset!(model)
for i=1:n
y = model([x])
x = sample(y)
print(chars[x])
end
println()
end
reset!(m::Chain)=(for r in m.layers; r isa RNN && (r.c=r.h=0); end);
```
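The `sample` helper above draws an index with probability proportional to its weight by subtracting weights from a scaled random number. An equivalent Python sketch of the same trick (this notebook is Julia; the sketch is only an illustration of the algorithm):

```python
# Python sketch of the cumulative-subtraction sampling used in `sample` above:
# draw r uniformly in [0, sum(weights)), then walk the weights until r goes negative.
import random

def sample_index(weights, rng=random.random):
    r = rng() * sum(weights)
    for j, w in enumerate(weights):
        r -= w
        if r < 0:
            return j
    return len(weights) - 1  # guard against floating-point round-off

# With all mass on one entry, that entry is always chosen.
print(sample_index([1.0, 0.0, 0.0]))  # 0
```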
## The Complete Works of William Shakespeare
```
RNNTYPE = :lstm
BATCHSIZE = 256
SEQLENGTH = 100
VOCABSIZE = 84
INPUTSIZE = 168
HIDDENSIZE = 334
NUMLAYERS = 1;
# Load 'The Complete Works of William Shakespeare'
include(Knet.dir("data","gutenberg.jl"))
trn,tst,shakechars = shakespeare()
map(summary,(trn,tst,shakechars))
# Print a sample
println(string(shakechars[trn[1020:1210]]...))
# Minibatch data
function mb(a)
N = length(a) ÷ BATCHSIZE
x = reshape(a[1:N*BATCHSIZE],N,BATCHSIZE)' # reshape full data to (B,N) with contiguous rows
minibatch(x[:,1:N-1], x[:,2:N], SEQLENGTH) # split into (B,T) blocks
end
dtrn,dtst = mb.((trn,tst))
length.((dtrn,dtst))
summary.(first(dtrn)) # each x and y have dimensions (BATCHSIZE,SEQLENGTH)
# 3.30e+00 ┣ / / / / / ┫ 122 [04:46, 2.35s/i]
Knet.gc()
shakemaker() = CharLM(VOCABSIZE, INPUTSIZE, HIDDENSIZE; rnnType=RNNTYPE, numLayers=NUMLAYERS)
shakemodel,shakechars = trainresults("shakespeare113.jld2", shakemaker, shakechars);
#exp(shakemodel(dtst)) # Perplexity = 3.30
generate(shakemodel,shakechars,1000)
```
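The `mb` helper reshapes the corpus into `BATCHSIZE` contiguous rows and pairs each block of inputs with targets shifted one step ahead, which is what lets the RNN state persist across minibatches. A NumPy sketch of that batching with a toy 12-token corpus (the real notebook does this in Julia):

```python
# NumPy sketch of the batching done by `mb`: reshape the corpus into B
# contiguous rows, then pair inputs with targets shifted by one step.
import numpy as np

data = np.arange(12)           # toy "corpus" of 12 token ids
B = 3                          # batch size
N = len(data) // B             # tokens per row
x = data[:N * B].reshape(B, N)           # each row is a contiguous segment
inputs, targets = x[:, :-1], x[:, 1:]    # targets = inputs shifted by one
print(inputs[0], targets[0])   # row 0: inputs [0 1 2], targets [1 2 3]
```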
## Julia programmer
```
RNNTYPE = :lstm
BATCHSIZE = 64
SEQLENGTH = 64
INPUTSIZE = 512
VOCABSIZE = 128
HIDDENSIZE = 512
NUMLAYERS = 2;
# Read julia base library source code
base = joinpath(Sys.BINDIR, Base.DATAROOTDIR, "julia")
text = ""
for (root,dirs,files) in walkdir(base)
for f in files
f[end-2:end] == ".jl" || continue
text *= read(joinpath(root,f), String)
end
# println((root,length(files),all(f->contains(f,".jl"),files)))
end
length(text)
# Find unique chars, sort by frequency, assign integer ids.
charcnt = Dict{Char,Int}()
for c in text; charcnt[c]=1+get(charcnt,c,0); end
juliachars = sort(collect(keys(charcnt)), by=(x->charcnt[x]), rev=true)
charid = Dict{Char,Int}()
for i=1:length(juliachars); charid[juliachars[i]]=i; end
hcat(juliachars, map(c->charcnt[c],juliachars))
# Keep only VOCABSIZE most frequent chars, split into train and test
data = map(c->charid[c], collect(text))
data[data .> VOCABSIZE] .= VOCABSIZE
ntst = 1<<19
tst = data[1:ntst]
trn = data[1+ntst:end]
length.((data,trn,tst))
# Print a sample
r = rand(1:(length(trn)-1000))
println(string(juliachars[trn[r:r+1000]]...))
# Minibatch data
function mb(a)
N = length(a) ÷ BATCHSIZE
x = reshape(a[1:N*BATCHSIZE],N,BATCHSIZE)' # reshape full data to (B,N) with contiguous rows
minibatch(x[:,1:N-1], x[:,2:N], SEQLENGTH) # split into (B,T) blocks
end
dtrn,dtst = mb.((trn,tst))
length.((dtrn,dtst))
summary.(first(dtrn)) # each x and y have dimensions (BATCHSIZE,SEQLENGTH)
# 3.25e+00 ┣ / / / / /┫ 126 [05:43, 2.72s/i]
juliamaker() = CharLM(VOCABSIZE, INPUTSIZE, HIDDENSIZE; rnnType=RNNTYPE, numLayers=NUMLAYERS)
juliamodel,juliachars = trainresults("juliacharlm113.jld2", juliamaker, juliachars);
#exp(juliamodel(dtst)) # Perplexity = 3.27
generate(juliamodel,juliachars,1000)
```
---
# Numpy
[What NumPy can do](https://www.numpy.org/devdocs/user/quickstart.html)
## Why NumPy?
- Why not just use a plain list?
    - It is slow
    - It lacks mathematical operations
```
heights = [1.73, 1.68, 1.71, 1.89, 1.79] # heights of 5 people
weights = [65.4, 59.2, 63.6, 88.4, 68.7] # weights of 5 people
bmis = weights / heights ** 2 # but plain lists cannot compute the 5 BMIs directly (raises TypeError)
import numpy as np
np_heights = np.array(heights) # array (vector), usually holds only numeric values
np_weights = np.array(weights)
bmis = np_weights / np_heights ** 2 # here the unit of computation is a whole vector
bmis
```
##### Updated BMI formula, now computed over vectors
\begin{equation}\vec{BMIS} = \frac{\vec{weights}}{\vec{heights}^{2}}\end{equation}
```
bmis > 21 # an intuitive, element-wise result
round(bmis) # a method only numpy.ndarray supports
np.round(bmis) # use NumPy's own version of the function
np.max(bmis) == max(bmis) # a bold guess: both approaches should work here
help(np.ndarray) # the documentation doubles as a tutorial
```
## NumPy types
```
type(np_weights) # should return a NumPy type
mixed_list = [1.0, "is", True] # a list with mixed types
type(np.array(mixed_list)) # convert it to a NumPy array
np.array(mixed_list) # every element is coerced to a string
```
##### A closer look at NumPy data types
[dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html)
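A minimal sketch of how `dtype` behaves (the values here are made up for illustration):

```python
import numpy as np

a = np.array([1, 2, 3])                      # integer dtype by default
b = np.array([1, 2, 3], dtype=np.float64)    # dtype can be set explicitly
c = np.array([1.0, "is", True])              # mixed input is upcast to strings

print(a.dtype)   # an integer dtype, e.g. int64 on most platforms
print(b.dtype)   # float64
print(c.dtype)   # a Unicode string dtype such as <U32
```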
##### Quickly searching developer docs
> Applications with a local documentation index; Jupyter's Help menu is not fast enough
- [Dash](https://kapeli.com/dash)
- [velocity](http://velocity.silverlakesoftware.com/)
## Statistics and linear algebra basics
##### A statistics example
- Compute the body mass index (BMI) on a larger sample
- Scale the earlier 5-person example up by a factor of 1000
- Draw the samples from a normal distribution
> [numpy.random.normal](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html?highlight=random%20normal#numpy.random.normal) documentation


##### Reference means and standard deviations of adult height and weight in China (2015)
|Sex|Northeast & North|Northwest|Southeast|Central|South|Southwest|
|-|-|-|-|-|-|-|
|Height, weight|mean, SD|mean, SD|mean, SD|mean, SD|mean, SD|mean, SD|
|Male (mm)|1693, 56.6|1684, 53.7|1686, 55.2|1669, 56.3|1650, 57.1|1647, 56.7|
|Female (mm)|1586, 51.8|1575, 51.9|1575, 50.8|1560, 50.7|1549, 49.7|1546, 53.9|
|Male (kg)|64, 8.2|60, 7.6|59, 7.7|57, 6.9|56, 6.9|55, 6.8|
|Female (kg)|55, 7.1|52, 7.1|51, 7.2|50, 6.8|49, 6.5|50, 6.9|
> Using the adult male data for the Southeast region: height 1686, 55.2; weight 59, 7.7
```
height_5k = np.random.normal(1.686, 0.0552, 5000) # 5000 height samples
weight_5k = np.random.normal(59, 7.7, 5000) # 5000 weight samples
shmale_5k = np.column_stack((height_5k, weight_5k)) # 5000 simulated Shanghai males
shmale_5k # randomly includes some thin and some heavy people
shmale_weight = shmale_5k[:,1]
shmale_height = shmale_5k[:,0]
shmale_height_mean = np.mean(shmale_height) # mean height
shmale_height_std = np.std(shmale_height) # height standard deviation
shmale_weight_mean = np.mean(shmale_weight) # mean weight
shmale_weight_std = np.std(shmale_weight) # weight standard deviation
from tabulate import tabulate # format the output as a table
print(tabulate([['Height (m)', shmale_height_mean, shmale_height_std],
                ['Weight (kg)', shmale_weight_mean, shmale_weight_std]],
               headers=['Shanghai males', 'Mean', 'Std dev']))
shmale_bmi = shmale_weight / shmale_height ** 2 # compute 5000 BMI values
shmale_bmi
print(np.mean(shmale_bmi), np.std(shmale_bmi)) # body-type distribution of the sample
```
##### A linear algebra example
- Solve the system of equations
\begin{equation}2x + 3y = 8 \end{equation}
\begin{equation}5x + 2y = 9 \end{equation}
- Written in matrix form:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \;\;\;\; \vec{b} = \begin{bmatrix}8\\9\end{bmatrix}$$
- The goal is to solve for the vector x
$$ A\vec{x}= \vec{b} $$
```
a = np.array([[2, 3], [5, 2]]) # coefficient matrix of the system
a.transpose() # transpose
b = np.array([8, 9])
print(b.shape, a.shape) # shape is not a function; it is a tuple attribute
b.transpose()
# indexing into the matrix
print(tabulate([['0', a[0,0], a[0,1]],
                ['1', a[1,0], a[1,1]]],
               headers=['A', '0', '1']))
```
> [Matrix operations](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html) documentation


```
# matrix inverse
from numpy.linalg import inv
a_inv = inv(a)
a_inv
```
$$ A^{-1}A=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix} $$
```
np.round(a_inv @ a) # multiply the inverse by the original matrix to verify it; @ is matrix multiplication
```
$$ A^{-1}A\vec{x}=A^{-1}\vec{b}$$
$$ \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}\vec{x}= A^{-1}\vec{b}$$
$$ \vec{x} = A^{-1}\vec{b}$$
```
x = a_inv @ b
x
from numpy.linalg import solve # import the solver
solve(a, b)
```
$$ \vec{x} = \begin{bmatrix}1\\2\end{bmatrix} $$
$$ x=1 $$
$$ y=2 $$
---
# Character level language model - Dinosaurus land
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250px;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
- How to store text data for processing using an RNN
- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
- How to build a character-level text generation recurrent neural network
- Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
```
import numpy as np
from utils import *
import random
```
## 1 - Problem Statement
### 1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
```
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
```
The characters are a-z (26 characters) plus the "\n" (newline) character, which in this assignment plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture, except that here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out which index corresponds to which character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries.
```
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
```
### 1.2 - Overview of the model
Your model will have the following structure:
- Initialize parameters
- Run the optimization loop
- Forward propagation to compute the loss function
- Backward propagation to compute the gradients with respect to the loss function
- Clip the gradients to avoid exploding gradients
- Using the gradients, update your parameter with the gradient descent update rule.
- Return the learned parameters
<img src="images/rnn.png" style="width:450px;height:300px;">
<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". </center></caption>
At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$.
## 2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
### 2.1 - Clipping the gradients in the optimization loop
In this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values.
In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone.
<img src="images/clip.png" style="width:400px;height:150px;">
<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. </center></caption>
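As a quick standalone illustration of this element-wise rule (the numbers below are made up, not the graded test values):

```python
import numpy as np

gradient = np.array([-12.5, -3.0, 0.5, 42.0])
np.clip(gradient, -10, 10, out=gradient)   # clip in place via the out argument
print(gradient)   # values become [-10., -3., 0.5, 10.]
```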
**Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`.
```
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
```
**Expected output:**
<table>
<tr>
<td>
**gradients["dWaa"][1][2] **
</td>
<td>
10.0
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]**
</td>
<td>
-10.0
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td>
0.29713815361
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td>
[ 10.]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>
[ 8.45833407]
</td>
</tr>
</table>
### 2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500px;height:300px;">
<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>
**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:
- **Step 1**: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$
- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a `softmax()` function that you can use.
- **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).
Here is an example of how to use `np.random.choice()`:
```python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
```
This means that you will pick the `index` according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
- **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
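Steps 3 and 4 in isolation look roughly like this (the vocabulary size and probability vector are made up for illustration):

```python
import numpy as np

vocab_size = 4
y = np.array([0.1, 0.0, 0.7, 0.2])                    # pretend this is the softmax output
idx = np.random.choice(list(range(vocab_size)), p=y.ravel())

x = np.zeros((vocab_size, 1))                         # one-hot encode the sampled character
x[idx] = 1
print(idx, x.ravel())
```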
```
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)
z = np.dot(Wya, a) + by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)), p = y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
```
**Expected output:**
<table>
<tr>
<td>
**list of sampled indices:**
</td>
<td>
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>
7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
</td>
</tr><tr>
<td>
**list of sampled characters:**
</td>
<td>
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>
'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>
'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
</td>
</tr>
</table>
## 3 - Building the language model
It is time to build the character-level language model for text generation.
### 3.1 - Gradient descent
In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients if necessary
- Update your parameters using gradient descent
**Exercise**: Implement this optimization process (one step of stochastic gradient descent).
We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in the backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
```
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
```
**Expected output:**
<table>
<tr>
<td>
**Loss **
</td>
<td>
126.503975722
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]**
</td>
<td>
0.194709315347
</td>
</tr>
<tr>
<td>
**np.argmax(gradients["dWax"])**
</td>
<td> 93
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td> -0.007773876032
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td> [-0.06809825]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>[ 0.01538192]
</td>
</tr>
<tr>
<td>
**a_last[4]**
</td>
<td> [-1.]
</td>
</tr>
</table>
### 3.2 - Training the model
Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 2,000 steps of stochastic gradient descent, you will sample several names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:
```python
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
```
Note that we use `index = j % len(examples)`, where `j = 1, ..., num_iterations`, to make sure that `examples[index]` is always a valid index (`index` is smaller than `len(examples)`).
The first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
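For example, with the assignment's sorted mapping ("\n" = 0, a = 1, ..., z = 26), a name such as "trex" would give (a small illustrative sketch, not the graded code):

```python
chars = ['\n'] + list('abcdefghijklmnopqrstuvwxyz')
char_to_ix = {ch: i for i, ch in enumerate(sorted(chars))}

name = 'trex'
X = [None] + [char_to_ix[ch] for ch in name]
Y = X[1:] + [char_to_ix['\n']]
print(X)   # [None, 20, 18, 5, 24]
print(Y)   # [20, 18, 5, 24, 0]
```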
```
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Smooth the loss with an exponential moving average so the reported curve is stable.
loss = smooth(loss, curr_loss)
# Every 2000 iterations, generate sample names with sample() to check that the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
```
Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, it should learn to generate reasonable-looking names.
```
parameters = model(data, ix_to_char, char_to_ix)
```
## Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250px;height:300px;">
## 4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearean poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character should be much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500px;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
```
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
```
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt).
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (fewer than 40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
```
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
```
The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:
- LSTMs instead of the basic RNN to capture longer-range dependencies
- The model is a deeper, stacked LSTM model (2 layer)
- Using Keras instead of hand-written NumPy code to simplify the implementation
If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.
Congratulations on finishing this notebook!
**References**:
- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py
---
# SETTINGS
This notebook imports the processed data `orders.csv` and `items.csv` generated in `notebook_01_data_prep.ipynb`.
The notebook performs the following operations:
- creating new item and order-related features
- transforming order data into a format suitable for modeling
- exporting the resulting labeled and unlabeled data as `df.csv` and `df_test.csv`.
The created features include:
- fine-grained item categories
- average customer ratings per category and manufacturer
- mean prices per category and manufacturer
- count and total amount of orders over previous time periods
- number of promotion days
- days since the last order
- automatically extracted time series features using `tsfresh`
A detailed walkthrough of the code covering the key steps is provided in [this blog post](https://kozodoi.me/python/time%20series/demand%20forecasting/competitions/2020/07/27/demand-forecasting.html).
```
##### LIBRARIES
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats
from scipy.signal import find_peaks
import os
import time
import datetime
import random
import multiprocessing
import pickle
import warnings
import gc
from tqdm import tqdm
import sys
from tsfresh import extract_features
##### MODULES
sys.path.append('../codes')
from versioning import save_csv_version
##### SETTINGS
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
plt.style.use('dark_background')
%matplotlib inline
gc.enable()
```
# DATA IMPORT
```
# read data
orders = pd.read_csv('../data/prepared/orders_v1.csv', compression = 'gzip')
items = pd.read_csv('../data/prepared/items_v1.csv', compression = 'gzip')
print(orders.shape)
print(items.shape)
# convert dates
orders['time'] = pd.to_datetime(orders['time'].astype('str'), infer_datetime_format = True)
items['promotion_0'] = pd.to_datetime(items['promotion_0'].astype('str'), infer_datetime_format = True)
items['promotion_1'] = pd.to_datetime(items['promotion_1'].astype('str'), infer_datetime_format = True)
items['promotion_2'] = pd.to_datetime(items['promotion_2'].astype('str'), infer_datetime_format = True)
```
# ADD FEATURES: ITEMS
```
items.head()
# price ratio
items['recommended_simulation_price_ratio'] = items['simulationPrice'] / items['recommendedRetailPrice']
items['recommended_simulation_price_ratio'].describe()
# detailed item category
items['category'] = items['category1'].astype(str) + items['category2'].astype(str) + items['category3'].astype(str)
items['category'] = items['category'].astype(int)
items['category'].nunique()
# customer rating ratio per manufacturer
rating_manufacturer = items.groupby('manufacturer')['customerRating'].agg('mean').reset_index()
rating_manufacturer.columns = ['manufacturer', 'mean_customerRating_manufacturer']
items = items.merge(rating_manufacturer, how = 'left', on = 'manufacturer')
items['customerRating_manufacturer_ratio'] = items['customerRating'] / items['mean_customerRating_manufacturer']
del items['mean_customerRating_manufacturer']
items['customerRating_manufacturer_ratio'].describe()
# customer rating ratio per category
rating_category = items.groupby('category')['customerRating'].agg('mean').reset_index()
rating_category.columns = ['category', 'mean_customerRating_category']
items = items.merge(rating_category, how = 'left', on = 'category')
items['customerRating_category_ratio'] = items['customerRating'] / items['mean_customerRating_category']
del items['mean_customerRating_category']
items['customerRating_category_ratio'].describe()
```
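The rating-ratio features above all follow one pattern: compute a group mean, merge it back, divide, and drop the helper column. A toy sketch of that pattern on synthetic data (column names are illustrative):

```python
import pandas as pd

items = pd.DataFrame({'itemID': [1, 2, 3, 4],
                      'manufacturer': ['a', 'a', 'b', 'b'],
                      'customerRating': [4.0, 2.0, 5.0, 3.0]})

# group mean -> merge back -> divide -> drop the helper
mean_rating = (items.groupby('manufacturer')['customerRating']
                    .mean().rename('mean_rating').reset_index())
items = items.merge(mean_rating, how='left', on='manufacturer')
items['rating_ratio'] = items['customerRating'] / items['mean_rating']
del items['mean_rating']

print(items['rating_ratio'].tolist())  # each rating relative to its manufacturer's mean
```

A ratio above 1 marks an item rated better than its peer group, which is more informative to a model than the raw rating alone.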
# ADD FEATURES: ORDERS
```
# total orders
print(orders['order'].sum())
##### AGGREGATE ORDERS BY DAY
# aggregation
orders['day_of_year'] = orders['time'].dt.dayofyear
orders_price = orders.groupby(['itemID', 'day_of_year'])['salesPrice'].agg('mean').reset_index()
orders = orders.groupby(['itemID', 'day_of_year'])['order'].agg('sum').reset_index()
orders.head()
# check total orders
print(orders['order'].sum())
##### ADD MISSING INPUTS [ORDERS]
# add items that were never sold before
missing_itemIDs = set(items['itemID'].unique()) - set(orders['itemID'].unique())
missing_rows = pd.DataFrame({'itemID': list(missing_itemIDs),
'day_of_year': np.ones(len(missing_itemIDs)).astype('int'),
'order': np.zeros(len(missing_itemIDs)).astype('int')})
orders = pd.concat([orders, missing_rows], axis = 0)
print(orders.shape)
# add zeros for days with no transactions
agg_orders = orders.groupby(['itemID', 'day_of_year']).order.unique().unstack('day_of_year').stack('day_of_year', dropna = False)
agg_orders = agg_orders.reset_index()
agg_orders.columns = ['itemID', 'day_of_year', 'order']
agg_orders['order'].fillna(0, inplace = True)
agg_orders['order'] = agg_orders['order'].astype(int)
print(agg_orders.shape)
##### ADD MISSING INPUTS [PRICES]
# add items that were never sold before
missing_rows = pd.DataFrame({'itemID': list(missing_itemIDs),
'day_of_year': np.ones(len(missing_itemIDs)).astype('int'),
'salesPrice': np.zeros(len(missing_itemIDs)).astype('int')})
orders_price = pd.concat([orders_price, missing_rows], axis = 0)
print(orders_price.shape)
# add zeros for days with no transactions
agg_orders_price = orders_price.groupby(['itemID', 'day_of_year']).salesPrice.unique().unstack('day_of_year').stack('day_of_year', dropna = False)
agg_orders_price = agg_orders_price.reset_index()
agg_orders_price.columns = ['itemID', 'day_of_year', 'salesPrice']
agg_orders_price['salesPrice'].fillna(0, inplace = True)
agg_orders_price['salesPrice'] = agg_orders_price['salesPrice'].astype(int)
agg_orders_price.loc[agg_orders_price['salesPrice'] == 0, 'salesPrice'] = np.nan
print(agg_orders_price.shape)
# fill missing prices for dates with no orders
agg_orders_price['salesPrice'] = agg_orders_price.groupby(['itemID']).salesPrice.fillna(method = 'ffill')
agg_orders_price['salesPrice'] = agg_orders_price.groupby(['itemID']).salesPrice.fillna(method = 'bfill')
agg_orders_price = agg_orders_price.merge(items[['itemID', 'simulationPrice']], how = 'left', on = 'itemID')
null_price = agg_orders_price['salesPrice'].isnull()
agg_orders_price.loc[null_price, 'salesPrice'] = agg_orders_price.loc[null_price, 'simulationPrice']
del agg_orders_price['simulationPrice']
# merge prices to orders
agg_orders = agg_orders.merge(agg_orders_price, how = 'left', on = ['itemID', 'day_of_year'])
print(agg_orders.shape)
##### FIND PROMOTION DAYS
# documentation says that we have to manually mark promotion days in the training period
# without marking, predictions are difficult because orders in some days explode without apparent reason
# we need to be very careful with threshold and rely on visual analysis and/or better metric to mark promos
# I started with treating promotions as days when the number of orders for a given item exceeds 90th percentile
# now I am trying to use a smarter technique using find_peaks() and specifying height and prominence
# computations
agg_orders['promotion'] = 0
for itemID in tqdm(agg_orders['itemID'].unique()):
'''
# using quantiles
promo_quant = orders[orders['itemID'] == itemID]['order'].quantile(0.90)
orders.loc[(orders['itemID'] == itemID) & (orders['order'] >= promo_quant), 'promotion'] = 1
'''
# using find_peaks
promo = np.zeros(len(agg_orders[agg_orders['itemID'] == itemID]))
avg = agg_orders[(agg_orders['itemID'] == itemID)]['order'].median()
std = agg_orders[(agg_orders['itemID'] == itemID)]['order'].std()
peaks, _ = find_peaks(np.append(agg_orders[agg_orders['itemID'] == itemID]['order'].values, avg), # append avg to enable marking last point as promo
prominence = max(5, std), # peak difference with neighbor points; max(5,std) to exclude cases when std is too small
height = avg + 2*std) # minimal height of a peak
promo[peaks] = 1
agg_orders.loc[agg_orders['itemID'] == itemID, 'promotion'] = promo
# check total promotions
print(agg_orders['promotion'].sum())
##### COMPARE PROMOTIONS NUMBER
# computations
promo_in_train = (agg_orders['promotion'].sum() / agg_orders['day_of_year'].max()) / len(items)
promo_in_test = (3*len(items) - items.promotion_0.isnull().sum() - items.promotion_2.isnull().sum() - items.promotion_1.isnull().sum()) / 14 / len(items)
# info
print('Daily p(promotion) per item in train: {}'.format(np.round(promo_in_train, 4)))
print('Daily p(promotion) per item in test: {}'.format(np.round(promo_in_test , 4)))
##### EXAMPLE SALES PLOT
# compute promo count
promo_count = agg_orders.groupby('itemID')['promotion'].agg('sum').reset_index()
promo_count = promo_count.sort_values('promotion').reset_index(drop = True)
# plot some items
item_plots = [0, 2000, 4000, 6000, 8000, 9000, 10000, 10100, 10200, 10300, 10400, 10462]
fig = plt.figure(figsize = (16, 12))
for i in range(len(item_plots)):
plt.subplot(3, 4, i + 1)
df = agg_orders[agg_orders.itemID == promo_count['itemID'][item_plots[i]]]
plt.scatter(df['day_of_year'], df['order'], c = df['promotion'])
plt.ylabel('Total Orders')
plt.xlabel('Day')
##### COMPUTING TARGETS AND FEATURES
# parameters
days_input = [1, 7, 14, 21, 28, 35]
days_target = 14
# preparations
day_first = np.max(days_input)
day_last = agg_orders['day_of_year'].max() - days_target + 1
orders = None
# merge manufacturer and category
agg_orders = agg_orders.merge(items[['itemID', 'manufacturer']], how = 'left')
agg_orders = agg_orders.merge(items[['itemID', 'category']], how = 'left')
# computations
for day_of_year in tqdm(list(range(149, day_last)) + [agg_orders['day_of_year'].max()]):
### VALIDATION: TARGET, PROMOTIONS, PRICES
# day intervals
target_day_min = day_of_year + 1
target_day_max = day_of_year + days_target
# compute target and promo: labeled data
if day_of_year < agg_orders['day_of_year'].max():
# target and future promo
tmp_df = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('itemID')[['order', 'promotion']].agg('sum').reset_index()
tmp_df.columns = ['itemID', 'target', 'promo_in_test']
# future price
tmp_df['mean_price_test'] = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('itemID')['salesPrice'].agg('mean').reset_index()['salesPrice']
# merge manufacturer and category
tmp_df = tmp_df.merge(items[['itemID', 'manufacturer', 'category']], how = 'left', on = 'itemID')
# future price per manufacturer
tmp_df_manufacturer = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('manufacturer')['salesPrice'].agg('mean').reset_index()
tmp_df_manufacturer.columns = ['manufacturer', 'mean_price_test_manufacturer']
tmp_df = tmp_df.merge(tmp_df_manufacturer, how = 'left', on = 'manufacturer')
# future price per category
tmp_df_category = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('category')['salesPrice'].agg('mean').reset_index()
tmp_df_category.columns = ['category', 'mean_price_test_category']
tmp_df = tmp_df.merge(tmp_df_category, how = 'left', on = 'category')
# future promo per manufacturer
tmp_df_manufacturer = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('manufacturer')['promotion'].agg('sum').reset_index()
tmp_df_manufacturer.columns = ['manufacturer', 'promo_in_test_manufacturer']
tmp_df = tmp_df.merge(tmp_df_manufacturer, how = 'left', on = 'manufacturer')
# future promo per category
tmp_df_category = agg_orders[(agg_orders['day_of_year'] >= target_day_min) &
(agg_orders['day_of_year'] <= target_day_max)
].groupby('category')['promotion'].agg('sum').reset_index()
tmp_df_category.columns = ['category', 'promo_in_test_category']
tmp_df = tmp_df.merge(tmp_df_category, how = 'left', on = 'category')
# compute target and promo: unlabeled data
else:
# placeholders
tmp_df = pd.DataFrame({'itemID': items.itemID,
'target': np.nan,
'promo_in_test': np.nan,
'mean_price_test': items.simulationPrice,
'manufacturer': items.manufacturer,
'category': items.category,
'promo_in_test_manufacturer': np.nan,
'promo_in_test_category': np.nan})
### TRAINING: LAG-BASED FEATURES
# compute features
for day_input in days_input:
# day intervals
input_day_min = day_of_year - day_input + 1
input_day_max = day_of_year
# frequency, promo and price
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max)
].groupby('itemID')
tmp_df['order_sum_last_' + str(day_input)] = tmp_df_input['order'].agg('sum').reset_index()['order']
tmp_df['order_count_last_' + str(day_input)] = tmp_df_input['order'].agg(lambda x: len(x[x > 0])).reset_index()['order']
tmp_df['promo_count_last_' + str(day_input)] = tmp_df_input['promotion'].agg('sum').reset_index()['promotion']
tmp_df['mean_price_last_' + str(day_input)] = tmp_df_input['salesPrice'].agg('mean').reset_index()['salesPrice']
# frequency, promo per manufacturer
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max)
].groupby('manufacturer')
tmp_df_manufacturer = tmp_df_input['order'].agg('sum').reset_index()
tmp_df_manufacturer.columns = ['manufacturer', 'order_manufacturer_sum_last_' + str(day_input)]
tmp_df_manufacturer['order_manufacturer_count_last_' + str(day_input)] = tmp_df_input['order'].agg(lambda x: len(x[x > 0])).reset_index()['order']
tmp_df_manufacturer['promo_manufacturer_count_last_' + str(day_input)] = tmp_df_input['promotion'].agg('sum').reset_index()['promotion']
tmp_df = tmp_df.merge(tmp_df_manufacturer, how = 'left', on = 'manufacturer')
# frequency, promo per category
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max)
].groupby('category')
tmp_df_category = tmp_df_input['order'].agg('sum').reset_index()
tmp_df_category.columns = ['category', 'order_category_sum_last_' + str(day_input)]
tmp_df_category['order_category_count_last_' + str(day_input)] = tmp_df_input['order'].agg(lambda x: len(x[x > 0])).reset_index()['order']
tmp_df_category['promo_category_count_last_' + str(day_input)] = tmp_df_input['promotion'].agg('sum').reset_index()['promotion']
tmp_df = tmp_df.merge(tmp_df_category, how = 'left', on = 'category')
# frequency, promo per all items
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max)]
tmp_df['order_all_sum_last_' + str(day_input)] = tmp_df_input['order'].agg('sum')
tmp_df['order_all_count_last_' + str(day_input)] = tmp_df_input['order'].agg(lambda x: len(x[x > 0]))
tmp_df['promo_all_count_last_' + str(day_input)] = tmp_df_input['promotion'].agg('sum')
# recency
if day_input == max(days_input):
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max) &
(agg_orders['order'] > 0)
].groupby('itemID')
tmp_df['days_since_last_order'] = (day_of_year - tmp_df_input['day_of_year'].agg('max')).reindex(tmp_df.itemID).reset_index()['day_of_year']
tmp_df['days_since_last_order'].fillna(day_input, inplace = True)
# tsfresh features
if day_input == max(days_input):
tmp_df_input = agg_orders[(agg_orders['day_of_year'] >= input_day_min) &
(agg_orders['day_of_year'] <= input_day_max)]
tmp_df_input = tmp_df_input[['day_of_year', 'itemID', 'order']]
extracted_features = extract_features(tmp_df_input, column_id = 'itemID', column_sort = 'day_of_year')
extracted_features['itemID'] = extracted_features.index
tmp_df = tmp_df.merge(extracted_features, how = 'left', on = 'itemID')
### FINAL PREPARATIONS
# add day of year
tmp_df.insert(1, column = 'day_of_year', value = day_of_year)
# merge data
orders = pd.concat([orders, tmp_df], axis = 0)
# drop manufacturer and category
del orders['manufacturer']
del orders['category']
##### REMOVE MISSINGS
good_nas = ['target',
'mean_price_test_category', 'mean_price_test_manufacturer',
'promo_in_test', 'promo_in_test_category', 'promo_in_test_manufacturer']
nonas = list(orders.columns[orders.isnull().sum() == 0]) + good_nas
orders = orders[nonas]
print(orders.shape)
##### COMPUTE MEAN PRICE RATIOS
print(orders.shape)
price_vars = ['mean_price_last_1', 'mean_price_last_7', 'mean_price_last_14',
'mean_price_last_21', 'mean_price_last_28', 'mean_price_last_35']
for var in price_vars:
orders['ratio_' + str(var)] = orders['mean_price_test'] / orders[var]
orders['ratio_manufacturer_' + str(var)] = orders['mean_price_test_manufacturer'] / orders[var]
orders['ratio_category_' + str(var)] = orders['mean_price_test_category'] / orders[var]
print(orders.shape)
##### EXAMPLE SALES PLOT
df = orders[orders.itemID == 1]
plt.figure(figsize = (10, 5))
plt.scatter(df['day_of_year'], df['target'], c = df['promo_in_test'])
plt.title('itemID == 1')
plt.ylabel('Target (orders in next 14 days)')
plt.xlabel('Day')
```
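To see the peak-based promotion flagging from the loop above in isolation, here is a toy run of `find_peaks` with the same `prominence`/`height` heuristics (the synthetic order series and thresholds are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

orders = np.array([2, 3, 2, 4, 30, 3, 2, 3, 28, 2], dtype=float)
avg, std = np.median(orders), orders.std()

# append avg so a spike on the last day can still be detected as a peak
peaks, _ = find_peaks(np.append(orders, avg),
                      prominence=max(5, std),   # ignore bumps barely above noise
                      height=avg + 2 * std)     # minimal absolute height
promo = np.zeros(len(orders))
promo[peaks] = 1
print(peaks)  # day indices flagged as promotions
```

Only the two large spikes clear both thresholds; ordinary day-to-day variation around the median is left unflagged, which is the point of combining `prominence` with `height`.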
# MERGE DATA SETS
```
print(orders.shape)
print(items.shape)
df = pd.merge(orders, items, on = 'itemID', how = 'left')
print(df.shape)
```
# EXTRACT TEST DATA
```
# partition into train and test
df_train = df[df['day_of_year'] < df['day_of_year'].max()].copy()
df_test = df[df['day_of_year'] == df['day_of_year'].max()].copy()
print(df_train.shape)
print(df_test.shape)
##### COMPUTE FEATURES FOR TEST DATA
# add promotion info to test
promo_vars = df_test.filter(like = 'promotion_').columns
df_test['promo_in_test'] = 3 - df_test[promo_vars].isnull().sum(axis = 1)
df_test['promo_in_test'].describe()
### PROMO PER MANUFACTURER, CATEGORY
del df_test['promo_in_test_manufacturer'], df_test['promo_in_test_category']
# future promo per manufacturer
tmp_df_manufacturer = df_test.groupby('manufacturer')['promo_in_test'].agg('sum').reset_index()
tmp_df_manufacturer.columns = ['manufacturer', 'promo_in_test_manufacturer']
df_test = df_test.merge(tmp_df_manufacturer, how = 'left', on = 'manufacturer')
print(df_test.shape)
# future promo per category
tmp_df_category = df_test.groupby('category')['promo_in_test'].agg('sum').reset_index()
tmp_df_category.columns = ['category', 'promo_in_test_category']
df_test = df_test.merge(tmp_df_category, how = 'left', on = 'category')
print(df_test.shape)
### PRICE PER MANUFACTURER, CATEGORY
del df_test['mean_price_test_manufacturer'], df_test['mean_price_test_category']
# future price per manufacturer
tmp_df_manufacturer = df_test.groupby('manufacturer')['mean_price_test'].agg('mean').reset_index()
tmp_df_manufacturer.columns = ['manufacturer', 'mean_price_test_manufacturer']
df_test = df_test.merge(tmp_df_manufacturer, how = 'left', on = 'manufacturer')
print(df_test.shape)
# future price per category
tmp_df_category = df_test.groupby('category')['mean_price_test'].agg('mean').reset_index()
tmp_df_category.columns = ['category', 'mean_price_test_category']
df_test = df_test.merge(tmp_df_category, how = 'left', on = 'category')
print(df_test.shape)
### MEAN PRICE RATIOS
for var in price_vars:
df_test['ratio_' + str(var)] = df_test['mean_price_test'] / df_test[var]
df_test['ratio_manufacturer_' + str(var)] = df_test['mean_price_test_manufacturer'] / df_test[var]
df_test['ratio_category_' + str(var)] = df_test['mean_price_test_category'] / df_test[var]
print(df_test.shape)
# drop promotion dates
df_test.drop(promo_vars, axis = 1, inplace = True)
df_train.drop(promo_vars, axis = 1, inplace = True)
print(df_train.shape)
print(df_test.shape)
# drop mean prices
price_vars = price_vars + ['mean_price_test_manufacturer', 'mean_price_test_category']
df_test.drop(price_vars, axis = 1, inplace = True)
df_train.drop(price_vars, axis = 1, inplace = True)
print(df_train.shape)
print(df_test.shape)
```
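Earlier in this notebook, days without transactions were zero-filled through an unstack/stack round trip. The same effect can be sketched with an explicit full (itemID, day) grid and `reindex` (toy data, illustrative only):

```python
import pandas as pd

orders = pd.DataFrame({'itemID': [1, 1, 2],
                       'day_of_year': [1, 3, 2],
                       'order': [5, 2, 7]})

# build every (itemID, day) pair and fill the absent days with zero orders
full_grid = pd.MultiIndex.from_product(
    [sorted(orders['itemID'].unique()), range(1, 4)],
    names=['itemID', 'day_of_year'])
agg = (orders.set_index(['itemID', 'day_of_year'])
             .reindex(full_grid, fill_value=0)
             .reset_index())
print(agg.shape)  # 2 items x 3 days = 6 rows
```

Without this densification, lag features such as `order_sum_last_7` would silently skip zero-order days instead of counting them.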
# EXPORT
```
# save data frame
# save_csv_version() automatically adds version number to prevent overwriting
save_csv_version('../data/prepared/df.csv', df_train, index = False, compression = 'gzip')
save_csv_version('../data/prepared/df_test.csv', df_test, index = False, compression = 'gzip', min_version = 3)
print(df_train.shape)
print(df_test.shape)
```
<a href="https://colab.research.google.com/github/overfit-ir/persian-twitter-ner/blob/master/benchmark.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Setup
```
!pip -q install transformers==4.2.2
!pip -q install sentencepiece
!pip -q install nereval
import transformers
transformers.__version__
import pandas as pd
import numpy as np
from transformers import (
pipeline,
AutoConfig,
AutoTokenizer,
AutoModel,
AutoModelForTokenClassification
)
from pprint import pprint
! rm -rf data
! wget -q --show-progress https://raw.githubusercontent.com/overfit-ir/persian-twitter-ner/master/twitter_data/persian-ner-twitter-data/test.txt
! wget -q --show-progress https://raw.githubusercontent.com/overfit-ir/persian-twitter-ner/master/twitter_data/persian-ner-twitter-data/train.txt
! mkdir data && mv test.txt data/ && mv train.txt data/
! rm -rf data_peyma
! wget -q --show-progress https://raw.githubusercontent.com/overfit-ir/persian-twitter-ner/master/ner_data/peyma/test.txt
! wget -q --show-progress https://raw.githubusercontent.com/overfit-ir/persian-twitter-ner/master/ner_data/peyma/train.txt
! mkdir data_peyma && mv test.txt data_peyma/ && mv train.txt data_peyma/
```
# Convert to Text
```
from pathlib import Path
import re
def convert_lines_to_text(file_path, separator='\t'):
file_path = Path(file_path)
raw_text = file_path.read_text().strip()
raw_docs = re.split(r'\n\t?\n', raw_text)
token_docs = []
tag_docs = []
for doc in raw_docs:
tokens = []
tags = []
for line in doc.split('\n'):
token, tag = line.split(separator, 1)
tokens.append(token)
tags.append(tag)
token_docs.append(tokens)
tag_docs.append(tags)
return token_docs, tag_docs
texts, tags = convert_lines_to_text('data/test.txt')
texts_train, tags_train = convert_lines_to_text('data/train.txt')
s = ''
for word in texts[8]:
s += word + ' '
print(s)
texts_peyma, tags_peyma = convert_lines_to_text('data_peyma/test.txt', separator='|')
texts_peyma_train, tags_peyma_train = convert_lines_to_text('data_peyma/train.txt', separator='|')
s = ''
for word in texts_peyma[0]:
s += word + ' '
print(s)
```
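The evaluator in the next section compares entity spans rather than individual BIO tags. As a standalone sketch of the span-extraction idea (the notebook's `collect_named_entities` implements the same logic with named tuples):

```python
def bio_spans(tags):
    """Collapse a BIO tag sequence into (type, start, end) spans."""
    spans, start, etype = [], None, None
    for i, t in enumerate(tags):
        if t == 'O' or t.startswith('B-'):
            if etype is not None:            # close the running entity
                spans.append((etype, start, i - 1))
                etype = None
            if t.startswith('B-'):           # open a new one
                etype, start = t[2:], i
        elif t.startswith('I-') and etype != t[2:]:
            if etype is not None:            # type change without a B- tag
                spans.append((etype, start, i - 1))
            etype, start = t[2:], i
    if etype is not None:                    # entity running to the last token
        spans.append((etype, start, len(tags) - 1))
    return spans

print(bio_spans(['B-PER', 'I-PER', 'O', 'B-LOC']))
```

Working at span level is what makes the strict/partial/exact evaluation schemes below possible: a prediction can then match a gold entity exactly, overlap it, or miss it entirely.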
# Benchmark
```
import logging
from collections import namedtuple
from copy import deepcopy
logging.basicConfig(
format="%(asctime)s %(name)s %(levelname)s: %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level="DEBUG",
)
Entity = namedtuple("Entity", "e_type start_offset end_offset")
class Evaluator():
def __init__(self, true, pred, tags):
"""
"""
if len(true) != len(pred):
raise ValueError("Number of predicted documents does not equal true")
self.true = true
self.pred = pred
self.tags = tags
# Setup dict into which metrics will be stored.
self.metrics_results = {
'correct': 0,
'incorrect': 0,
'partial': 0,
'missed': 0,
'spurious': 0,
'possible': 0,
'actual': 0,
'precision': 0,
'recall': 0,
}
# Copy results dict to cover the four schemes.
self.results = {
'strict': deepcopy(self.metrics_results),
'ent_type': deepcopy(self.metrics_results),
'partial': deepcopy(self.metrics_results),
'exact': deepcopy(self.metrics_results),
}
# Create an accumulator to store results
self.evaluation_agg_entities_type = {e: deepcopy(self.results) for e in tags}
def evaluate(self):
logging.info(
"Imported %s predictions for %s true examples",
len(self.pred), len(self.true)
)
for true_ents, pred_ents in zip(self.true, self.pred):
# Check that the length of the true and predicted examples are the
# same. This must be checked here, because another error may not
# be thrown if the lengths do not match.
if len(true_ents) != len(pred_ents):
raise ValueError("Prediction length does not match true example length")
# Compute results for one message
tmp_results, tmp_agg_results = compute_metrics(
collect_named_entities(true_ents),
collect_named_entities(pred_ents),
self.tags
)
# Cycle through each result and accumulate
# TODO: Combine these loops below:
for eval_schema in self.results:
for metric in self.results[eval_schema]:
self.results[eval_schema][metric] += tmp_results[eval_schema][metric]
# Calculate global precision and recall
self.results = compute_precision_recall_wrapper(self.results)
# Aggregate results by entity type
for e_type in self.tags:
for eval_schema in tmp_agg_results[e_type]:
for metric in tmp_agg_results[e_type][eval_schema]:
self.evaluation_agg_entities_type[e_type][eval_schema][metric] += tmp_agg_results[e_type][eval_schema][metric]
# Calculate precision recall at the individual entity level
self.evaluation_agg_entities_type[e_type] = compute_precision_recall_wrapper(self.evaluation_agg_entities_type[e_type])
return self.results, self.evaluation_agg_entities_type
def collect_named_entities(tokens):
"""
Creates a list of Entity named-tuples, storing the entity type and the start and end
offsets of the entity.
:param tokens: a list of tags
:return: a list of Entity named-tuples
"""
named_entities = []
start_offset = None
end_offset = None
ent_type = None
for offset, token_tag in enumerate(tokens):
if token_tag == 'O':
if ent_type is not None and start_offset is not None:
end_offset = offset - 1
named_entities.append(Entity(ent_type, start_offset, end_offset))
start_offset = None
end_offset = None
ent_type = None
elif ent_type is None:
ent_type = token_tag[2:]
start_offset = offset
elif ent_type != token_tag[2:] or (ent_type == token_tag[2:] and token_tag[:1] == 'B'):
end_offset = offset - 1
named_entities.append(Entity(ent_type, start_offset, end_offset))
# start of a new entity
ent_type = token_tag[2:]
start_offset = offset
end_offset = None
# catches an entity that goes up until the last token
if ent_type is not None and start_offset is not None and end_offset is None:
named_entities.append(Entity(ent_type, start_offset, len(tokens)-1))
return named_entities
def compute_metrics(true_named_entities, pred_named_entities, tags):
eval_metrics = {'correct': 0, 'incorrect': 0, 'partial': 0, 'missed': 0, 'spurious': 0, 'precision': 0, 'recall': 0}
# overall results
evaluation = {
'strict': deepcopy(eval_metrics),
'ent_type': deepcopy(eval_metrics),
'partial': deepcopy(eval_metrics),
'exact': deepcopy(eval_metrics)
}
# results by entity type
evaluation_agg_entities_type = {e: deepcopy(evaluation) for e in tags}
# keep track of entities that overlapped
true_which_overlapped_with_pred = []
# Subset into only the tags that we are interested in.
# NOTE: we remove the tags we don't want from both the predicted and the
# true entities. This covers the two cases where mismatches can occur:
#
# 1) Where the model predicts a tag that is not present in the true data
# 2) Where there is a tag in the true data that the model is not capable of
# predicting.
true_named_entities = [ent for ent in true_named_entities if ent.e_type in tags]
pred_named_entities = [ent for ent in pred_named_entities if ent.e_type in tags]
# go through each predicted named-entity
for pred in pred_named_entities:
found_overlap = False
# Check each of the potential scenarios in turn. See
# http://www.davidsbatista.net/blog/2018/05/09/Named_Entity_Evaluation/
# for scenario explanation.
# Scenario I: Exact match between true and pred
if pred in true_named_entities:
true_which_overlapped_with_pred.append(pred)
evaluation['strict']['correct'] += 1
evaluation['ent_type']['correct'] += 1
evaluation['exact']['correct'] += 1
evaluation['partial']['correct'] += 1
# for the agg. by e_type results
evaluation_agg_entities_type[pred.e_type]['strict']['correct'] += 1
evaluation_agg_entities_type[pred.e_type]['ent_type']['correct'] += 1
evaluation_agg_entities_type[pred.e_type]['exact']['correct'] += 1
evaluation_agg_entities_type[pred.e_type]['partial']['correct'] += 1
else:
# check for overlaps with any of the true entities
for true in true_named_entities:
pred_range = range(pred.start_offset, pred.end_offset + 1)  # inclusive, so single-token entities are non-empty
true_range = range(true.start_offset, true.end_offset + 1)
# Scenario IV: Offsets match, but entity type is wrong
if true.start_offset == pred.start_offset and pred.end_offset == true.end_offset \
and true.e_type != pred.e_type:
# overall results
evaluation['strict']['incorrect'] += 1
evaluation['ent_type']['incorrect'] += 1
evaluation['partial']['correct'] += 1
evaluation['exact']['correct'] += 1
# aggregated by entity type results
evaluation_agg_entities_type[true.e_type]['strict']['incorrect'] += 1
evaluation_agg_entities_type[true.e_type]['ent_type']['incorrect'] += 1
evaluation_agg_entities_type[true.e_type]['partial']['correct'] += 1
evaluation_agg_entities_type[true.e_type]['exact']['correct'] += 1
true_which_overlapped_with_pred.append(true)
found_overlap = True
break
# check for an overlap i.e. not exact boundary match, with true entities
elif find_overlap(true_range, pred_range):
true_which_overlapped_with_pred.append(true)
# Scenario V: There is an overlap (but offsets do not match
# exactly), and the entity type is the same.
# 2.1 overlaps with the same entity type
if pred.e_type == true.e_type:
# overall results
evaluation['strict']['incorrect'] += 1
evaluation['ent_type']['correct'] += 1
evaluation['partial']['partial'] += 1
evaluation['exact']['incorrect'] += 1
# aggregated by entity type results
evaluation_agg_entities_type[true.e_type]['strict']['incorrect'] += 1
evaluation_agg_entities_type[true.e_type]['ent_type']['correct'] += 1
evaluation_agg_entities_type[true.e_type]['partial']['partial'] += 1
evaluation_agg_entities_type[true.e_type]['exact']['incorrect'] += 1
found_overlap = True
break
# Scenario VI: Entities overlap, but the entity type is
# different.
else:
# overall results
evaluation['strict']['incorrect'] += 1
evaluation['ent_type']['incorrect'] += 1
evaluation['partial']['partial'] += 1
evaluation['exact']['incorrect'] += 1
# aggregated by entity type results
# Results against the true entity
# print(pred)
# print(true)
evaluation_agg_entities_type[true.e_type]['strict']['incorrect'] += 1
evaluation_agg_entities_type[true.e_type]['partial']['partial'] += 1
evaluation_agg_entities_type[true.e_type]['ent_type']['incorrect'] += 1
evaluation_agg_entities_type[true.e_type]['exact']['incorrect'] += 1
# Results against the predicted entity
# evaluation_agg_entities_type[pred.e_type]['strict']['spurious'] += 1
found_overlap = True
break
# Scenario II: Entities are spurious (i.e., over-generated).
if not found_overlap:
# Overall results
evaluation['strict']['spurious'] += 1
evaluation['ent_type']['spurious'] += 1
evaluation['partial']['spurious'] += 1
evaluation['exact']['spurious'] += 1
# print(pred)
# Aggregated by entity type results
# print('Pred : ' ,pred)
evaluation_agg_entities_type[pred.e_type]['strict']['spurious'] += 1
evaluation_agg_entities_type[pred.e_type]['partial']['spurious'] += 1
evaluation_agg_entities_type[pred.e_type]['ent_type']['spurious'] += 1
evaluation_agg_entities_type[pred.e_type]['exact']['spurious'] += 1
# NOTE: when pred.e_type is not found in tags
# or when it simply does not appear in the test set, then it is
# spurious, but it is not clear where to assign it at the tag
# level. In this case, it is applied to all target_tags
# found in this example. This will mean that the sum of the
# evaluation_agg_entities will not equal evaluation.
# for true in tags:
# print('True : ' ,true)
# evaluation_agg_entities_type[true]['strict']['spurious'] += 1
# evaluation_agg_entities_type[true]['ent_type']['spurious'] += 1
# evaluation_agg_entities_type[true]['partial']['spurious'] += 1
# evaluation_agg_entities_type[true]['exact']['spurious'] += 1
# Scenario III: Entity was missed entirely.
for true in true_named_entities:
if true in true_which_overlapped_with_pred:
continue
else:
# overall results
evaluation['strict']['missed'] += 1
evaluation['ent_type']['missed'] += 1
evaluation['partial']['missed'] += 1
evaluation['exact']['missed'] += 1
# for the agg. by e_type
evaluation_agg_entities_type[true.e_type]['strict']['missed'] += 1
evaluation_agg_entities_type[true.e_type]['ent_type']['missed'] += 1
evaluation_agg_entities_type[true.e_type]['partial']['missed'] += 1
evaluation_agg_entities_type[true.e_type]['exact']['missed'] += 1
# Compute 'possible', 'actual' according to SemEval-2013 Task 9.1 on the
# overall results, and use these to calculate precision and recall.
for eval_type in evaluation:
evaluation[eval_type] = compute_actual_possible(evaluation[eval_type])
# Compute 'possible', 'actual', and precision and recall on entity level
# results. Start by cycling through the accumulated results.
for entity_type, entity_level in evaluation_agg_entities_type.items():
# Cycle through the evaluation types for each dict containing entity
# level results.
for eval_type in entity_level:
evaluation_agg_entities_type[entity_type][eval_type] = compute_actual_possible(
entity_level[eval_type]
)
return evaluation, evaluation_agg_entities_type
def find_overlap(true_range, pred_range):
"""Find the overlap between two ranges
Find the overlap between two ranges. Return the overlapping values if
present, else return an empty set().
Examples:
>>> find_overlap((1, 2), (2, 3))
2
>>> find_overlap((1, 2), (3, 4))
set()
"""
true_set = set(true_range)
pred_set = set(pred_range)
overlaps = true_set.intersection(pred_set)
return overlaps
def compute_actual_possible(results):
"""
Takes a result dict that has been output by compute metrics.
Returns the results dict with actual, possible populated.
"""
correct = results['correct']
incorrect = results['incorrect']
partial = results['partial']
missed = results['missed']
spurious = results['spurious']
# Possible: number annotations in the gold-standard which contribute to the
# final score
possible = correct + incorrect + partial + missed
    # Actual: number of annotations produced by the NER system
    # (partial matches are system annotations too, per SemEval-2013 Task 9.1)
    actual = correct + incorrect + partial + spurious
results["actual"] = actual
results["possible"] = possible
return results
def compute_precision_recall(results, partial_or_type=False):
"""
Takes a result dict that has been output by compute metrics.
    Returns the results dict with precision and recall populated.
When the results dicts is from partial or ent_type metrics, then
partial_or_type=True to ensure the right calculation is used for
calculating precision and recall.
"""
actual = results["actual"]
possible = results["possible"]
partial = results['partial']
correct = results['correct']
if partial_or_type:
precision = (correct + 0.5 * partial) / actual if actual > 0 else 0
recall = (correct + 0.5 * partial) / possible if possible > 0 else 0
else:
precision = correct / actual if actual > 0 else 0
recall = correct / possible if possible > 0 else 0
results["precision"] = precision
results["recall"] = recall
return results
def compute_precision_recall_wrapper(results):
"""
Wraps the compute_precision_recall function and runs on a dict of results
"""
results_a = {key: compute_precision_recall(value, True) for key, value in results.items() if
key in ['partial', 'ent_type']}
results_b = {key: compute_precision_recall(value) for key, value in results.items() if
key in ['strict', 'exact']}
results = {**results_a, **results_b}
return results
def map_index2label(text, labels):
index2label = {}
start_index = 0
for i, word in enumerate(text):
end_index = start_index + len(word)
index2label[(start_index, end_index)] = labels[i]
start_index = end_index + 1
return index2label
def align_prediction(texts, labels_list, model, tag_map):
y_true = []
y_pred = []
index = 0
for text, labels in zip(texts, labels_list):
index2label_true = map_index2label(text, labels)
y_true += [value for key, value in index2label_true.items()]
model_result = model(" ".join(text))
index2label_pred = {}
for key, value in index2label_true.items():
temp = []
for item in model_result:
if item['start'] >= key[0] and item['end'] <= key[1]:
temp.append(item['entity'])
            # Fall back to 'O' when no model output overlaps this word span
            index2label_pred[(key[0], key[1])] = temp[0] if temp else 'O'
y_pred += [ tag_map[value] for key, value in index2label_pred.items()]
index += 1
if index%10 == 0:
print(index)
if index==500:
break
return [y_true], [y_pred]
def benchmark(y_true, y_pred, defined_labels_to_evaluate):
evaluator = Evaluator(y_true, y_pred, tags=defined_labels_to_evaluate)
return evaluator.evaluate()
```
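As a quick sanity check on the bookkeeping above, the `possible`/`actual` totals and the two precision variants can be reproduced on a toy result dict (the counts below are hypothetical; the formulas follow SemEval-2013 Task 9.1, in which a partial match contributes to both totals):

```python
# Hypothetical counts for one evaluation mode
counts = {'correct': 3, 'incorrect': 1, 'partial': 2, 'missed': 1, 'spurious': 1}

# Gold annotations that count toward recall
possible = counts['correct'] + counts['incorrect'] + counts['partial'] + counts['missed']
# System annotations that count toward precision
actual = counts['correct'] + counts['incorrect'] + counts['partial'] + counts['spurious']

strict_precision = counts['correct'] / actual  # exact boundary and exact type only
partial_precision = (counts['correct'] + 0.5 * counts['partial']) / actual

print(possible, actual, strict_precision, partial_precision)  # 7 7 0.428... 0.571...
```

With two partial matches, the partial-mode precision (4/7) sits above the strict precision (3/7), as expected.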
### Albert
```
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/albert-fa-zwnj-base-v2-ner")
model = AutoModelForTokenClassification.from_pretrained("HooshvareLab/albert-fa-zwnj-base-v2-ner")
model.eval()
albert_ner = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-DAT": 'O',
"B-EVE": "B-EVE",
"B-FAC": "B-ORG",
"B-LOC": "B-LOC",
"B-MON": "O",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-PRO": "O",
"B-TIM": "O",
"B-PCT": "O",
"I-DAT": "O",
"I-EVE": "I-EVE",
"I-FAC": "I-ORG",
"I-LOC": "I-LOC",
"I-MON": "O",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-PRO": "O",
"I-TIM": "O",
"I-PCT": "O",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, albert_ner, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE'])
```
### ParsBERT
```
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-ner-uncased")
model = AutoModelForTokenClassification.from_pretrained("HooshvareLab/bert-base-parsbert-ner-uncased")
model.eval()
parsbert_ner = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-date": 'O',
"B-event": "B-EVE",
"B-facility": "B-ORG",
"B-location": "B-LOC",
"B-money": "O",
"B-organization": "B-ORG",
"B-person": "B-PER",
"B-product": "O",
"B-time": "O",
"B-percent": "O",
"I-date": "O",
"I-event": "I-EVE",
"I-facility": "I-ORG",
"I-location": "I-LOC",
"I-money": "O",
"I-organization": "I-ORG",
"I-person": "I-PER",
"I-product": "O",
"I-time": "O",
"I-percent": "O",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, parsbert_ner, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE'])
```
### XLMR
```
nlp_ner = pipeline(
"ner",
model="jplu/tf-xlm-r-ner-40-lang",
tokenizer=(
'jplu/tf-xlm-r-ner-40-lang',
{"use_fast": True}),
framework="tf",
ignore_labels=[],
)
tag_map = {
"PER": 'B-PER',
"LOC": "B-LOC",
"ORG": "B-ORG",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, nlp_ner, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### Our Model: Fine-Tuning
```
from transformers import TFAutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base")
twiner_seq = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-EVE": "B-EVE",
"B-LOC": "B-LOC",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-POG": "B_POG",
"B-NAT": "B-NAT",
"I-EVE": "I-EVE",
"I-LOC": "I-LOC",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-POG": "I_POG",
"I-NAT": "I-NAT",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, twiner_seq, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE'])
y_true, y_pred = align_prediction(texts, tags, twiner_seq, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE', 'POG', 'NAT'])
tag_map = {
"B-EVE": "O",
"B-LOC": "B_LOC",
"B-ORG": "B_ORG",
"B-PER": "B_PER",
"B-POG": "O",
"B-NAT": "O",
"I-EVE": "O",
"I-LOC": "I_LOC",
"I-ORG": "I_ORG",
"I-PER": "I_PER",
"I-POG": "O",
"I-NAT": "O",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, twiner_seq, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### Our Model using MTL
```
from transformers import TFAutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-EVE": "B-EVE",
"B-LOC": "B-LOC",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-POG": "B-POG",
"B-NAT": "B-NAT",
"I-EVE": "I-EVE",
"I-LOC": "I-LOC",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-POG": "I_POG",
"I-NAT": "I-NAT",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE'])
y_true, y_pred = align_prediction(texts, tags, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE', 'POG', 'NAT'])
tag_map = {
"B-EVE": "O",
"B-LOC": "B_LOC",
"B-ORG": "B_ORG",
"B-PER": "B_PER",
"B-POG": "O",
"B-NAT": "O",
"I-EVE": "O",
"I-LOC": "I_LOC",
"I-ORG": "I_ORG",
"I-PER": "I_PER",
"I-POG": "O",
"I-NAT": "O",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### Our Model Peyma using MTL
```
from transformers import TFAutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("overfit/peyma-ner-bert-base")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/peyma-ner-bert-base")
peyma_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B_LOC": "B_LOC",
"B_ORG": "B_ORG",
"B_PER": "B_PER",
"B_MON": "B_MON",
"B_PCT": "B_PCT",
"B_DAT": "B_DAT",
"B_TIM": "B_TIM",
"I_LOC": "I_LOC",
"I_ORG": "I_ORG",
"I_PER": "I_PER",
"I_MON": "I_MON",
"I_PCT": "I_PCT",
"I_DAT": "I_DAT",
"I_TIM": "I_TIM",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, peyma_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'MON', 'PCT', 'DAT', 'TIM'])
tag_map = {
"B_LOC": "B_LOC",
"B_ORG": "B_ORG",
"B_PER": "B_PER",
"B_MON": "B_MON",
"B_PCT": "B_PCT",
"B_DAT": "B_DAT",
"B_TIM": "B_TIM",
"I_LOC": "I_LOC",
"I_ORG": "I_ORG",
"I_PER": "I_PER",
"I_MON": "I_MON",
"I_PCT": "I_PCT",
"I_DAT": "I_DAT",
"I_TIM": "I_TIM",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, peyma_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
tag_map = {
"B_LOC": "B-LOC",
"B_ORG": "B-ORG",
"B_PER": "B-PER",
"B_MON": "O",
"B_PCT": "O",
"B_DAT": "O",
"B_TIM": "O",
"I_LOC": "I-LOC",
"I_ORG": "I-ORG",
"I_PER": "I-PER",
"I_MON": "O",
"I_PCT": "O",
"I_DAT": "O",
"I_TIM": "O",
'O': 'O'
}
y_true, y_pred = align_prediction(texts, tags, peyma_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### ParsBERT Peyma
```
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model = AutoModelForTokenClassification.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model.eval()
parsbert_peyma = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B_LOC": "B_LOC",
"B_ORG": "B_ORG",
"B_PER": "B_PER",
"B_MON": "B_MON",
"B_PCT": "B_PCT",
"B_DAT": "B_DAT",
"B_TIM": "B_TIM",
"I_LOC": "I_LOC",
"I_ORG": "I_ORG",
"I_PER": "I_PER",
"I_MON": "I_MON",
"I_PCT": "I_PCT",
"I_DAT": "I_DAT",
"I_TIM": "I_TIM",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, parsbert_peyma, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'MON', 'PCT', 'DAT', 'TIM'])
tag_map = {
"B_LOC": "B_LOC",
"B_ORG": "B_ORG",
"B_PER": "B_PER",
"B_MON": "B_MON",
"B_PCT": "B_PCT",
"B_DAT": "B_DAT",
"B_TIM": "B_TIM",
"I_LOC": "I_LOC",
"I_ORG": "I_ORG",
"I_PER": "I_PER",
"I_MON": "I_MON",
"I_PCT": "I_PCT",
"I_DAT": "I_DAT",
"I_TIM": "I_TIM",
'O': 'O'
}
y_true, y_pred = align_prediction(texts_peyma, tags_peyma, parsbert_peyma, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### Test with Masked Entities
```
new_text, new_tag = [], []
for text, tweet_tag in zip(texts, tags):
if len([tag for tag in tweet_tag if tag.startswith('B')]) == 1:
new_text.append(text)
new_tag.append(tweet_tag)
len([x for x in new_tag if 'B-POG' in x])
masked_texts = []
for text, tweet_tag in zip(new_text, new_tag):
masked_text = []
for word, tag in zip(text, tweet_tag):
if tag != 'O':
masked_text.append('[MASK]')
else:
masked_text.append(word)
masked_texts.append(masked_text)
with open('masked_texts_test.txt', 'w') as file:
for text in masked_texts:
file.write('\n'.join(text))
file.write('\n\n')
!wget -q --show-progress https://raw.githubusercontent.com/overfit-ir/persian-twitter-ner/master/masked_texts.txt
human_tags = []
with open('masked_texts.txt', 'r') as file:
for line in file.readlines():
if line != '\n':
s = line.split('\t')
if len(s) == 1:
human_tags.append('O')
else:
human_tags.append(s[1].replace('\n', ''))
len([x for tweet_tag in new_tag for x in tweet_tag])
len(human_tags)
benchmark([[x for tweet_tag in new_tag for x in tweet_tag]], [human_tags], ['PER', 'LOC', 'ORG', 'NAT', 'POG'])
from transformers import TFAutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-EVE": "B-EVE",
"B-LOC": "B-LOC",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-POG": "B-POG",
"B-NAT": "B-NAT",
"I-EVE": "I-EVE",
"I-LOC": "I-LOC",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-POG": "I-POG",
"I-NAT": "I-NAT",
'O': 'O'
}
y_true, y_pred = align_prediction(masked_texts, new_tag, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'NAT', 'POG'])
masked_texts_peyma = []
for text, tweet_tag in zip(texts_peyma, tags_peyma):
masked_text = []
for word, tag in zip(text, tweet_tag):
if tag != 'O':
masked_text.append('[MASK]')
else:
masked_text.append(word)
masked_texts_peyma.append(masked_text)
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model = AutoModelForTokenClassification.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model.eval()
parsbert_peyma = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B_LOC": "B_LOC",
"B_ORG": "B_ORG",
"B_PER": "B_PER",
"B_MON": "B_MON",
"B_PCT": "B_PCT",
"B_DAT": "B_DAT",
"B_TIM": "B_TIM",
"I_LOC": "I_LOC",
"I_ORG": "I_ORG",
"I_PER": "I_PER",
"I_MON": "I_MON",
"I_PCT": "I_PCT",
"I_DAT": "I_DAT",
"I_TIM": "I_TIM",
'O': 'O'
}
y_true, y_pred = align_prediction(masked_texts_peyma, tags_peyma, parsbert_peyma, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'MON', 'PCT', 'DAT', 'TIM'])
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
```
### Test with Entity Words Only
```
new_texts = []
new_tags = []
for text, tweet_tag in zip(texts, tags):
new_text = []
new_tag = []
flag = False
for word, tag in zip(text, tweet_tag):
if tag.startswith('B') and flag == False:
flag = True
new_text.append(word)
new_tag.append(tag)
elif tag.startswith('B') and flag == True:
new_texts.append(new_text)
new_tags.append(new_tag)
flag = True
new_text = []
new_tag = []
new_text.append(word)
new_tag.append(tag)
elif tag.startswith('I'):
new_text.append(word)
new_tag.append(tag)
elif flag:
new_texts.append(new_text)
new_tags.append(new_tag)
flag = False
new_text = []
new_tag = []
new_texts[5], new_tags[5]
from transformers import TFAutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-EVE": "B-EVE",
"B-LOC": "B-LOC",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-POG": "B-POG",
"B-NAT": "B-NAT",
"I-EVE": "I-EVE",
"I-LOC": "I-LOC",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-POG": "I-POG",
"I-NAT": "I-NAT",
'O': 'O'
}
y_true, y_pred = align_prediction(new_texts, new_tags, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG', 'EVE', 'POG', 'NAT'])
```
### Coverage of tokens in test set
#### Twiner
```
testset_named_entities = []
for tweet, tweet_tags in zip(texts, tags):
for word, tag in zip(tweet, tweet_tags):
if tag != 'O':
testset_named_entities.append(word)
trainset_named_entities = []
for tweet, tweet_tags in zip(texts_train, tags_train):
for word, tag in zip(tweet, tweet_tags):
if tag != 'O':
trainset_named_entities.append(word)
diff_test_train = set(testset_named_entities) - set(trainset_named_entities)
len(set(diff_test_train))
diff_test_train
len(set(testset_named_entities))
diff_tweet = []
diff_tags = []
for tweet, tweet_tags in zip(texts, tags):
flag = False
for word, tag in zip(tweet, tweet_tags):
if tag != 'O':
if word not in diff_test_train:
flag = True
if flag == False:
diff_tweet.append(tweet)
diff_tags.append(tweet_tags)
len(diff_tweet)
len(texts)
diff_tweet
tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
tag_map = {
"B-EVE": "B-EVE",
"B-LOC": "B-LOC",
"B-ORG": "B-ORG",
"B-PER": "B-PER",
"B-POG": "B-POG",
"B-NAT": "B-NAT",
"I-EVE": "I-EVE",
"I-LOC": "I-LOC",
"I-ORG": "I-ORG",
"I-PER": "I-PER",
"I-POG": "I_POG",
"I-NAT": "I-NAT",
'O': 'O'
}
y_true, y_pred = align_prediction(diff_tweet, diff_tags, twiner_mtl, tag_map)
benchmark(y_true, y_pred, ['PER', 'LOC', 'ORG'])
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model = AutoModelForTokenClassification.from_pretrained("HooshvareLab/bert-base-parsbert-peymaner-uncased")
model.eval()
parsbert_peyma = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
```
#### Peyma
```
testset_named_entities = []
for tweet, tweet_tags in zip(texts_peyma, tags_peyma):
for word, tag in zip(tweet, tweet_tags):
if tag != 'O':
testset_named_entities.append(word)
trainset_named_entities = []
for tweet, tweet_tags in zip(texts_peyma_train, tags_peyma_train):
for word, tag in zip(tweet, tweet_tags):
if tag != 'O':
trainset_named_entities.append(word)
diff_test_train = set(testset_named_entities) - set(trainset_named_entities)
len(diff_test_train)
```
# Tutorial: Gaussian pulse initial data for a massless scalar field in spherical-like coordinates
## Authors: Leonardo Werneck and Zach Etienne
# This tutorial notebook explains how to obtain time-symmetric initial data for the problem of gravitational collapse of a massless scalar field. We will be following the approaches of [Akbarian & Choptuik (2015)](https://arxiv.org/pdf/1508.01614.pdf) and [Baumgarte (2018)](https://arxiv.org/pdf/1807.10342.pdf).
**Notebook Status**: <font color='green'><b> Validated </b></font>
**Validation Notes**: The initial data generated by the NRPy+ module corresponding to this tutorial notebook are shown to satisfy Einstein's equations as expected [in this tutorial notebook](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_ScalarField_initial_data.ipynb).
## Python module which performs the procedure described in this tutorial: [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py)
## References
* [Akbarian & Choptuik (2015)](https://arxiv.org/pdf/1508.01614.pdf) (Useful to understand the theoretical framework)
* [Baumgarte (2018)](https://arxiv.org/pdf/1807.10342.pdf) (Useful to understand the theoretical framework)
* [Baumgarte & Shapiro's Numerical Relativity](https://books.google.com.br/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC&redir_esc=y): Section 6.2.2 (Useful to understand how to solve the Hamiltonian constraint)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
1. [Step 1](#initial_data) Setting up time-symmetric initial data
1. [Step 1.a](#id_time_symmetry) Time symmetry: $\tilde{K}_{ij}$, $\tilde K$, $\tilde\beta^{i}$, and $\tilde B^{i}$
1. [Step 1.b](#id_sf_ic) The scalar field initial condition: $\tilde{\varphi}$, $\tilde{\Phi}$, $\tilde{\Pi}$
1. [Step 1.c](#id_metric) The physical metric: $\tilde{\gamma}_{ij}$
1. [Step 1.c.i](#id_conformal_metric) The conformal metric $\bar\gamma_{ij}$
1. [Step 1.c.ii](#id_hamiltonian_constraint) Solving the Hamiltonian constraint
1. [Step 1.c.ii.1](#id_tridiagonal_matrix) The tridiagonal matrix: $A$
1. [Step 1.c.ii.2](#id_tridiagonal_rhs) The right-hand side of the linear system: $\vec{s}$
1. [Step 1.c.ii.3](#id_conformal_factor) The conformal factor: $\psi$
1. [Step 1.d](#id_lapse_function) The lapse function: $\tilde{\alpha}$
1. [Step 1.e](#id_output) Outputting the initial data to file
1. [Step 2](#id_interpolation_files) Interpolating the initial data file as needed
1. [Step 3](#id_sph_to_curvilinear) Converting Spherical initial data to Curvilinear initial data
1. [Step 4](#validation) Validation of this tutorial against the [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py) module
1. [Step 5](#output_to_pdf) Output this module as $\LaTeX$-formatted PDF file
<a id='initial_data'></a>
# Step 1: Setting up time-symmetric initial data \[Back to [top](#toc)\]
$$\label{initial_data}$$
In this section we will set up time symmetric initial data for the gravitational collapse of a massless scalar field, in spherical coordinates. Our discussion will follow closely section III.A of [Akbarian & Choptuik (2015)](https://arxiv.org/pdf/1508.01614.pdf) (henceforth A&C). We will be using a *uniform* radial sampling. All initial data quantities will be written with tildes over them, meaning that, for example, $\tilde{\alpha} \equiv \alpha(0,r)$.
<a id='id_time_symmetry'></a>
## Step 1.a: Time symmetry: $\tilde{K}_{ij}$, $\tilde K$, $\tilde\beta^{i}$, and $\tilde B^{i}$ \[Back to [top](#toc)\]
$$\label{id_time_symmetry}$$
We are here considering a spherically symmetric problem, so that $f=f(t,r)$ for every function discussed in this tutorial. The demand for time-symmetric initial data then implies that
\begin{align}
\tilde K_{ij} &= 0\ ,\\
\tilde K &= 0\ ,\\
\tilde \beta^{i} &= 0\ ,\\
\tilde B^{i} &= 0\ .
\end{align}
For the scalar field, $\varphi$, it also demands
$$
\partial_{t}\varphi(0,r) = 0\ ,
$$
which we discuss below.
<a id='id_sf_ic'></a>
## Step 1.b: The scalar field initial condition: $\tilde{\varphi}$, $\tilde{\Phi}$, $\tilde{\Pi}$ \[Back to [top](#toc)\]
$$\label{id_sf_ic}$$
We will be implementing the following options for the initial profile of the scalar field
$$
\begin{aligned}
\tilde{\varphi}_{\rm I} &= \varphi_{0}\exp\left(-\frac{r^{2}}{\sigma^{2}}\right)\ ,\\
\tilde{\varphi}_{\rm II} &= \varphi_{0}r^{3}\exp\left[-\left(\frac{r-r_{0}}{\sigma}\right)^{2}\right]\ ,\\
\tilde{\varphi}_{\rm III} &= \varphi_{0}\left\{1 - \tanh\left[\left(\frac{r-r_{0}}{\sigma}\right)^{2}\right]\right\}.
\end{aligned}
$$
We introduce the two auxiliary fields
$$
\tilde\Phi\equiv\partial_{r}\tilde\varphi\quad \text{and}\quad \Pi\equiv-\frac{1}{\alpha}\left(\partial_{t}\varphi - \beta^{i}\partial_{i}\varphi\right)\ ,
$$
of which $\tilde\Phi$ will only be used as an auxiliary variable for setting the initial data, while $\Pi$ is a dynamical variable which will be evolved in time. Because we are setting time-symmetric initial data, $\partial_{t}\tilde\varphi = 0 = \tilde\beta^{i}$, and thus $\tilde\Pi=0$.
```
import os,sys,shutil
import numpy as np
import sympy as sp
import cmdline_helper as cmd
Ccodesdir = "ScalarFieldID_validation"
shutil.rmtree(Ccodesdir, ignore_errors=True)
cmd.mkdir(Ccodesdir)
###################################################################
# Part A.1: Setting up the initial condition for the scalar field #
###################################################################
# Part A.1a: set up the necessary variables: number of radial points = NR
# the radial variable rr in [0,RMAX],
# \varphi(0,r) = ID_sf (for scalar field),
# \Phi(0,r) = ID_sf_dD0 (for the radial derivative of sf) ,
# \Pi(0,r) = ID_sfM (for scalar field conjugate momentum).
RMAX = 50
NR = 30000
# Available options are: Gaussian_pulse, Gaussian_pulsev2, and Tanh_pulse
ID_Family = "Gaussian_pulsev2"
CoordSystem = "Spherical"
sinhA = RMAX
sinhW = 0.1
if CoordSystem == "Spherical":
r = np.linspace(0,RMAX,NR+1) # Set the r array
    dr = np.full(NR, r[1] - r[0]) # Uniform radial spacing
r = np.delete(r-dr[0]/2,0) # Shift the vector by -dr/2 and remove the negative entry
elif CoordSystem == "SinhSpherical":
    if sinhA is None or sinhW is None:
print("Error: SinhSpherical coordinates require initialization of both sinhA and sinhW")
sys.exit(1)
else:
x = np.linspace(0,1.0,NR+1)
dx = 1.0/(NR+1)
x = np.delete(x-dx/2,0) # Shift the vector by -dx/2 and remove the negative entry
r = sinhA * np.sinh( x/sinhW ) / np.sinh( 1.0/sinhW )
dr = sinhA * np.cosh( x/sinhW ) / np.sinh( 1.0/sinhW ) * dx
else:
print("Error: Unknown coordinate system")
sys.exit(1)
# Set the step size squared
dr2 = dr**2
# Let's begin by setting the parameters involved in the initial data
phi0,rr,rr0,sigma = sp.symbols("phi0 rr rr0 sigma",real=True)
# Now set the initial profile of the scalar field
if ID_Family == "Gaussian_pulse":
    phiID = phi0 * sp.exp( -rr**2/sigma**2 )
elif ID_Family == "Gaussian_pulsev2":
phiID = phi0 * rr**3 * sp.exp( -(rr-rr0)**2/sigma**2 )
elif ID_Family == "Tanh_pulse":
phiID = phi0 * ( 1 - sp.tanh( (rr-rr0)**2/sigma**2 ) )
else:
print("Unkown initial data family: ",ID_Family)
print("Available options are: Gaussian_pulse, Gaussian_pulsev2, and Tanh_pulse")
sys.exit(1)
# Now compute Phi := \partial_{r}phi
PhiID = sp.diff(phiID,rr)
# Now set numpy functions for phi and Phi
phi = sp.lambdify((phi0,rr,rr0,sigma),phiID)
Phi = sp.lambdify((phi0,rr,rr0,sigma),PhiID)
# ## Part A.1c: populating the varphi(0,r) array
phi0 = 0.1
r0 = 0
sigma = 1
ID_sf = phi(phi0,r,r0,sigma)
```
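As a standalone spot check (not part of the original module; it mirrors the `Gaussian_pulsev2` cell above with `phi0 = 0.1`, `r0 = 0`, `sigma = 1`), the lambdified `Phi` should agree with a centered finite-difference derivative of $\tilde\varphi$ to second order in the step size:

```python
import numpy as np
import sympy as sp

# Rebuild the Gaussian_pulsev2 profile and its radial derivative symbolically
phi0_s, rr, rr0, sigma_s = sp.symbols("phi0 rr rr0 sigma", real=True)
phiID = phi0_s * rr**3 * sp.exp(-(rr - rr0)**2 / sigma_s**2)
PhiID = sp.diff(phiID, rr)
phi = sp.lambdify((phi0_s, rr, rr0, sigma_s), phiID)
Phi = sp.lambdify((phi0_s, rr, rr0, sigma_s), PhiID)

# Compare Phi against a centered finite difference of phi
r = np.linspace(0.1, 5.0, 200)
h = 1e-5
fd = (phi(0.1, r + h, 0, 1) - phi(0.1, r - h, 0, 1)) / (2 * h)
err = np.max(np.abs(fd - Phi(0.1, r, 0, 1)))
print(err)  # second-order accurate in h, so this should be tiny
```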
<a id='id_metric'></a>
## Step 1.c: The physical metric: $\tilde{\gamma}_{ij}$ \[Back to [top](#toc)\]
$$\label{id_metric}$$
<a id='id_conformal_metric'></a>
### Step 1.c.i: The conformal metric $\bar\gamma_{ij}$ \[Back to [top](#toc)\]
$$\label{id_conformal_metric}$$
To set up the physical metric initial data, $\tilde\gamma_{ij}$, we will start by considering the conformal transformation
$$
\gamma_{ij} = e^{4\phi}\bar\gamma_{ij}\ ,
$$
where $\bar\gamma_{ij}$ is the conformal metric and $e^{\phi}$ is the conformal factor. We then fix the initial value of $\bar\gamma_{ij}$ according to eqs. (32) and (43) of [A&C](https://arxiv.org/pdf/1508.01614.pdf)
$$
\bar\gamma_{ij} = \hat\gamma_{ij}\ ,
$$
where $\hat\gamma_{ij}$ is the *reference metric*, which is the flat metric in spherical symmetry
$$
\hat\gamma_{ij}
=
\begin{pmatrix}
1 & 0 & 0\\
0 & r^{2} & 0\\
0 & 0 & r^{2}\sin^{2}\theta
\end{pmatrix}\ .
$$
To determine the physical metric, we must then determine the conformal factor $e^{\phi}$. This is done by solving the Hamiltonian constraint (cf. eq. (12) of [Baumgarte](https://arxiv.org/pdf/1807.10342.pdf))
$$
\hat\gamma^{ij}\hat D_{i}\hat D_{j}\psi = -2\pi\psi^{5}\rho\ ,
$$
where $\psi\equiv e^{\tilde\phi}$. For a massless scalar field, we know that
$$
T^{\mu\nu} = \partial^{\mu}\varphi\partial^{\nu}\varphi - \frac{1}{2}g^{\mu\nu}\left(\partial^{\lambda}\varphi\partial_{\lambda}\varphi\right)\ .
$$
where $g^{\mu\nu}$ is the inverse of the ADM 4-metric given by eq. (2.119) of [Baumgarte & Shapiro's Numerical Relativity](https://books.google.com.br/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC&redir_esc=y),
$$
g^{\mu\nu}=\begin{pmatrix}
-\alpha^{-2} & \alpha^{-2}\beta^{i}\\
\alpha^{-2}\beta^{j} & \gamma^{ij} - \alpha^{-2}\beta^{i}\beta^{j}
\end{pmatrix}\ .
$$
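This block-matrix inverse is easy to verify with SymPy. The sketch below (an aside, not part of the original module) restricts to a diagonal spatial metric with a single radial shift component, purely to keep the check small:

```python
import sympy as sp

# 4-metric g_{mu nu} with lapse alpha, radial shift beta^r = beta,
# and diagonal spatial metric diag(grr, gthth, gphph); note g_{tr} = grr*beta.
alpha, beta, grr, gthth, gphph = sp.symbols("alpha beta grr gthth gphph", positive=True)
g = sp.Matrix([
    [-alpha**2 + grr*beta**2, grr*beta, 0, 0],
    [grr*beta,                grr,      0, 0],
    [0, 0, gthth, 0],
    [0, 0, 0, gphph],
])
# Candidate inverse from eq. (2.119): g^{tt} = -1/alpha^2, g^{tr} = beta/alpha^2,
# g^{rr} = gamma^{rr} - beta^2/alpha^2.
ginv = sp.Matrix([
    [-1/alpha**2,   beta/alpha**2,            0, 0],
    [beta/alpha**2, 1/grr - beta**2/alpha**2, 0, 0],
    [0, 0, 1/gthth, 0],
    [0, 0, 0, 1/gphph],
])
print(sp.simplify(g*ginv - sp.eye(4)))  # zero matrix
```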
We know that (see Step 2 in [this tutorial module](Tutorial-ADM_Setting_up_massless_scalar_field_Tmunu.ipynb) for the details)
\begin{align}
\partial^{t}\varphi &= \alpha^{-1}\Pi\ ,\\
\partial^{\lambda}\varphi\partial_{\lambda}\varphi &= -\Pi^{2} + \gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi\ .
\end{align}
The $tt$-component of the energy-momentum tensor at the initial time is then given by (we omit the tildes below to avoid cluttering the equations, but keep in mind that all quantities are evaluated at $t=0$)
\begin{align}
T^{tt} &= \left(\partial^{t}\varphi\right)^{2} - \frac{1}{2} g^{tt}\left(\partial^{\lambda}\varphi\partial_{\lambda}\varphi\right)\nonumber\\
&= \left(\frac{\Pi}{\alpha}\right)^{2} - \frac{1}{2}\left(-\frac{1}{\alpha^{2}}\right)\left(-\Pi^{2} + \gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi\right)\nonumber\\
&= \frac{\Pi^{2}}{2\alpha^{2}} + \frac{\gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi}{2\alpha^{2}}\nonumber\\
&= \frac{\gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi}{2\alpha^{2}}\quad\left(\text{since }\tilde\Pi = 0\text{ for time-symmetric data}\right)\nonumber\\
&= \frac{e^{-4\phi}\bar\gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi}{2\alpha^{2}}\nonumber\\
&= \frac{e^{-4\phi}\hat\gamma^{ij}\partial_{i}\varphi\partial_{j}\varphi}{2\alpha^{2}}\nonumber\\
&= \frac{e^{-4\phi}\hat\gamma^{rr}\partial_{r}\varphi\partial_{r}\varphi}{2\alpha^{2}}\nonumber\\
&= \frac{e^{-4\phi}\Phi^{2}}{2\alpha^{2}}\ .
\end{align}
By remembering the definition of the normal vector $n_{\mu} = (-\alpha,0,0,0)$ (eq. (2.117) of [B&S](https://books.google.com.br/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC&redir_esc=y)), we can then evaluate the energy density $\rho$ given by eq. (24) of [A&C](https://arxiv.org/pdf/1508.01614.pdf)
$$
\tilde\rho = \tilde n_{\mu}\tilde n_{\nu}\tilde T^{\mu\nu} = \frac{e^{-4\tilde\phi}}{2}\tilde\Phi^{2}\ .
$$
Plugging this result in the Hamiltonian constraint, remembering that $\psi\equiv e^{\tilde\phi}$, we have
$$
\partial^{2}_{r}\psi + \frac{2}{r}\partial_{r}\psi + \pi\psi\Phi^{2} = 0\ .
$$
This is a linear elliptic equation, which we will solve using the procedure described in detail in section 6.2.2 of [B&S](https://books.google.com.br/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC&redir_esc=y).
<a id='id_hamiltonian_constraint'></a>
### Step 1.c.ii: Solving the Hamiltonian constraint \[Back to [top](#toc)\]
$$\label{id_hamiltonian_constraint}$$
We will discretize the Hamiltonian constraint using [second-order accurate finite differences](https://en.wikipedia.org/wiki/Finite_difference_coefficient). We get
$$
\frac{\psi_{i+1} - 2\psi_{i} + \psi_{i-1}}{\Delta r^{2}} + \frac{2}{r_{i}}\left(\frac{\psi_{i+1}-\psi_{i-1}}{2\Delta r}\right) + \pi\psi_{i}\Phi^{2}_{i} = 0\ ,
$$
or, by multiplying the entire equation by $\Delta r^{2}$ and then grouping the coefficients of each $\psi_{j}$:
$$
\boxed{\left(1-\frac{\Delta r}{r_{i}}\right)\psi_{i-1}+\left(\pi\Delta r^{2}\Phi_{i}^{2}-2\right)\psi_{i} + \left(1+\frac{\Delta r}{r_{i}}\right)\psi_{i+1} = 0}\ .
$$
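A convenient spot check on the boxed stencil (an aside, with $\Phi_{i}=0$ and arbitrary $C$, $r_{i}$, $\Delta r$): since the centered differences of $1/r$ in the two derivative terms cancel exactly, the vacuum Laplace solution $\psi = 1 + C/r$ satisfies the stencil to machine precision, not just to second order:

```python
import numpy as np

C, r_i, dr = 3.0, 2.0, 0.1          # arbitrary test values (dr < r_i)
psi = lambda r: 1.0 + C / r          # exact solution of psi'' + (2/r) psi' = 0

# Boxed stencil with Phi_i = 0
residual = ((1 - dr / r_i) * psi(r_i - dr)
            + (np.pi * dr**2 * 0.0**2 - 2) * psi(r_i)
            + (1 + dr / r_i) * psi(r_i + dr))
print(residual)  # zero to machine precision
```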
We choose to set up a grid that is cell-centered, with:
$$
r_{i} = \left(i-\frac{1}{2}\right)\Delta r\ ,
$$
so that $r_{0} = - \frac{\Delta r}{2}$. This is a two-point boundary value problem, which we solve using the same strategy as [A&C](https://arxiv.org/pdf/1508.01614.pdf), described in eqs. (48)-(50):
\begin{align}
\left.\partial_{r}\psi\right|_{r=0} &= 0\ ,\\
\lim_{r\to\infty}\psi &= 1\ .
\end{align}
In terms of our grid structure, the first boundary condition (regularity at the origin) is written to second-order in $\Delta r$ as:
$$
\left.\partial_{r}\psi\right|_{r=0} = \frac{\psi_{1} - \psi_{0}}{\Delta r} = 0 \Rightarrow \psi_{0} = \psi_{1}\ .
$$
The second boundary condition (asymptotic flatness) can be interpreted as
$$
\psi_{N} = 1 + \frac{C}{r_{N}}\ (r_{N}\gg1)\ ,
$$
which then implies
$$
\partial_{r}\psi_{N} = -\frac{C}{r_{N}^{2}} = -\frac{1}{r_{N}}\left(\frac{C}{r_{N}}\right) = -\frac{1}{r_{N}}\left(\psi_{N} - 1\right) = \frac{1-\psi_{N}}{r_{N}}\ ,
$$
which can then be written as
$$
\frac{\psi_{N+1}-\psi_{N-1}}{2\Delta r} = \frac{1-\psi_{N}}{r_{N}}\Rightarrow \psi_{N+1} = \psi_{N-1} - \frac{2\Delta r}{r_{N}}\psi_{N} + \frac{2\Delta r}{r_{N}}\ .
$$
Substituting the boundary conditions at the boxed equations above, we end up with
\begin{align}
\left(\pi\Delta r^{2}\Phi^{2}_{1} - 1 - \frac{\Delta r}{r_{1}}\right)\psi_{1} + \left(1+\frac{\Delta r}{r_{1}}\right)\psi_{2} = 0\quad &(i=1)\ ,\\
\left(1-\frac{\Delta r}{r_{i}}\right)\psi_{i-1}+\left(\pi\Delta r^{2}\Phi_{i}^{2}-2\right)\psi_{i} + \left(1+\frac{\Delta r}{r_{i}}\right)\psi_{i+1} = 0\quad &(1<i<N)\ ,\\
2\psi_{N-1} + \left[\pi\Delta r^{2}\Phi^{2}_{N} - 2 - \frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)\right]\psi_{N} = - \frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)\quad &(i=N)\ .
\end{align}
This results in the following tridiagonal system of linear equations
$$
A \cdot \vec{\psi} = \vec{s}\Rightarrow \vec{\psi} = A^{-1}\cdot\vec{s}\ ,
$$
where
$$
A=\begin{pmatrix}
\left(\pi\Delta r^{2}\Phi^{2}_{1} - 1 - \frac{\Delta r}{r_{1}}\right) & \left(1+\frac{\Delta r}{r_{1}}\right) & 0 & 0 & 0 & 0 & 0\\
\left(1-\frac{\Delta r}{r_{2}}\right) & \left(\pi\Delta r^{2}\Phi_{2}^{2}-2\right) & \left(1+\frac{\Delta r}{r_{2}}\right) & 0 & 0 & 0 & 0\\
0 & \ddots & \ddots & \ddots & 0 & 0 & 0\\
0 & 0 & \left(1-\frac{\Delta r}{r_{i}}\right) & \left(\pi\Delta r^{2}\Phi_{i}^{2}-2\right) & \left(1+\frac{\Delta r}{r_{i}}\right) & 0 & 0\\
0 & 0 & 0 & \ddots & \ddots & \ddots & 0\\
0 & 0 & 0 & 0 & \left(1-\frac{\Delta r}{r_{N-1}}\right) & \left(\pi\Delta r^{2}\Phi_{N-1}^{2}-2\right) & \left(1+\frac{\Delta r}{r_{N-1}}\right)\\
0 & 0 & 0 & 0 & 0 & 2 & \left[\pi\Delta r^{2}\Phi^{2}_{N} - 2 - \frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)\right]
\end{pmatrix}\ ,
$$
$$
\vec{\psi} =
\begin{pmatrix}
\psi_{1}\\
\psi_{2}\\
\vdots\\
\psi_{i}\\
\vdots\\
\psi_{N-1}\\
\psi_{N}
\end{pmatrix}\ ,
$$
and
$$
\vec{s} =
\begin{pmatrix}
0\\
0\\
\vdots\\
0\\
\vdots\\
0\\
-\frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)
\end{pmatrix}
$$
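As an aside, a tridiagonal system of this form can be solved directly in $O(N)$ operations with the Thomas algorithm; in this notebook we rely on SciPy's sparse solver instead, but a minimal sketch may clarify what such a solver does under the hood (the array names here are illustrative, not taken from the implementation below):

```python
import numpy as np

def thomas_solve(lower, main, upper, s):
    """Solve a tridiagonal system in O(N).
    lower[i] multiplies psi[i-1] (lower[0] is unused),
    upper[i] multiplies psi[i+1] (upper[-1] is unused)."""
    n = len(main)
    cp = np.zeros(n)   # modified upper-diagonal coefficients
    dp = np.zeros(n)   # modified right-hand side
    cp[0] = upper[0] / main[0]
    dp[0] = s[0] / main[0]
    # Forward sweep: eliminate the lower diagonal
    for i in range(1, n):
        denom = main[i] - lower[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = upper[i] / denom
        dp[i] = (s[i] - lower[i] * dp[i - 1]) / denom
    # Back substitution
    psi = np.zeros(n)
    psi[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        psi[i] = dp[i] - cp[i] * psi[i + 1]
    return psi

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [1,2,3]  ->  x = [0.5, 0.0, 1.5]
x = thomas_solve(np.array([0., 1., 1.]), np.array([2., 2., 2.]),
                 np.array([1., 1., 0.]), np.array([1., 2., 3.]))
```

The forward sweep assumes no zero pivots arise; for general matrices, a pivoting sparse solver such as SciPy's `spsolve` (used below) is the safer choice.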
<a id='id_tridiagonal_matrix'></a>
#### Step 1.c.ii.1: The tridiagonal matrix: $A$ \[Back to [top](#toc)\]
$$\label{id_tridiagonal_matrix}$$
We now start solving the tridiagonal linear system, beginning with the implementation of the tridiagonal matrix $A$ defined above. We break it down by implementing each diagonal as a separate array, starting with the main diagonal:
$$
{\rm diag}_{\rm main}
=
\begin{pmatrix}
\left(\pi\Delta r^{2}\Phi^{2}_{1} - 1 - \frac{\Delta r}{r_{1}}\right)\\
\left(\pi\Delta r^{2}\Phi_{2}^{2}-2\right)\\
\vdots\\
\left(\pi\Delta r^{2}\Phi_{i}^{2}-2\right)\\
\vdots\\
\left(\pi\Delta r^{2}\Phi_{N-1}^{2}-2\right)\\
\left[\pi\Delta r^{2}\Phi^{2}_{N} - 2 - \frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)\right]\\
\end{pmatrix}
=
\begin{pmatrix}
\left(\pi\Delta r^{2}\Phi^{2}_{1} - 2\right)\\
\left(\pi\Delta r^{2}\Phi_{2}^{2} - 2\right)\\
\vdots\\
\left(\pi\Delta r^{2}\Phi_{i}^{2} - 2\right)\\
\vdots\\
\left(\pi\Delta r^{2}\Phi_{N-1}^{2}-2\right)\\
\left(\pi\Delta r^{2}\Phi^{2}_{N} - 2\right)\\
\end{pmatrix}
+
\left.\begin{pmatrix}
1 - \frac{\Delta r}{r_{1}}\\
0\\
\vdots\\
0\\
\vdots\\
0\\
- \frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)
\end{pmatrix}\quad \right\}\text{N elements}
$$
```
# Set the main diagonal
main_diag = np.pi * dr2 * Phi(phi0,r,r0,sigma)**2 - 2
# Update the first element of the main diagonal
main_diag[0] += 1 - dr[0]/r[0]
# Update the last element of the main diagonal
main_diag[NR-1] += - (2 * dr[NR-1] / r[NR-1])*(1 + dr[NR-1] / r[NR-1])
```
Then we look at the upper diagonal of the A matrix:
$$
{\rm diag}_{\rm upper}
=
\left.\begin{pmatrix}
1+\frac{\Delta r}{r_{1}}\\
1+\frac{\Delta r}{r_{2}}\\
\vdots\\
1+\frac{\Delta r}{r_{i}}\\
\vdots\\
1+\frac{\Delta r}{r_{N-2}}\\
1+\frac{\Delta r}{r_{N-1}}
\end{pmatrix}\quad\right\}\text{N-1 elements}
$$
```
# Set the upper diagonal, ignoring the last point in the r array
upper_diag = np.zeros(NR)
upper_diag[1:] = 1 + dr[:-1]/r[:-1]
```
Finally, we look at the lower diagonal of the A matrix:
$$
{\rm diag}_{\rm lower}
=
\left.\begin{pmatrix}
1-\frac{\Delta r}{r_{2}}\\
1-\frac{\Delta r}{r_{3}}\\
\vdots\\
1-\frac{\Delta r}{r_{i+1}}\\
\vdots\\
1-\frac{\Delta r}{r_{N-1}}\\
2
\end{pmatrix}\quad\right\}\text{N-1 elements}
$$
```
# Set the lower diagonal, start counting the r array at the second element
lower_diag = np.zeros(NR)
lower_diag[:-1] = 1 - dr[1:]/r[1:]
# Change the last term in the lower diagonal to its correct value
lower_diag[NR-2] = 2
```
We now construct the tridiagonal matrix by adding the three diagonals, shifting the upper and lower diagonals to the right and left, respectively. Because $A$ is a sparse matrix, we will also use SciPy to solve the linear system faster.
```
!pip install scipy >/dev/null
# Set the sparse matrix A by adding up the three diagonals
from scipy.sparse import spdiags
from scipy.sparse import csc_matrix
A = spdiags([main_diag,upper_diag,lower_diag],[0,1,-1],NR,NR)
# Then compress the sparse matrix A column wise, so that SciPy can invert it later
A = csc_matrix(A)
```
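The placement convention of `spdiags` can be confusing: for diagonal $k$, element $j$ of the data row is placed in *column* $j$, which is why `upper_diag[0]` and `lower_diag[NR-1]` are left unused above. A small sanity check on a $4\times4$ example (illustrative values, unrelated to the physics):

```python
import numpy as np
from scipy.sparse import spdiags

main  = np.array([10., 20., 30., 40.])
upper = np.array([0., 1., 2., 3.])   # upper[0] is ignored for diagonal +1
lower = np.array([4., 5., 6., 0.])   # lower[-1] is ignored for diagonal -1

A = spdiags([main, upper, lower], [0, 1, -1], 4, 4).toarray()
# Expected dense matrix:
# [[10.  1.  0.  0.]
#  [ 4. 20.  2.  0.]
#  [ 0.  5. 30.  3.]
#  [ 0.  0.  6. 40.]]
```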
<a id='id_tridiagonal_rhs'></a>
#### Step 1.c.ii.2 The right-hand side of the linear system: $\vec{s}$ \[Back to [top](#toc)\]
$$\label{id_tridiagonal_rhs}$$
We now focus our attention to the implementation of the $\vec{s}$ vector:
$$
\vec{s} =
\begin{pmatrix}
0\\
0\\
\vdots\\
0\\
\vdots\\
0\\
-\frac{2\Delta r}{r_{N}}\left(1+\frac{\Delta r}{r_{N}}\right)
\end{pmatrix}
$$
```
# Set up the right-hand side of the linear system: s
s = np.zeros(NR)
# Update the last entry of the vector s
s[NR-1] = - (2 * dr[NR-1] / r[NR-1])*(1 + dr[NR-1] / r[NR-1])
# Compress the vector s column-wise
s = csc_matrix(s)
```
<a id='id_conformal_factor'></a>
#### Step 1.c.ii.3 The conformal factor: $\psi$ \[Back to [top](#toc)\]
$$\label{id_conformal_factor}$$
We now use scipy to solve the sparse linear system of equations and determine the conformal factor $\psi$.
```
# Solve the sparse linear system using scipy
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.linalg.spsolve.html
from scipy.sparse.linalg import spsolve
psi = spsolve(A, s.T)
```
We then show useful plots of the conformal factor $\psi$ and of the *evolved conformal factors*
\begin{align}
\phi &= \log\psi\ ,\\
W &= \psi^{-2}\ ,\\
\chi &= \psi^{-4}\ .
\end{align}
```
import matplotlib.pyplot as plt
# Compute phi
phi = np.log(psi)
# Compute W
W = psi**(-2)
# Compute chi
chi = psi**(-4)
f = plt.figure(figsize=(12,8),dpi=100)
ax = f.add_subplot(221)
ax.set_title(r"Conformal factor $\psi(0,r)$")
ax.set_ylabel(r"$\psi(0,r)$")
ax.plot(r,psi,'k-')
ax.grid()
ax2 = f.add_subplot(222)
ax2.set_title(r"Evolved conformal factor $\phi(0,r)$")
ax2.set_ylabel(r"$\phi(0,r)$")
ax2.plot(r,phi,'r-')
ax2.grid()
ax3 = f.add_subplot(223)
ax3.set_title(r"Evolved conformal factor $W(0,r)$")
ax3.set_xlabel(r"$r$")
ax3.set_ylabel(r"$W(0,r)$")
ax3.plot(r,W,'b-')
ax3.grid()
ax4 = f.add_subplot(224)
ax4.set_title(r"Evolved conformal factor $\chi(0,r)$")
ax4.set_xlabel(r"$r$")
ax4.set_ylabel(r"$\chi(0,r)$")
ax4.plot(r,chi,'c-')
ax4.grid()
outfile = os.path.join(Ccodesdir,"cfs_scalarfield_id.png")
plt.savefig(outfile)
plt.close(f)
# Display the figure
from IPython.display import Image
Image(outfile)
```
<a id='id_lapse_function'></a>
## Step 1.d The lapse function: $\tilde\alpha$ \[Back to [top](#toc)\]
$$\label{id_lapse_function}$$
There are two common initial conditions for $\tilde\alpha$. The first one is eq. (44) of [A&C](https://arxiv.org/pdf/1508.01614.pdf), namely setting the lapse to unity
$$
\tilde\alpha = 1\ .
$$
```
# Set the unity lapse initial condition
alpha_unity = np.ones(NR)
```
The second one is discussed in the last paragraph of section II.B in [Baumgarte](https://arxiv.org/pdf/1807.10342.pdf), which is to set the "pre-collapsed lapse"
$$
\tilde\alpha = \psi^{-2}\ .
$$
```
# Set the "pre-collapsed lapse" initial condition
alpha_precollapsed = psi**(-2)
```
<a id='id_output'></a>
## Step 1.e Outputting the initial data to file \[Back to [top](#toc)\]
$$\label{id_output}$$
```
# Check to see which version of Python is being used
# For a machine running the final release of Python 3.7.1,
# sys.version_info should return the tuple [3,7,1,'final',0]
if sys.version_info[0] == 3:
    np.savetxt(os.path.join(Ccodesdir,"outputSFID_unity_lapse.txt"), list(zip( r, ID_sf, psi**4, alpha_unity )),
               fmt="%.15e")
    np.savetxt(os.path.join(Ccodesdir,"outputSFID_precollapsed_lapse.txt"), list(zip( r, ID_sf, psi**4, alpha_precollapsed )),
               fmt="%.15e")
elif sys.version_info[0] == 2:
    np.savetxt(os.path.join(Ccodesdir,"outputSFID_unity_lapse.txt"), zip( r, ID_sf, psi**4, alpha_unity ),
               fmt="%.15e")
    np.savetxt(os.path.join(Ccodesdir,"outputSFID_precollapsed_lapse.txt"), zip( r, ID_sf, psi**4, alpha_precollapsed ),
               fmt="%.15e")
```
<a id='id_interpolation_files'></a>
# Step 2: Interpolating the initial data file as needed \[Back to [top](#toc)\]
$$\label{id_interpolation_files}$$
In order to use the initial data file properly, we must tell the program how to interpolate the values we just computed to the values of $r$ in our numerical grid. We do this by creating two C functions: one that interpolates the ADM quantities, $\left\{\gamma_{ij},K_{ij},\alpha,\beta^{i},B^{i}\right\}$, and one that interpolates the scalar field quantities, $\left\{\varphi,\Pi\right\}$. The two files written below use the scalar_field_interpolate_1D( ) function, which is defined in the [ScalarField/ScalarField_interp.h](../edit/ScalarField/ScalarField_interp.h) file. This function performs a Lagrange polynomial interpolation between the initial data file and the numerical grid used during the simulation.
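The actual interpolator is implemented in C inside `ScalarField/ScalarField_interp.h`; the idea behind Lagrange polynomial interpolation on a stencil can nevertheless be sketched in a few lines of Python (a simplified illustration, not the production code):

```python
import numpy as np

def lagrange_interp(r_stencil, f_stencil, r_star):
    """Evaluate the Lagrange interpolating polynomial through the points
    (r_stencil[i], f_stencil[i]) at the point r_star."""
    result = 0.0
    n = len(r_stencil)
    for i in range(n):
        basis = 1.0
        for j in range(n):
            if j != i:
                basis *= (r_star - r_stencil[j]) / (r_stencil[i] - r_stencil[j])
        result += f_stencil[i] * basis
    return result

# A 3-point stencil reproduces any quadratic exactly, e.g. f(r) = r**2
rs = np.array([0.0, 1.0, 2.0])
val = lagrange_interp(rs, rs**2, 1.5)   # -> 2.25
```

An `n`-point stencil is exact for polynomials up to degree `n-1`, which is why the `interp_stencil_size` parameter controls the interpolation order.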
```
with open(os.path.join(Ccodesdir,"ID_scalar_field_ADM_quantities-validation.h"), "w") as file:
    file.write("""
// This function takes as input either (x,y,z) or (r,th,ph) and outputs
// all ADM quantities in the Cartesian or Spherical basis, respectively.
void ID_scalar_field_ADM_quantities(
const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL sf_star,psi4_star,alpha_star;
scalar_field_interpolate_1D(r,
other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_arr,
other_inputs.sf_arr,
other_inputs.psi4_arr,
other_inputs.alpha_arr,
&sf_star,&psi4_star,&alpha_star);
// Update alpha
*alpha = alpha_star;
// gamma_{rr} = psi^4
*gammaDD00 = psi4_star;
// gamma_{thth} = psi^4 r^2
*gammaDD11 = psi4_star*r*r;
// gamma_{phph} = psi^4 r^2 sin^2(th)
*gammaDD22 = psi4_star*r*r*sin(th)*sin(th);
// All other quantities ARE ZERO:
*gammaDD01 = 0.0; *gammaDD02 = 0.0;
/**/ *gammaDD12 = 0.0;
*KDD00 = 0.0; *KDD01 = 0.0; *KDD02 = 0.0;
/**/ *KDD11 = 0.0; *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = 0.0; *betaU1 = 0.0; *betaU2 = 0.0;
*BU0 = 0.0; *BU1 = 0.0; *BU2 = 0.0;
}\n""")
with open(os.path.join(Ccodesdir,"ID_scalar_field_spherical-validation.h"), "w") as file:
    file.write("""
void ID_scalarfield_spherical(
const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *sf, REAL *sfM) {
const REAL r = xyz_or_rthph[0];
const REAL th = xyz_or_rthph[1];
const REAL ph = xyz_or_rthph[2];
REAL sf_star,psi4_star,alpha_star;
scalar_field_interpolate_1D(r,
other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_arr,
other_inputs.sf_arr,
other_inputs.psi4_arr,
other_inputs.alpha_arr,
&sf_star,&psi4_star,&alpha_star);
// Update varphi
*sf = sf_star;
// Update Pi
*sfM = 0;
}\n""")
```
<a id='id_sph_to_curvilinear'></a>
# Step 3: Converting Spherical initial data to Curvilinear initial data \[Back to [top](#toc)\]
$$\label{id_sph_to_curvilinear}$$
In this tutorial module we have explained how to obtain spherically symmetric, time-symmetric initial data for the collapse of a massless scalar field in Spherical coordinates (see [Step 1](#initial_data)). We have also explained how to interpolate the initial data file to the numerical grid we will use during the simulation (see [Step 2](#id_interpolation_files)).
NRPy+ is capable of generating the BSSN evolution equations in many different Curvilinear coordinates (for example SinhSpherical coordinates, which are of particular interest for this problem). Therefore, it is essential that we convert the Spherical initial data generated here to any Curvilinear system supported by NRPy+.
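For concreteness, SinhSpherical coordinates map a compactified coordinate ${\rm xx0}\in[0,1]$ to the radial coordinate via $r({\rm xx0}) = {\rm AMPL}\,\sinh({\rm xx0}/{\rm SINHW})/\sinh(1/{\rm SINHW})$, clustering grid points near the origin. A quick illustration (the parameter values here are examples only; in this notebook the analogous parameters are `sinhA` and `sinhW`):

```python
import numpy as np

# Example values; AMPL sets the outer boundary, SINHW the clustering strength
AMPL, SINHW = 300.0, 0.125

def r_of_xx0(xx0):
    """SinhSpherical radial map: grid points cluster near r = 0."""
    return AMPL * np.sinh(xx0 / SINHW) / np.sinh(1.0 / SINHW)

xx0 = np.linspace(0.0, 1.0, 5)
r = r_of_xx0(xx0)
# r[0] == 0 and r[-1] == AMPL, with spacing that grows rapidly with xx0
```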
We start by calling the reference_metric() function within the [reference_metric.py](../edit/reference_metric.py) NRPy+ module. This will set up a variety of useful quantities for us.
```
import reference_metric as rfm
import NRPy_param_funcs as par
# Set the Curvilinear reference system to SinhSpherical
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
# Call the reference_metric() function
rfm.reference_metric()
```
The code below then interpolates the values from the Spherical grid $\left\{r,\theta,\phi\right\}$ onto the Curvilinear grid $\left\{{\rm xx0,xx1,xx2}\right\}$.
```
from outputC import outputC
import sympy as sp
sfSphorCart, sfMSphorCart = sp.symbols("sfSphorCart sfMSphorCart")
pointer_to_ID_inputs=False
r_th_ph_or_Cart_xyz_oID_xx = []
CoordSystem_in = "Spherical"
if CoordSystem_in == "Spherical":
    r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
else:
    print("Error: Can only convert scalar field Spherical initial data to BSSN Curvilinear coords.")
    sys.exit(1)
with open(os.path.join(Ccodesdir,"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h"), "w") as file:
    file.write("void ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2(const paramstruct *restrict params,const REAL xx0xx1xx2[3],")
    if pointer_to_ID_inputs == True:
        file.write("ID_inputs *other_inputs,")
    else:
        file.write("ID_inputs other_inputs,")
    file.write("""
REAL *restrict sf, REAL *restrict sfM ) {
#include \"set_Cparameters.h\"
REAL sfSphorCart,sfMSphorCart;
const REAL xx0 = xx0xx1xx2[0];
const REAL xx1 = xx0xx1xx2[1];
const REAL xx2 = xx0xx1xx2[2];
REAL xyz_or_rthph[3];\n""")
outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False"
outputC(r_th_ph_or_Cart_xyz_oID_xx[0:3], ["xyz_or_rthph[0]", "xyz_or_rthph[1]", "xyz_or_rthph[2]"],
os.path.join(Ccodesdir,"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h"), outCparams + ",CSE_enable=False")
with open(os.path.join(Ccodesdir,"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h"), "a") as file:
    file.write("""ID_scalarfield_spherical(xyz_or_rthph, other_inputs,
&sfSphorCart, &sfMSphorCart);
// Next compute all rescaled BSSN curvilinear quantities:\n""")
outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False"
outputC([sfSphorCart, sfMSphorCart], ["*sf", "*sfM"],
os.path.join(Ccodesdir,"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h"), params=outCparams)
with open(os.path.join(Ccodesdir,"ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h"), "a") as file:
    file.write("}\n")
```
Finally, we create the driver function which puts everything together using OpenMP.
```
import loop as lp
# Driver
with open(os.path.join(Ccodesdir,"ID_scalarfield-validation.h"), "w") as file:
    file.write("""void ID_scalarfield(const paramstruct *restrict params,REAL *restrict xx[3],
ID_inputs other_inputs,REAL *restrict in_gfs) {
#include \"set_Cparameters.h\"\n""")
    file.write(lp.loop(["i2", "i1", "i0"], ["0", "0", "0"],
                       ["Nxx_plus_2NGHOSTS2", "Nxx_plus_2NGHOSTS1", "Nxx_plus_2NGHOSTS0"],
                       ["1", "1", "1"], ["#pragma omp parallel for",
                                         " const REAL xx2 = xx[2][i2];",
                                         " const REAL xx1 = xx[1][i1];"], "",
"""const REAL xx0 = xx[0][i0];
const int idx = IDX3S(i0,i1,i2);
const REAL xx0xx1xx2[3] = {xx0,xx1,xx2};
ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2(params,xx0xx1xx2,other_inputs,
&in_gfs[IDX4ptS(SFGF,idx)],&in_gfs[IDX4ptS(SFMGF,idx)]);\n}\n"""))
```
<a id='validation'></a>
# Step 4: Validation of this tutorial against the [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py) module \[Back to [top](#toc)\]
$$\label{validation}$$
First we load the [ScalarField/ScalarField_InitialData.py](../edit/ScalarField/ScalarField_InitialData.py) module and compute everything by using the scalarfield_initial_data( ) function, which should do exactly the same as we have done in this tutorial.
```
# Import the ScalarField.ScalarField_InitialData NRPy module
import reference_metric as rfm
import NRPy_param_funcs as par
import ScalarField.ScalarField_InitialData as sfid
# Output the unity lapse initial data file
outputname = os.path.join(Ccodesdir,"outputSFID_unity_lapse-validation.txt")
sfid.ScalarField_InitialData(outputname, Ccodesdir,ID_Family,
phi0,r0,sigma,NR,RMAX,lapse_condition="Unity",
CoordSystem=CoordSystem,sinhA=sinhA,sinhW=sinhW)
# Output the "pre-collapsed" lapse initial data file
outputname = os.path.join(Ccodesdir,"outputSFID_precollapsed_lapse-validation.txt")
sfid.ScalarField_InitialData(outputname, Ccodesdir,ID_Family,
phi0,r0,sigma,NR,RMAX,lapse_condition="Pre-collapsed",
CoordSystem=CoordSystem,sinhA=sinhA,sinhW=sinhW)
import filecmp
if filecmp.cmp(os.path.join(Ccodesdir,'outputSFID_unity_lapse.txt'),
               os.path.join(Ccodesdir,'outputSFID_unity_lapse-validation.txt')) == False:
    print("ERROR: Unity lapse initial data test FAILED!")
    sys.exit(1)
else:
    print(" Unity lapse initial data test: PASSED!")
if filecmp.cmp(os.path.join(Ccodesdir,'outputSFID_precollapsed_lapse.txt'),
               os.path.join(Ccodesdir,'outputSFID_precollapsed_lapse-validation.txt')) == False:
    print("ERROR: \"Pre-collapsed\" lapse initial data test FAILED!")
    sys.exit(1)
else:
    print(" \"Pre-collapsed\" lapse initial data test: PASSED!")
if filecmp.cmp(os.path.join(Ccodesdir,'ID_scalar_field_ADM_quantities.h'),
               os.path.join(Ccodesdir,'ID_scalar_field_ADM_quantities-validation.h')) == False:
    print("ERROR: ADM quantities interpolation file test FAILED!")
    sys.exit(1)
else:
    print(" ADM quantities interpolation file test: PASSED!")
if filecmp.cmp(os.path.join(Ccodesdir,'ID_scalar_field_spherical.h'),
               os.path.join(Ccodesdir,'ID_scalar_field_spherical-validation.h')) == False:
    print("ERROR: Scalar field interpolation file test FAILED!")
    sys.exit(1)
else:
    print(" Scalar field interpolation file test: PASSED!")
if filecmp.cmp(os.path.join(Ccodesdir,'ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2.h'),
               os.path.join(Ccodesdir,'ID_scalarfield_xx0xx1xx2_to_BSSN_xx0xx1xx2-validation.h')) == False:
    print("ERROR: Scalar field Spherical to Curvilinear test FAILED!")
    sys.exit(1)
else:
    print(" Scalar field Spherical to Curvilinear test: PASSED!")
if filecmp.cmp(os.path.join(Ccodesdir,'ID_scalarfield.h'),
               os.path.join(Ccodesdir,'ID_scalarfield-validation.h')) == False:
    print("ERROR: Scalar field driver test FAILED!")
    sys.exit(1)
else:
    print(" Scalar field driver test: PASSED!")
```
<a id='output_to_pdf'></a>
# Step 5: Output this module as $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{output_to_pdf}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ADM_Initial_Data-ScalarField.pdf](Tutorial-ADM_Initial_Data-ScalarField.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ADM_Initial_Data-ScalarField")
```
# Demo: Compressing ground states of molecular hydrogen
In this demo, we will try to compress ground states of molecular hydrogen at various bond lengths. We start with expressing each state using 4 qubits and try to compress the information to 1 qubit (i.e. implement a 4-1-4 quantum autoencoder).
In the following section, we review both the full and halfway training schemes. However, in the notebook, we execute the halfway training case.
## Full cost function training
The QAE circuit for full cost function training looks like the following:
<img src="../images/qae_setup.png" width="400">
We note that our setup differs from the one proposed in the original paper. As shown in the figure above, we use 7 qubits in total for the 4-1-4 autoencoder, with the last 3 qubits ($q_6$, $q_5$, and $q_4$) serving as refresh qubits. The unitary $S$ represents the state preparation circuit, i.e. the gates applied to produce the input data set. The unitary $U$ represents the training circuit that will be responsible for representing the data set using fewer qubits, in this case a single qubit. The tilde symbol above the daggered operations indicates that the qubit indexing has been adjusted such that $q_0 \rightarrow q_6$, $q_1 \rightarrow q_5$, $q_2 \rightarrow q_4$, and $q_3 \rightarrow q_3$; for clarity, refer to the figure below for an equivalent circuit with the refresh qubits moved around. Qubit $q_3$ is thus the "latent space qubit," i.e. the qubit that holds the compressed information. Using the circuit structure above (applying $S$ and $U$, then effectively __un__-applying $S$ and $U$), we train the autoencoder by propagating the QAE circuit with proposed parameters and computing the probability of measuring 0000 on the latent space and refresh qubits ($q_3$ to $q_6$). We negate this value to cast the training as a minimization problem and average over the training set to compute a single loss value.
<img src="../images/qae_adjusted.png" width="500">
## Halfway cost function training
In the halfway cost function training case, the circuit looks like the following:
<img src="../images/h2_halfway_circuit.png" width="300">
Here, the cost function is the negated probability of obtaining the measurement 000 for the trash qubits.
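In other words, if $p_k$ denotes the probability of measuring $|000\rangle$ on the trash qubits for training state $k$, the halfway loss is $-\frac{1}{K}\sum_k p_k$, which reaches its minimum of $-1$ when every input is perfectly compressed. A sketch of this loss (the probabilities here are hypothetical, not produced by the actual circuits):

```python
import numpy as np

def halfway_loss(trash_probs):
    """Negated mean probability of measuring |000> on the trash qubits."""
    return -np.mean(trash_probs)

# Perfect compression on both training states gives the minimum loss of -1.0
print(halfway_loss([1.0, 1.0]))   # -> -1.0
print(halfway_loss([0.9, 0.7]))   # approx. -0.8 (imperfect compression)
```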
## Demo Outline
We break down this demo into the following steps:
1. Preparation of the quantum data
2. Initializing the QAE
3. Setting up the Forest connection
4. Dividing the dataset into training and test sets
5. Training the network
6. Testing the network
__NOTE__: While the QCompress framework was developed to execute on both the QVM and the QPU (i.e. simulator and quantum device, respectively), this particular demo runs a simulation (i.e. uses QVM).
Let us begin! __Note that this tutorial requires installation of [OpenFermion](https://github.com/quantumlib/OpenFermion)!__
```
# Import modules
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import get_sparse_operator, jordan_wigner
from openfermion.utils import get_ground_state
from pyquil.api import WavefunctionSimulator
from pyquil.gates import *
from pyquil import Program
import demo_utils
from qcompress.qae_engine import *
from qcompress.utils import *
from qcompress.config import DATA_DIRECTORY
global pi
pi = np.pi
```
## QAE Settings
In the cell below, we enter the settings for the QAE.
__NOTE__: Because QCompress was designed to run on the quantum device (as well as the simulator), we need to anticipate nontrivial mappings between abstract qubits and physical qubits. The dictionaries `q_in`, `q_latent`, and `q_refresh` are abstract-to-physical qubit mappings for the input, latent space, and refresh qubits respectively. A cool plug-in/feature to add would be to have an automated "qubit mapper" to determine the optimal or near-optimal abstract-to-physical qubit mappings for a particular QAE instance.
In this simulation, we will skip the circuit compilation step by turning off `compile_program`.
```
### QAE setup options
# Abstract-to-physical qubit mapping
q_in = {'q0': 0, 'q1': 1, 'q2': 2, 'q3': 3} # Input qubits
q_latent = {'q3': 3} # Latent space qubits
q_refresh = None
# Training scheme: Halfway
trash_training = True
# Simulator settings
cxn_setting = '4q-qvm'
compile_program = False
n_shots = 3000
```
## Qubit labeling
In the cell below, we produce lists of __ordered__ physical qubit indices involved in the compression and recovery maps of the quantum autoencoder. Depending on the training and reset schemes, we may use different qubits for the compression vs. recovery.
Since we're employing the halfway training scheme, we don't need to assign the qubit labels for the recovery process.
```
compression_indices = order_qubit_labels(q_in).tolist()
if not trash_training:
    q_out = merge_two_dicts(q_latent, q_refresh)
    recovery_indices = order_qubit_labels(q_out).tolist()
    if not reset:  # `reset` must be defined when using the full training scheme
        recovery_indices = recovery_indices[::-1]
print("Physical qubit indices for compression : {0}".format(compression_indices))
```
## Generating input data
We use routines from `OpenFermion`, `forestopenfermion`, and `grove` to generate the input data set. We've provided the molecular data files for you, which were generated using `OpenFermion`'s plugin `OpenFermion-Psi4`.
```
qvm = WavefunctionSimulator()
# MolecularData settings
molecule_name = "H2"
basis = "sto-3g"
multiplicity = "singlet"
dist_list = np.arange(0.2, 4.2, 0.1)
# Lists to store HF and FCI energies
hf_energies = []
fci_energies = []
check_energies = []
# Lists to store state preparation circuits
list_SP_circuits = []
list_SP_circuits_dag = []
for dist in dist_list:
    # Fetch file path
    dist = "{0:.1f}".format(dist)
    file_path = os.path.join(DATA_DIRECTORY, "{0}_{1}_{2}_{3}.hdf5".format(molecule_name,
                                                                           basis,
                                                                           multiplicity,
                                                                           dist))
    # Extract molecular info
    molecule = MolecularData(filename=file_path)
    n_qubits = molecule.n_qubits
    hf_energies.append(molecule.hf_energy)
    fci_energies.append(molecule.fci_energy)
    molecular_ham = molecule.get_molecular_hamiltonian()
    # Set up hamiltonian in qubit basis
    qubit_ham = jordan_wigner(molecular_ham)
    # Convert from OpenFermion's to PyQuil's data type (QubitOperator to PauliTerm/PauliSum)
    qubit_ham_pyquil = demo_utils.qubitop_to_pyquilpauli(qubit_ham)
    # Sanity check: Obtain ground state energy and check with MolecularData's FCI energy
    molecular_ham_sparse = get_sparse_operator(operator=molecular_ham, n_qubits=n_qubits)
    ground_energy, ground_state = get_ground_state(molecular_ham_sparse)
    assert np.isclose(molecule.fci_energy, ground_energy)
    # Generate unitary to prepare ground states
    state_prep_unitary = demo_utils.create_arbitrary_state(
        ground_state,
        qubits=compression_indices)
    if not trash_training:
        if reset:
            # Generate daggered state prep unitary (WITH NEW/ADJUSTED INDICES!)
            state_prep_unitary_dag = demo_utils.create_arbitrary_state(
                ground_state,
                qubits=compression_indices).dagger()
        else:
            # Generate daggered state prep unitary (WITH NEW/ADJUSTED INDICES!)
            state_prep_unitary_dag = demo_utils.create_arbitrary_state(
                ground_state,
                qubits=recovery_indices).dagger()
    # Sanity check: Compute energy wrt wavefunction evolved under state_prep_unitary
    wfn = qvm.wavefunction(state_prep_unitary)
    ket = wfn.amplitudes
    bra = np.transpose(np.conjugate(wfn.amplitudes))
    ham_matrix = molecular_ham_sparse.toarray()
    energy_expectation = np.dot(bra, np.dot(ham_matrix, ket))
    check_energies.append(energy_expectation)
    # Store circuits
    list_SP_circuits.append(state_prep_unitary)
    if not trash_training:
        list_SP_circuits_dag.append(state_prep_unitary_dag)
```
### Try plotting the energies of the input data set.
To (visually) check our state preparation circuits, we run these circuits and plot the energies. The "check" energies overlay nicely with the FCI energies.
```
imag_components = np.array([E.imag for E in check_energies])
assert np.isclose(imag_components, np.zeros(len(imag_components))).all()
check_energies = [E.real for E in check_energies]
plt.plot(dist_list, fci_energies, 'ko-', markersize=6, label='FCI')
plt.plot(dist_list, check_energies, 'ro', markersize=4, label='Check energies')
plt.title("Dissociation Profile, $H_2$")
plt.xlabel("Bond Length, Angstroms")
plt.ylabel("Energy, Hartrees")
plt.legend()
plt.show()
```
## Training circuit for QAE
Now we want to choose a parametrized circuit that we hope to train to compress the input quantum data set.
For this demonstration, we use a simple two-parameter circuit, as shown below.
__NOTE__: For more general data sets (and general circuits), we may need to run multiple instances of the QAE with different initial guesses to find a good compression circuit.
```
def _training_circuit(theta, qubit_indices):
    """
    Returns parametrized/training circuit.

    :param theta: (list or numpy.array, required) Vector of training parameters
    :param qubit_indices: (list, required) List of qubit indices
    :returns: Training circuit
    :rtype: pyquil.quil.Program
    """
    circuit = Program()
    circuit.inst(Program(RX(theta[0], qubit_indices[2]),
                         RX(theta[1], qubit_indices[3])))
    circuit.inst(Program(CNOT(qubit_indices[2], qubit_indices[0]),
                         CNOT(qubit_indices[3], qubit_indices[1]),
                         CNOT(qubit_indices[3], qubit_indices[2])))
    return circuit
def _training_circuit_dag(theta, qubit_indices):
    """
    Returns the daggered parametrized/training circuit.

    :param theta: (list or numpy.array, required) Vector of training parameters
    :param qubit_indices: (list, required) List of qubit indices
    :returns: Daggered training circuit
    :rtype: pyquil.quil.Program
    """
    circuit = Program()
    circuit.inst(Program(CNOT(qubit_indices[3], qubit_indices[2]),
                         CNOT(qubit_indices[3], qubit_indices[1]),
                         CNOT(qubit_indices[2], qubit_indices[0])))
    circuit.inst(Program(RX(-theta[1], qubit_indices[3]),
                         RX(-theta[0], qubit_indices[2])))
    return circuit
training_circuit = lambda param : _training_circuit(param, compression_indices)
if not trash_training:
    if reset:
        training_circuit_dag = lambda param : _training_circuit_dag(param,
                                                                    compression_indices)
    else:
        training_circuit_dag = lambda param : _training_circuit_dag(param,
                                                                    recovery_indices)
```
## Initialize the QAE engine
Here we create an instance of the `quantum_autoencoder` class.
Leveraging the features of the `Forest` platform, this quantum autoencoder "engine" lets you run a noisy version of the QVM to get a sense of how the autoencoder performs under noise (though the QVM used in this demo is noiseless). In addition, the user can run this instance on the quantum device (assuming the user has access to one of Rigetti Computing's available QPUs).
```
qae = quantum_autoencoder(state_prep_circuits=list_SP_circuits,
training_circuit=training_circuit,
q_in=q_in,
q_latent=q_latent,
q_refresh=q_refresh,
trash_training=trash_training,
compile_program=compile_program,
n_shots=n_shots,
print_interval=1)
```
After defining the instance, we set up the Forest connection (in this case, a simulator).
```
qae.setup_forest_cxn(cxn_setting)
```
Let's split the data set into training and test sets. If we don't supply the argument `train_indices`, the data set will be split randomly. However, knowing our quantum data set, we may want to choose various regions along the PES (the energy curve shown above) so that the trained network generalizes across the entire curve. Here, we pick 6 out of 40 data points for our training set.
```
qae.train_test_split(train_indices=[3, 10, 15, 20, 30, 35])
```
Let's print some information about the QAE instance.
```
print(qae)
```
## Training
The autoencoder is trained in the cell below, where the default optimization algorithm is Constrained Optimization BY Linear Approximation (COBYLA). The lowest possible mean loss value is -1.000.
```
%%time
initial_guess = [pi/2., 0.]
avg_loss_train = qae.train(initial_guess)
```
### Printing the optimized parameters
```
print(qae.optimized_params)
```
### Plot training losses
```
fig = plt.figure(figsize=(6, 4))
plt.plot(qae.train_history, 'o-', linewidth=1)
plt.title("Training Loss", fontsize=16)
plt.xlabel("Function Evaluation",fontsize=20)
plt.ylabel("Loss Value", fontsize=20)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.show()
```
## Testing
Now we test the optimized network against the rest of the data set (i.e. use the optimized parameters to compress and then recover each test data point).
```
avg_loss_test = qae.predict()
```
## Imports
```
import random
import pandas as pd
import torch
from torchvision import datasets, transforms
# quantum library
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer
import sys
sys.path.append("..") # Adds higher directory to python modules path
from qencode.initialize import setAB_amplitude, setAux, setEnt
from qencode.encoders import e2_classic
from qencode.training_circuits import swap_t
from qencode.qubits_arrangement import QubitsArrangement
from qencode.utils.mnist import get_dataset
```
## Data
```
df=pd.read_csv("cancer.csv", nrows=500)
df.head()
df.info()
df.describe()
# Data seems pretty clean, without any NaN values
## Engineering two new features to have 32 features that can be encoded on 5 qubits
over_average = []
under_average = []
mean = {}
std = {}
for col in df:
    if col not in ["id", "diagnosis"]:
        mean[col] = df[col].mean()
        std[col] = df[col].std()
for index, row in df.iterrows():
    o_average = 0
    u_average = 0
    for col in df:
        if col not in ["id", "diagnosis"]:
            if row[col] > mean[col] + 2*std[col]:
                o_average = o_average + 1
            if row[col] < mean[col] - 2*std[col]:  # minus sign: counts features far *below* average
                u_average = u_average + 1
    over_average.append(o_average)
    under_average.append(u_average)
df["over_average"] = over_average
df["under_average"] = under_average
df.head()
df.describe()
for col in df:
    if col not in ["id","diagnosis"]:
        df[col]=df[col]/df[col].max()
df.describe()
malign=df[df["diagnosis"]=="M"]
malign.head()
benign=df[df["diagnosis"]!="M"]
benign.head()
malign.drop(["id","diagnosis","Unnamed: 32"],axis="columns", inplace=True)
benign.drop(["id","diagnosis","Unnamed: 32"],axis="columns", inplace=True)
malign.head()
input_data=benign.to_numpy()
input_data
```
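As a side note, the reason for engineering exactly 32 features is amplitude encoding: an n-qubit state stores 2^n amplitudes, so 32 normalized features fit on 5 qubits. A minimal sketch of that constraint (the helper name below is ours for illustration, not part of the qencode package):

```python
import math

def amplitude_encode(features):
    # An n-qubit state holds 2**n amplitudes, so 32 features need 5 qubits.
    n_qubits = int(math.log2(len(features)))
    assert 2 ** n_qubits == len(features), "feature count must be a power of two"
    # Amplitude encoding requires a unit-norm vector.
    norm = math.sqrt(sum(x * x for x in features))
    return n_qubits, [x / norm for x in features]

n, state = amplitude_encode([1.0] * 32)
print(n)  # 5 qubits
```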
## Training node
```
shots = 2500
nr_trash=1
nr_latent=4
nr_ent=0
spec = QubitsArrangement(nr_trash, nr_latent, nr_swap=1, nr_ent=nr_ent)
print("Qubits:", spec.qubits)
#set up the device
dev = qml.device("default.qubit", wires=spec.num_qubits)
@qml.qnode(dev)
def training_circuit_example(init_params, encoder_params, reinit_state):
    #initialization
    setAB_amplitude(spec, init_params)
    setAux(spec, reinit_state)
    setEnt(spec, inputs=[1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)])
    #encoder
    for params in encoder_params:
        e2_classic(params, [*spec.latent_qubits, *spec.trash_qubits])
    #swap test
    swap_t(spec)
    return [qml.probs(i) for i in spec.swap_qubits]
```
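For context, the SWAP test at the end of the circuit is what makes this trainable: for the standard SWAP test, the probability of measuring |0⟩ on the swap qubit relates to the state fidelity F as P(0) = (1 + F)/2. A small sketch of that relation (our own helper, not part of qencode):

```python
def swap_test_fidelity(p0):
    # Standard SWAP test: P(measure 0) = (1 + F) / 2, hence F = 2*P(0) - 1.
    return 2.0 * p0 - 1.0

print(swap_test_fidelity(1.0))   # identical states -> fidelity 1.0
print(swap_test_fidelity(0.5))   # orthogonal states -> fidelity 0.0
```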
## Training parameters
```
epochs = 500
learning_rate = 0.0003
batch_size = 2
num_samples = 0.8 # proportion of the data used for training
beta1 = 0.9
beta2 = 0.999
opt = AdamOptimizer(learning_rate, beta1=beta1, beta2=beta2)
def fid_func(output):
    # Implemented as the Fidelity Loss
    # output[0] because we take the probability that the state after the
    # SWAP test is ket(0), like the reference state
    fidelity_loss = 1 / output[0]
    return fidelity_loss
def cost(encoder_params, X):
    reinit_state = [0 for i in range(2 ** len(spec.aux_qubits))]
    reinit_state[0] = 1.0
    loss = 0.0
    for x in X:
        output = training_circuit_example(init_params=x[0], encoder_params=encoder_params, reinit_state=reinit_state)[0]
        f = fid_func(output)
        loss = loss + f
    return loss / len(X)
def fidelity(encoder_params, X):
    reinit_state = [0 for i in range(2 ** len(spec.aux_qubits))]
    reinit_state[0] = 1.0
    fid = 0.0
    for x in X:
        output = training_circuit_example(init_params=x[0], encoder_params=encoder_params, reinit_state=reinit_state)[0]
        f = output[0]
        fid = fid + f
    return fid / len(X)
def iterate_batches(X, batch_size):
    random.shuffle(X)
    batch_list = []
    batch = []
    for x in X:
        batch.append(x)
        if len(batch) == batch_size:
            batch_list.append(batch)
            batch = []
    if len(batch) != 0:
        batch_list.append(batch)
    return batch_list
training_data = [ torch.tensor([input_data[i]]) for i in range(int(len(input_data)*num_samples))]
test_data = [torch.tensor([input_data[i]]) for i in range(int(len(input_data)*num_samples),len(input_data))]
training_data[0]
X_training = training_data
X_tes = test_data
# initialize random encoder parameters
nr_encod_qubits = len(spec.trash_qubits) + len(spec.latent_qubits)
nr_par_encoder = 15 * int(nr_encod_qubits*(nr_encod_qubits-1)/2)
encoder_params = np.random.uniform(size=(1, nr_par_encoder), requires_grad=True)
```
### training
```
np_malign = malign.to_numpy()
malign_data = [ torch.tensor([np_malign[i]]) for i in range(len(malign.to_numpy()))]
loss_hist=[]
fid_hist=[]
loss_hist_test=[]
fid_hist_test=[]
benign_fid=[]
for epoch in range(epochs):
    batches = iterate_batches(X=training_data, batch_size=batch_size)
    for xbatch in batches:
        encoder_params = opt.step(cost, encoder_params, X=xbatch)
    if epoch % 5 == 0:
        loss_training = cost(encoder_params, X_training)
        fidel = fidelity(encoder_params, X_training)
        loss_hist.append(loss_training)
        fid_hist.append(fidel)
        print("Epoch:{} | Loss:{} | Fidelity:{}".format(epoch, loss_training, fidel))
        loss_test = cost(encoder_params, X_tes)
        fidel = fidelity(encoder_params, X_tes)
        loss_hist_test.append(loss_test)
        fid_hist_test.append(fidel)
        print("Test-Epoch:{} | Loss:{} | Fidelity:{}".format(epoch, loss_test, fidel))
        b_fidel = fidelity(encoder_params, malign_data)
        benign_fid.append(b_fidel)
        print("malign fid:{}".format(b_fidel))
        experiment_parameters = {"autoencoder": "e2", "params": encoder_params}
        f = open("Cancer_encoder_e2-Benign/params" + str(epoch) + ".txt", "w")
        f.write(str(experiment_parameters))
        f.close()
```
## Results
```
import matplotlib.pyplot as plt
maligig = plt.figure()
plt.plot([x for x in range(0,len(loss_hist)*5,5)],np.array(fid_hist),label="train fid")
plt.plot([x for x in range(0,len(loss_hist)*5,5)],np.array(fid_hist_test),label="test fid")
plt.plot([x for x in range(0,len(loss_hist)*5,5)],np.array(benign_fid),label="malign fid")
plt.legend()
plt.title("Malign 5-1-5->compression fidelity e2",)
plt.xlabel("epoch")
plt.ylabel("fid")
print("fidelity:",fid_hist[-1])
fig = plt.figure()
plt.plot([x for x in range(0,len(loss_hist)*5,5)],np.array(loss_hist),label="train loss")
plt.plot([x for x in range(0,len(loss_hist)*5,5)],np.array(loss_hist_test),label="test loss")
plt.legend()
plt.title("Malign 5-1-5->compression loss e2",)
plt.xlabel("epoch")
plt.ylabel("loss")
print("loss:",loss_hist[-1])
```
## Malign performance
```
np_malign = malign.to_numpy()
malign_data = [torch.tensor([np_malign[i]]) for i in range(len(np_malign))]
loss = cost(encoder_params, malign_data)
fidel = fidelity(encoder_params, malign_data)
print("Malign results:")
print("fidelity=", fidel)
print("loss=", loss)
```
## Classifier
```
malign_flist=[]
for b in malign_data:
    f = fidelity(encoder_params, [b])
    malign_flist.append(f.item())
print(min(malign_flist))
print(max(malign_flist))
np_benign= benign.to_numpy()
benign_data = [ torch.tensor([np_benign[i]]) for i in range(len(benign.to_numpy()))]
benign_flist=[]
for b in benign_data:
    f = fidelity(encoder_params, [b])
    benign_flist.append(f.item())
print(min(benign_flist))
print(max(benign_flist))
plt.hist(benign_flist, bins = 100 ,label="benign", color = "skyblue",alpha=0.4)
plt.hist(malign_flist, bins =100 ,label="malign",color = "red",alpha=0.4)
plt.title("Compression fidelity",)
plt.legend()
plt.show()
split=0.995
print("split:",split)
b_e = []
for i in benign_flist:
    if i < split:
        b_e.append(1)
    else:
        b_e.append(0)
ab_ac = sum(b_e)/len(b_e)
print("malign classification accuracy:", ab_ac)
m_e = []
for i in malign_flist:
    if i > split:
        m_e.append(1)
    else:
        m_e.append(0)
am_ac = sum(m_e)/len(m_e)
print("benign classification accuracy:", am_ac)
t_ac = (sum(b_e)+sum(m_e))/(len(b_e)+len(m_e))
print("total accuracy:", t_ac)
```
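The cell above turns the autoencoder into a one-class classifier by thresholding compression fidelity: samples that the benign-trained encoder reconstructs well look like the training class, everything else is flagged. A minimal sketch of that decision rule (the function name is ours; the 0.995 split is the value used above, and we assume benign samples compress with higher fidelity):

```python
def classify_by_fidelity(fid, split=0.995):
    # Trained only on benign samples, the autoencoder compresses benign data
    # with high fidelity; low fidelity suggests an out-of-class (malign) sample.
    return "benign" if fid > split else "malign"

print(classify_by_fidelity(0.999))  # benign
print(classify_by_fidelity(0.90))   # malign
```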
# Export and run models with ONNX
_This notebook is part of a tutorial series on [txtai](https://github.com/neuml/txtai), an AI-powered semantic search platform._
The [ONNX runtime](https://onnx.ai/) provides a common serialization format for machine learning models. ONNX supports a number of [different platforms/languages](https://onnxruntime.ai/docs/how-to/install.html#requirements) and has features built in to help reduce inference time.
PyTorch has robust support for exporting Torch models to ONNX. This enables exporting Hugging Face Transformer and/or other downstream models directly to ONNX.
ONNX opens an avenue for direct inference using a number of languages and platforms. For example, a model could be run directly on Android to limit data sent to a third party service. ONNX is an exciting development with a lot of promise. Microsoft has also released [Hummingbird](https://github.com/microsoft/hummingbird) which enables exporting traditional models (sklearn, decision trees, logistic regression...) to ONNX.
This notebook will cover how to export models to ONNX using txtai. These models will then be directly run in Python, JavaScript, Java and Rust. Currently, txtai supports all these languages through its API, and that is still the recommended approach.
# Install dependencies
Install `txtai` and all dependencies. Since this notebook uses ONNX quantization, we need to install the pipeline extras package.
```
%%capture
!pip install datasets git+https://github.com/neuml/txtai#egg=txtai[pipeline]
```
# Run a model with ONNX
Let's get right to it! The following example exports a sentiment analysis model to ONNX and runs an inference session.
```
import numpy as np
from onnxruntime import InferenceSession, SessionOptions
from transformers import AutoTokenizer
from txtai.pipeline import HFOnnx
# Normalize logits using sigmoid function
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
# Export to ONNX
onnx = HFOnnx()
model = onnx("distilbert-base-uncased-finetuned-sst-2-english", "text-classification")
# Start inference session
options = SessionOptions()
session = InferenceSession(model, options)
# Tokenize
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokens = tokenizer(["I am happy", "I am mad"], return_tensors="np")
# Print results
outputs = session.run(None, dict(tokens))
print(sigmoid(outputs[0]))
```
And just like that, there are results! The text classification model is judging sentiment using two labels, 0 for negative and 1 for positive. The results above show the probability of each label per text snippet.
The ONNX pipeline loads the model, converts the graph to ONNX and returns it. Note that no output file was provided; in this case, the ONNX model is returned as a byte array. If an output file is provided, this method returns the output path instead.
# Train and Export a model for Text Classification
Next we'll combine the ONNX pipeline with a Trainer pipeline to create a "train and export to ONNX" workflow.
```
from datasets import load_dataset
from txtai.pipeline import HFTrainer
trainer = HFTrainer()
# Hugging Face dataset
ds = load_dataset("glue", "sst2")
data = ds["train"].select(range(5000)).flatten_indices()
# Train new model using 5,000 SST2 records (in-memory)
model, tokenizer = trainer("google/electra-base-discriminator", data, columns=("sentence", "label"))
# Export model trained in-memory to ONNX (still in-memory)
output = onnx((model, tokenizer), "text-classification", quantize=True)
# Start inference session
options = SessionOptions()
session = InferenceSession(output, options)
# Tokenize
tokens = tokenizer(["I am happy", "I am mad"], return_tensors="np")
# Print results
outputs = session.run(None, dict(tokens))
print(sigmoid(outputs[0]))
```
The results are similar to the previous step, although this model is only trained on a fraction of the sst2 dataset. Let's save this model for later.
```
text = onnx((model, tokenizer), "text-classification", "text-classify.onnx", quantize=True)
```
# Export a Sentence Embeddings model
The ONNX pipeline also supports exporting sentence embeddings models trained with the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package.
```
embeddings = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)
```
Now let's run the model with ONNX.
```
from sklearn.metrics.pairwise import cosine_similarity
options = SessionOptions()
session = InferenceSession(embeddings, options)
tokens = tokenizer(["I am happy", "I am glad"], return_tensors="np")
outputs = session.run(None, dict(tokens))[0]
print(cosine_similarity(outputs))
```
The code above tokenizes two separate text snippets ("I am happy" and "I am glad") and runs them through the ONNX model.
This outputs two embeddings arrays and those arrays are compared using cosine similarity. As we can see, the two text snippets have close semantic meaning.
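Cosine similarity is also easy to compute without scikit-learn, which is exactly what the JavaScript, Java and Rust ports later in this notebook do by hand. A dependency-free Python equivalent for reference:

```python
import math

def cosine(v1, v2):
    # dot(v1, v2) / (||v1|| * ||v2||), mirroring the hand-rolled helpers
    # used in the JavaScript/Java/Rust examples.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

print(cosine([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine([1.0, 0.0], [0.0, 1.0]))  # 0.0
```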
# Load an ONNX model with txtai
txtai has built-in support for ONNX models. Loading an ONNX model is seamless and Embeddings and Pipelines support it. The following section shows how to load a classification pipeline and embeddings model backed by ONNX.
```
from txtai.embeddings import Embeddings
from txtai.pipeline import Labels
labels = Labels(("text-classify.onnx", "google/electra-base-discriminator"), dynamic=False)
print(labels(["I am happy", "I am mad"]))
embeddings = Embeddings({"path": "embeddings.onnx", "tokenizer": "sentence-transformers/paraphrase-MiniLM-L6-v2"})
print(embeddings.similarity("I am happy", ["I am glad"]))
```
# JavaScript
So far, we've exported models to ONNX and run them through Python. This already has a lot of advantages, which include fast inference times, quantization and fewer software dependencies. But ONNX really shines when we run a model trained in Python in other languages/platforms.
Let's try running the models trained above in JavaScript. First step is getting the Node.js environment and dependencies setup.
```
%%capture
import os
!mkdir js
os.chdir("/content/js")
%%writefile package.json
{
"name": "onnx-test",
"private": true,
"version": "1.0.0",
"description": "ONNX Runtime Node.js test",
"main": "index.js",
"dependencies": {
"onnxruntime-node": ">=1.8.0",
"tokenizers": "file:tokenizers/bindings/node"
}
}
%%capture
# Copy ONNX models
!cp ../text-classify.onnx .
!cp ../embeddings.onnx .
# Save copy of Bert Tokenizer
tokenizer.save_pretrained("bert")
# Get tokenizers project
!git clone https://github.com/huggingface/tokenizers.git
os.chdir("/content/js/tokenizers/bindings/node")
# Install Rust
!apt-get install rustc
# Build tokenizers project locally as version on NPM isn't working properly for latest version of Node.js
!npm install --also=dev
!npm run dev
# Install all dependencies
os.chdir("/content/js")
!npm install
```
Next we'll write the inference code in JavaScript to an index.js file.
```
#@title
%%writefile index.js
const ort = require('onnxruntime-node');
const { promisify } = require('util');
const { Tokenizer } = require("tokenizers/dist/bindings/tokenizer");
function sigmoid(data) {
return data.map(x => 1 / (1 + Math.exp(-x)))
}
function softmax(data) {
return data.map(x => Math.exp(x) / (data.map(y => Math.exp(y))).reduce((a,b) => a+b))
}
function similarity(v1, v2) {
let dot = 0.0;
let norm1 = 0.0;
let norm2 = 0.0;
for (let x = 0; x < v1.length; x++) {
dot += v1[x] * v2[x];
norm1 += Math.pow(v1[x], 2);
norm2 += Math.pow(v2[x], 2);
}
return dot / (Math.sqrt(norm1) * Math.sqrt(norm2));
}
function tokenizer(path) {
let tokenizer = Tokenizer.fromFile(path);
return promisify(tokenizer.encode.bind(tokenizer));
}
async function predict(session, text) {
try {
// Tokenize input
let encode = tokenizer("bert/tokenizer.json");
let output = await encode(text);
let ids = output.getIds().map(x => BigInt(x))
let mask = output.getAttentionMask().map(x => BigInt(x))
let tids = output.getTypeIds().map(x => BigInt(x))
// Convert inputs to tensors
let tensorIds = new ort.Tensor('int64', BigInt64Array.from(ids), [1, ids.length]);
let tensorMask = new ort.Tensor('int64', BigInt64Array.from(mask), [1, mask.length]);
let tensorTids = new ort.Tensor('int64', BigInt64Array.from(tids), [1, tids.length]);
let inputs = null;
if (session.inputNames.length > 2) {
inputs = { input_ids: tensorIds, attention_mask: tensorMask, token_type_ids: tensorTids};
}
else {
inputs = { input_ids: tensorIds, attention_mask: tensorMask};
}
return await session.run(inputs);
} catch (e) {
console.error(`failed to inference ONNX model: ${e}.`);
}
}
async function main() {
let args = process.argv.slice(2);
if (args.length > 1) {
// Run sentence embeddings
const session = await ort.InferenceSession.create('./embeddings.onnx');
let v1 = await predict(session, args[0]);
let v2 = await predict(session, args[1]);
// Unpack results
v1 = v1.embeddings.data;
v2 = v2.embeddings.data;
// Print similarity
console.log(similarity(Array.from(v1), Array.from(v2)));
}
else {
// Run text classifier
const session = await ort.InferenceSession.create('./text-classify.onnx');
let results = await predict(session, args[0]);
// Normalize results using softmax and print
console.log(softmax(results.logits.data));
}
}
main();
```
## Run Text Classification in JavaScript with ONNX
```
!node . "I am happy"
!node . "I am mad"
```
First off, have to say this is 🔥🔥🔥! Just amazing that this model can be fully run in JavaScript. It's a great time to be in NLP!
The steps above installed a JavaScript environment with dependencies to run ONNX and tokenize data in JavaScript. The text classification model previously created is loaded into the JavaScript ONNX runtime and inference is run.
As a reminder, the text classification model is judging sentiment using two labels, 0 for negative and 1 for positive. The results above show the probability of each label per text snippet.
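The two probabilities come from a softmax over the classifier's logits, the same normalization the `softmax` helper in `index.js` performs. A numerically stable Python version for reference:

```python
import math

def softmax(logits):
    # Subtracting the max before exponentiating avoids overflow
    # without changing the result.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([-2.0, 2.0]))  # heavily favors the second (positive) label
```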
## Build sentence embeddings and compare similarity in JavaScript with ONNX
```
!node . "I am happy" "I am glad"
```
Once again....wow!! The sentence embeddings model produces vectors that can be used to compare semantic similarity, -1 being most dissimilar and 1 being most similar.
While the results don't match the exported model exactly, it's very close. Worth mentioning again that this is 100% JavaScript, no API or remote calls, all within node.
# Java
Let's try the same thing with Java. The following sections initialize a Java build environment and writes out the code necessary to run the ONNX inference.
```
%%capture
import os
os.chdir("/content")
!mkdir java
os.chdir("/content/java")
# Copy ONNX models
!cp ../text-classify.onnx .
!cp ../embeddings.onnx .
# Save copy of Bert Tokenizer
tokenizer.save_pretrained("bert")
!mkdir -p src/main/java
# Install gradle
!wget https://services.gradle.org/distributions/gradle-7.2-bin.zip
!unzip -o gradle-7.2-bin.zip
!gradle-7.2/bin/gradle wrapper
%%writefile build.gradle
apply plugin: "java"
repositories {
mavenCentral()
}
dependencies {
implementation "com.robrua.nlp:easy-bert:1.0.3"
implementation "com.microsoft.onnxruntime:onnxruntime:1.8.1"
}
java {
toolchain {
languageVersion = JavaLanguageVersion.of(8)
}
}
jar {
archiveBaseName = "onnxjava"
}
task onnx(type: JavaExec) {
description = "Runs ONNX demo"
classpath = sourceSets.main.runtimeClasspath
main = "OnnxDemo"
}
#@title
%%writefile src/main/java/OnnxDemo.java
import java.io.File;
import java.nio.LongBuffer;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;
import ai.onnxruntime.OrtSession.Result;
import com.robrua.nlp.bert.FullTokenizer;
class Tokens {
public long[] ids;
public long[] mask;
public long[] types;
}
class Tokenizer {
private FullTokenizer tokenizer;
public Tokenizer(String path) {
File vocab = new File(path);
this.tokenizer = new FullTokenizer(vocab, true);
}
public Tokens tokenize(String text) {
// Build list of tokens
List<String> tokensList = new ArrayList();
tokensList.add("[CLS]");
tokensList.addAll(Arrays.asList(tokenizer.tokenize(text)));
tokensList.add("[SEP]");
int[] ids = tokenizer.convert(tokensList.toArray(new String[0]));
Tokens tokens = new Tokens();
// input ids
tokens.ids = Arrays.stream(ids).mapToLong(i -> i).toArray();
// attention mask
tokens.mask = new long[ids.length];
Arrays.fill(tokens.mask, 1);
// token type ids
tokens.types = new long[ids.length];
Arrays.fill(tokens.types, 0);
return tokens;
}
}
class Inference {
private Tokenizer tokenizer;
private OrtEnvironment env;
private OrtSession session;
public Inference(String model) throws Exception {
this.tokenizer = new Tokenizer("bert/vocab.txt");
this.env = OrtEnvironment.getEnvironment();
this.session = env.createSession(model, new OrtSession.SessionOptions());
}
public float[][] predict(String text) throws Exception {
Tokens tokens = this.tokenizer.tokenize(text);
Map<String, OnnxTensor> inputs = new HashMap<String, OnnxTensor>();
inputs.put("input_ids", OnnxTensor.createTensor(env, LongBuffer.wrap(tokens.ids), new long[]{1, tokens.ids.length}));
inputs.put("attention_mask", OnnxTensor.createTensor(env, LongBuffer.wrap(tokens.mask), new long[]{1, tokens.mask.length}));
inputs.put("token_type_ids", OnnxTensor.createTensor(env, LongBuffer.wrap(tokens.types), new long[]{1, tokens.types.length}));
return (float[][])session.run(inputs).get(0).getValue();
}
}
class Vectors {
public static double similarity(float[] v1, float[] v2) {
double dot = 0.0;
double norm1 = 0.0;
double norm2 = 0.0;
for (int x = 0; x < v1.length; x++) {
dot += v1[x] * v2[x];
norm1 += Math.pow(v1[x], 2);
norm2 += Math.pow(v2[x], 2);
}
return dot / (Math.sqrt(norm1) * Math.sqrt(norm2));
}
public static float[] softmax(float[] input) {
double[] t = new double[input.length];
double sum = 0.0;
for (int x = 0; x < input.length; x++) {
double val = Math.exp(input[x]);
sum += val;
t[x] = val;
}
float[] output = new float[input.length];
for (int x = 0; x < output.length; x++) {
output[x] = (float) (t[x] / sum);
}
return output;
}
}
public class OnnxDemo {
public static void main(String[] args) {
try {
if (args.length < 2) {
Inference inference = new Inference("text-classify.onnx");
float[][] v1 = inference.predict(args[0]);
System.out.println(Arrays.toString(Vectors.softmax(v1[0])));
}
else {
Inference inference = new Inference("embeddings.onnx");
float[][] v1 = inference.predict(args[0]);
float[][] v2 = inference.predict(args[1]);
System.out.println(Vectors.similarity(v1[0], v2[0]));
}
}
catch (Exception ex) {
ex.printStackTrace();
}
}
}
```
## Run Text Classification in Java with ONNX
```
!./gradlew -q --console=plain onnx --args='"I am happy"' 2> /dev/null
!./gradlew -q --console=plain onnx --args='"I am mad"' 2> /dev/null
```
The command above tokenizes the input and runs inference with a text classification model previously created using a Java ONNX inference session.
As a reminder, the text classification model is judging sentiment using two labels, 0 for negative and 1 for positive. The results above show the probability of each label per text snippet.
## Build sentence embeddings and compare similarity in Java with ONNX
```
!./gradlew -q --console=plain onnx --args='"I am happy" "I am glad"' 2> /dev/null
```
The sentence embeddings model produces vectors that can be used to compare semantic similarity, -1 being most dissimilar and 1 being most similar.
This is 100% Java, no API or remote calls, all within the JVM. Still think it's amazing!
# Rust
Last but not least, let's try Rust. The following sections initialize a Rust build environment and writes out the code necessary to run the ONNX inference.
```
%%capture
import os
os.chdir("/content")
!mkdir rust
os.chdir("/content/rust")
# Copy ONNX models
!cp ../text-classify.onnx .
!cp ../embeddings.onnx .
# Save copy of Bert Tokenizer
tokenizer.save_pretrained("bert")
# Install Rust
!apt-get install rustc
!mkdir -p src
%%writefile Cargo.toml
[package]
name = "onnx-test"
version = "1.0.0"
description = """
ONNX Runtime Rust test
"""
edition = "2018"
[dependencies]
onnxruntime = { version = "0.0.14"}
tokenizers = { version = "0.10.1"}
#@title
%%writefile src/main.rs
use onnxruntime::environment::Environment;
use onnxruntime::GraphOptimizationLevel;
use onnxruntime::ndarray::{Array2, Axis};
use onnxruntime::tensor::OrtOwnedTensor;
use std::env;
use tokenizers::decoders::wordpiece::WordPiece as WordPieceDecoder;
use tokenizers::models::wordpiece::WordPiece;
use tokenizers::normalizers::bert::BertNormalizer;
use tokenizers::pre_tokenizers::bert::BertPreTokenizer;
use tokenizers::processors::bert::BertProcessing;
use tokenizers::tokenizer::{Result, Tokenizer, EncodeInput};
fn tokenize(text: String, inputs: usize) -> Vec<Array2<i64>> {
// Load tokenizer
let mut tokenizer = Tokenizer::new(Box::new(
WordPiece::from_files("bert/vocab.txt")
.build()
.expect("Vocab file not found"),
));
tokenizer.with_normalizer(Box::new(BertNormalizer::default()));
tokenizer.with_pre_tokenizer(Box::new(BertPreTokenizer));
tokenizer.with_decoder(Box::new(WordPieceDecoder::default()));
tokenizer.with_post_processor(Box::new(BertProcessing::new(
(
String::from("[SEP]"),
tokenizer.get_model().token_to_id("[SEP]").unwrap(),
),
(
String::from("[CLS]"),
tokenizer.get_model().token_to_id("[CLS]").unwrap(),
),
)));
// Encode input text
let encoding = tokenizer.encode(EncodeInput::Single(text), true).unwrap();
let v1: Vec<i64> = encoding.get_ids().to_vec().into_iter().map(|x| x as i64).collect();
let v2: Vec<i64> = encoding.get_attention_mask().to_vec().into_iter().map(|x| x as i64).collect();
let v3: Vec<i64> = encoding.get_type_ids().to_vec().into_iter().map(|x| x as i64).collect();
let ids = Array2::from_shape_vec((1, v1.len()), v1).unwrap();
let mask = Array2::from_shape_vec((1, v2.len()), v2).unwrap();
let tids = Array2::from_shape_vec((1, v3.len()), v3).unwrap();
return if inputs > 2 { vec![ids, mask, tids] } else { vec![ids, mask] };
}
fn predict(text: String, softmax: bool) -> Vec<f32> {
// Start onnx session
let environment = Environment::builder()
.with_name("test")
.build().unwrap();
// Derive model path
let model = if softmax { "text-classify.onnx" } else { "embeddings.onnx" };
let mut session = environment
.new_session_builder().unwrap()
.with_optimization_level(GraphOptimizationLevel::Basic).unwrap()
.with_number_threads(1).unwrap()
.with_model_from_file(model).unwrap();
let inputs = tokenize(text, session.inputs.len());
// Run inference and print result
let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(inputs).unwrap();
let output: &OrtOwnedTensor<f32, _> = &outputs[0];
let probabilities: Vec<f32>;
if softmax {
probabilities = output
.softmax(Axis(1))
.iter()
.copied()
.collect::<Vec<_>>();
}
else {
probabilities= output
.iter()
.copied()
.collect::<Vec<_>>();
}
return probabilities;
}
fn similarity(v1: &Vec<f32>, v2: &Vec<f32>) -> f64 {
let mut dot = 0.0;
let mut norm1 = 0.0;
let mut norm2 = 0.0;
for x in 0..v1.len() {
dot += v1[x] * v2[x];
norm1 += v1[x].powf(2.0);
norm2 += v2[x].powf(2.0);
}
return dot as f64 / (norm1.sqrt() * norm2.sqrt()) as f64
}
fn main() -> Result<()> {
// Tokenize input string
let args: Vec<String> = env::args().collect();
if args.len() <= 2 {
let v1 = predict(args[1].to_string(), true);
println!("{:?}", v1);
}
else {
let v1 = predict(args[1].to_string(), false);
let v2 = predict(args[2].to_string(), false);
println!("{:?}", similarity(&v1, &v2));
}
Ok(())
}
```
## Run Text Classification in Rust with ONNX
```
!cargo run "I am happy" 2> /dev/null
!cargo run "I am mad" 2> /dev/null
```
The command above tokenizes the input and runs inference with a text classification model previously created using a Rust ONNX inference session.
As a reminder, the text classification model is judging sentiment using two labels, 0 for negative and 1 for positive. The results above show the probability of each label per text snippet.
## Build sentence embeddings and compare similarity in Rust with ONNX
```
!cargo run "I am happy" "I am glad" 2> /dev/null
```
The sentence embeddings model produces vectors that can be used to compare semantic similarity, -1 being most dissimilar and 1 being most similar.
Once again, this is 100% Rust, no API or remote calls. And yes, still think it's amazing!
# Wrapping up
This notebook covered how to export models to ONNX using txtai. These models were then run in Python, JavaScript, Java and Rust. Golang was also evaluated but there doesn't currently appear to be a stable enough ONNX runtime available.
This method provides a way to train and run machine learning models using a number of programming languages on a number of platforms.
The following is a non-exhaustive list of use cases.
* Build locally executed models for mobile/edge devices
* Run models with Java/JavaScript/Rust development stacks when teams prefer not to add Python to the mix
* Export models to ONNX for Python inference to improve CPU performance and/or reduce number of software dependencies
```
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sb
import numpy as np
import datetime
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import IsolationForest
import os
data_non_sensitive = pd.read_csv(r'C:\NICE Documents\Bank of Indonesia\agg_data_nonsensitive.csv')
data_sensitive = pd.read_csv(r'C:\NICE Documents\Bank of Indonesia\agg_data_sensitive.csv')
data_non_sensitive = data_non_sensitive.fillna(0)
data_sensitive = data_sensitive.fillna(0)
data_non_sensitive.rename(columns = {'#RIC': 'Currency_Pair'}, inplace=True)
data_sensitive.rename(columns = {'#RIC': 'Currency_Pair'}, inplace=True)
def train(x_train, contamination_ratio):
    df_stats = x_train
    df_anomalies = pd.DataFrame([])
    for currency_pair in list(x_train.Currency_Pair.unique()):
        df_stats_original = df_stats[df_stats.Currency_Pair == currency_pair]
        df_x = df_stats_original.drop(['Currency_Pair','Year','Month','Day','Hour'], axis=1)
        df_x_original = df_x.copy()
        row_count = df_x.shape[0]
        # choose the number of estimators based on the amount of data
        if row_count < 5:
            estimators = 1
        elif row_count < 20:
            estimators = 5
        elif row_count < 50:
            estimators = 10
        elif row_count < 100:
            estimators = 30
        elif row_count < 200:
            estimators = 50
        elif row_count < 500:
            estimators = 100
        else:
            estimators = 200
        model = IsolationForest(n_estimators=estimators, contamination=contamination_ratio, random_state=22)
        model.fit(df_x)
        #df_x['anomaly'] = pd.Series(model.predict(df_x_original)).values
        #df_x['anomaly'] = df_x['anomaly'].map( {1: 0, -1: 1} )
        #df_x_anomalous = df_x[df_x.anomaly==1].drop('anomaly',axis=1)
        n = df_x_original.shape[0]
        anomalous_data_index = pd.DataFrame(model.decision_function(df_x_original)).sort_values(by=0).index[0:n]
        df_stats_original = df_stats_original.assign(Anomaly_Score=pd.Series(model.decision_function(df_x_original)).values)
        df_anomalies_temp = df_stats_original.iloc[anomalous_data_index]
        df_anomalies = df_anomalies.append(df_anomalies_temp, sort=False, ignore_index=True)
    # Isolation forest returns negative scores for anomalous records, so convert them to positive.
    df_anomalies['Anomaly_Score'] = df_anomalies.Anomaly_Score*-1
    #Get the feature importance
    df_anomaly_x = df_anomalies.drop(['Currency_Pair','Year','Month','Day', 'Hour', 'Anomaly_Score'], axis=1)
    df_x_output = pd.DataFrame(df_anomaly_x.values)
    f_list = list(df_anomaly_x.columns)
    df_feature_anomalies = pd.DataFrame([], columns=f_list)
    df_Currency_Pair_list = list(df_anomalies.Currency_Pair.unique())
    scaler = MinMaxScaler()
    df_in_anomalies = pd.DataFrame([])
    for Currency_Pair in df_Currency_Pair_list:
        df_anomalies_original = pd.DataFrame(df_anomalies[df_anomalies.Currency_Pair == Currency_Pair])
        df_x = df_anomalies_original.drop(['Currency_Pair','Year','Month','Day', 'Hour', 'Anomaly_Score'], axis=1)
        df_x = df_x.astype(np.float64)
        df_x_original = df_x.copy()
        scaled_values = scaler.fit_transform(df_x)
        df = pd.DataFrame(scaled_values, columns=f_list)
        df = df.subtract(df.mean(), axis=1)
        df = abs(df)
        df_feature_anomalies = pd.concat([df_feature_anomalies, df], axis=0, ignore_index=True, sort=False)
    df_feature_anomalies.columns = ['Score_' + col for col in df_feature_anomalies.columns]
    return df_anomalies, df_feature_anomalies
def pivot_and_format_data(df_anomalies, df_feature_anomalies, trade_timing):
df_feature_anomalies['Year'] = df_anomalies.Year
df_feature_anomalies['Month'] = df_anomalies.Month
df_feature_anomalies['Day'] = df_anomalies.Day
df_feature_anomalies['Hour'] = df_anomalies.Hour
df_feature_anomalies['Currency_Pair'] = df_anomalies.Currency_Pair
df_top5_score = df_anomalies.groupby('Currency_Pair')['Anomaly_Score'].nlargest(5).sum(level=0).reset_index()
#df_top3_score = df_anomalies.groupby('interaction_from')['Anomaly_Score'].nlargest(3).sum(level=0).reset_index()
df_top1_score = df_anomalies.groupby('Currency_Pair')['Anomaly_Score'].nlargest(1).sum(level=0).reset_index()
# in case nlargest does not return n values, it will take the 1st largest.
if df_top5_score.columns[0] == 'index':
df_top_n_score = df_top1_score
else:
df_top_n_score = df_top5_score
df_top_n_score.columns = ['Currency_Pair','Anomaly_Score_Agg']
df_anomalies_final = df_anomalies.merge(df_top_n_score, how='left', on='Currency_Pair')
anomaly_score = df_anomalies_final[['Currency_Pair','Year','Month', 'Day', 'Hour','Anomaly_Score', 'Anomaly_Score_Agg']]
df_anomalies = df_anomalies.drop('Anomaly_Score',axis=1)
#df_anomalies = df_anomalies.drop('Anomaly_Score',axis=1)
df_anomalies_pivot = df_anomalies.melt(id_vars=["Currency_Pair",'Year','Month', "Day", 'Hour'], var_name="Features_for_Anomaly", value_name="Feature_Stats")
df_feature_anomalies_pivot = df_feature_anomalies.melt(id_vars=["Currency_Pair",'Year','Month', "Day", 'Hour'], var_name="Reasons_for_Anomaly", value_name="Feature_Score")
df_anomalies_pivot = df_anomalies_pivot.set_index(["Currency_Pair",'Year','Month', "Day", 'Hour', df_anomalies_pivot.groupby(["Currency_Pair",'Year','Month', "Day", 'Hour']).cumcount()])
df_feature_anomalies_pivot = df_feature_anomalies_pivot.set_index(["Currency_Pair",'Year','Month', "Day", 'Hour', df_feature_anomalies_pivot.groupby(["Currency_Pair",'Year','Month', "Day", 'Hour']).cumcount()])
df_anomaly_feature_merged_pivot = (pd.concat([df_anomalies_pivot, df_feature_anomalies_pivot],axis=1)
.sort_index(level=2)
.reset_index(level=5, drop=True)
.reset_index())
feature_stats_median = df_anomaly_feature_merged_pivot.groupby(['Currency_Pair','Features_for_Anomaly']).Feature_Stats.median().reset_index()
feature_stats_median.columns = ['Currency_Pair','Features_for_Anomaly','Features_Stats_Median']
anomaly_stats_feature_score = df_anomaly_feature_merged_pivot.merge(feature_stats_median, on = ['Currency_Pair','Features_for_Anomaly'], how='left')
#anomaly_stats_feature_score = anomaly_stats_feature_score.assign(Algorithm='isolation-forest', duration_window = duration_window)
anomaly_stats_feature_score.drop('Reasons_for_Anomaly', axis=1, inplace=True)
anomaly_stats_feature_score = anomaly_stats_feature_score.round(3)
anomaly_score = anomaly_score.round(3)
anomaly_stats_feature_score['Trade_Timing'] = trade_timing
anomaly_score['Trade_Timing'] = trade_timing
return anomaly_score, anomaly_stats_feature_score
df_anomalies, df_feature_anomalies = train(data_non_sensitive, 1.0)
anomaly_score_1, anomaly_stats_feature_score_1 = pivot_and_format_data(df_anomalies, df_feature_anomalies, 'Normal Trading Time Periods')
df_anomalies, df_feature_anomalies = train(data_sensitive, 1.0)
anomaly_score_2, anomaly_stats_feature_score_2 = pivot_and_format_data(df_anomalies, df_feature_anomalies, 'Price Sensitive Trading Time Periods')
anomaly_stats_feature_score = pd.concat([anomaly_stats_feature_score_1, anomaly_stats_feature_score_2], axis=0)
anomaly_score = pd.concat([anomaly_score_1, anomaly_score_2], axis=0)
data_non_sensitive['Trade_Timing'] = 'Normal Trading Time Periods'
data_sensitive['Trade_Timing'] = 'Price Sensitive Trading Time Periods'
data = pd.concat([data_non_sensitive, data_sensitive], axis=0)
anomaly_stats_feature_score.to_csv(r'C:\NICE Documents\Bank of Indonesia\anomaly_stats_feature_score.csv', index=False)
anomaly_score.to_csv(r'C:\NICE Documents\Bank of Indonesia\anomaly_score.csv', index=False)
data.round(3).to_csv(r'C:\NICE Documents\Bank of Indonesia\Feature_Stats.csv', index=False)
anomaly_stats_feature_score.dtypes
```
# Creating DataFrames
We will create `DataFrame` objects from other data structures in Python, by reading in a CSV file, and by querying a database.
## About the Data
In this notebook, we will be working with earthquake data from September 18, 2018 - October 13, 2018 (obtained from the US Geological Survey (USGS) using the [USGS API](https://earthquake.usgs.gov/fdsnws/event/1/)).
## Imports
```
import datetime
import numpy as np
import pandas as pd
```
## Creating a `Series`
```
np.random.seed(0) # set a seed for reproducibility
pd.Series(np.random.rand(5), name='random')
```
## Creating a `DataFrame` from a `Series`
Use the `to_frame()` method:
```
pd.Series(np.linspace(0, 10, num=5)).to_frame()
```
## Creating a `DataFrame` from Python Data Structures
### From a dictionary of list-like structures
The dictionary values can be lists, numpy arrays, etc. as long as they have length (generators don't have length so we can't use them here):
```
np.random.seed(0) # set seed so result is reproducible
pd.DataFrame(
{
'random': np.random.rand(5),
'text': ['hot', 'warm', 'cool', 'cold', None],
'truth': [np.random.choice([True, False]) for _ in range(5)]
},
index=pd.date_range(
end=datetime.date(2019, 4, 21),
freq='1D',
periods=5,
name='date'
)
)
```
### From a list of dictionaries
```
pd.DataFrame([
{'mag' : 5.2, 'place' : 'California'},
{'mag' : 1.2, 'place' : 'Alaska'},
{'mag' : 0.2, 'place' : 'California'},
])
```
### From a list of tuples
This is equivalent to using `pd.DataFrame.from_records()`:
```
list_of_tuples = [(n, n**2, n**3) for n in range(5)]
list_of_tuples
pd.DataFrame(
list_of_tuples,
columns=['n', 'n_squared', 'n_cubed']
)
```
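For a quick sanity check of that equivalence, the same frame can be built explicitly with `from_records()` (a self-contained sketch that redefines the list above):

```python
import pandas as pd

# Same list of tuples as in the example above
list_of_tuples = [(n, n**2, n**3) for n in range(5)]
df = pd.DataFrame.from_records(
    list_of_tuples, columns=['n', 'n_squared', 'n_cubed']
)
print(df)
```

Both constructors produce an identical `DataFrame` here; `from_records()` just makes the intent explicit.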
### From a NumPy array
```
pd.DataFrame(
np.array([
[0, 0, 0],
[1, 1, 1],
[2, 4, 8],
[3, 9, 27],
[4, 16, 64]
]), columns=['n', 'n_squared', 'n_cubed']
)
```
## Creating a `DataFrame` by Reading in a CSV File
### Finding information on the file before reading it in
Before attempting to read in a file, we can use the command line to see important information about the file that may determine how we read it in. We can run command line code from Jupyter Notebooks (thanks to IPython) by using `!` before the code.
#### Number of lines (row count)
For example, we can find out how many lines are in the file by using the `wc` utility (word count) and counting lines in the file (`-l`). The file has 9,333 lines:
```
!wc -l data/earthquakes.csv
```
#### File size
We can find the file size by using `ls` to list the files in the `data` directory, and passing in the flags `-lh` to include the file size in human readable format. Then we use `grep` to find the file in question. Note that the `|` passes the result of `ls` to `grep`. `grep` is used for finding items that match patterns.
This tells us the file is 3.4 MB:
```
!ls -lh data | grep earthquakes.csv
```
We can even capture the result of a command and use it in our Python code:
```
files = !ls -lh data
[file for file in files if 'earthquake' in file]
```
#### Examining a few rows
We can use `head` to look at the top `n` rows of the file. With the `-n` flag, we can specify how many. This shows us that the first row of the file contains headers and that it is comma-separated (just because the file extension is `.csv` doesn't mean the values have to be comma-separated):
```
!head -n 2 data/earthquakes.csv
```
Just like `head` gives rows from the top, `tail` gives rows from the bottom. This can help us check that there is no extraneous data at the bottom of the file, like perhaps some metadata about the fields that isn't actually part of the data set:
```
!tail -n 2 data/earthquakes.csv
```
#### Column count
We can use `awk` to find the column count. This is a Linux utility for pattern scanning and processing. The `-F` flag allows us to specify the delimiter (comma, in this case). Then we specify what to do for each record in the file. We choose to print `NF` which is a predefined variable whose value is the number of fields in the current record. Here, we say `exit` so that we print the number of fields (columns, here) in the first row of the file, then we stop.
This tells us we have 26 data columns:
```
!awk -F',' '{print NF; exit}' data/earthquakes.csv
```
Since we know the first line of the file has headers, and the file is comma-separated, we can also count the columns by using `head` to get the headers and parsing them in Python:
```
headers = !head -n 1 data/earthquakes.csv
len(headers[0].split(','))
```
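The same checks can also be done without shell utilities, using only the Python standard library. The sketch below runs on a made-up miniature of the file; the real `data/earthquakes.csv` is assumed to have the same layout:

```python
import csv
import io

# Miniature stand-in for data/earthquakes.csv (made-up rows, for illustration)
sample = "mag,place,time\n5.2,California,2018-09-18\n1.2,Alaska,2018-09-19\n"

reader = csv.reader(io.StringIO(sample))
headers = next(reader)           # the first row holds the column names
n_rows = sum(1 for _ in reader)  # the remaining rows are data
print(len(headers), n_rows)
```

For the real file, you would replace `io.StringIO(sample)` with `open('data/earthquakes.csv')`.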
### Reading in the file
Our file is small in size, has headers in the first row, and is comma-separated, so we don't need to provide any additional arguments to read in the file with `pd.read_csv()`, but be sure to check the [documentation](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) for possible arguments:
```
df = pd.read_csv('data/earthquakes.csv')
```
## Writing a `DataFrame` to a CSV File
Note that the index of `df` is just row numbers, so we don't want to keep it.
```
df.to_csv('output.csv', index=False)
```
## Writing a `DataFrame` to a Database
Note the `if_exists` parameter. By default, `to_sql()` will raise an error if you try to write a table that already exists. Here, we don't care if the table is overwritten, so we pass `'replace'`; if we were interested in appending new rows instead, we would pass `'append'`.
```
import sqlite3
with sqlite3.connect('data/quakes.db') as connection:
pd.read_csv('data/tsunamis.csv').to_sql(
'tsunamis', connection, index=False, if_exists='replace'
)
```
## Creating a `DataFrame` by Querying a Database
We are using a SQLite database here, which pandas supports out of the box; to connect to other database flavors you need to install [SQLAlchemy](https://www.sqlalchemy.org/).
```
import sqlite3
with sqlite3.connect('data/quakes.db') as connection:
tsunamis = pd.read_sql('SELECT * FROM tsunamis', connection)
tsunamis.head()
```
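Queries can also be parameterized so that values are bound safely rather than formatted into the SQL string. The sketch below uses an in-memory database with made-up rows (not the earthquake data) purely for illustration:

```python
import sqlite3

import pandas as pd

# In-memory database with made-up rows, purely for illustration
with sqlite3.connect(':memory:') as connection:
    pd.DataFrame({
        'mag': [5.2, 1.2, 0.2],
        'place': ['California', 'Alaska', 'California']
    }).to_sql('quakes', connection, index=False)

    # Bind values via `params` instead of formatting them into the SQL string
    result = pd.read_sql(
        'SELECT * FROM quakes WHERE mag > ?', connection, params=(1.0,)
    )
print(result)
```

The `?` placeholder syntax is SQLite's; other drivers use their own placeholder style.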
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/mlops" target="_blank">top MLOps</a> repositories on GitHub
</div>
<br>
<hr>
# Set up
> Run this notebook in JupyterLab inside your virtual environment, because we depend on preinstalled packages and our datasets. We don't need Google Colab here because we don't need a GPU.
```
# High quality plots
%config InlineBackend.figure_format = "svg"
# Install Alibi (Seldon)
!pip install alibi-detect==0.6.2 -q
```
# Performance
Illustrating the need to monitor metrics at various window sizes to catch performance degradation as soon as possible. Here we're monitoring an overall metric but we can do the same for slices of data, individual classes, etc. For example, if we monitor the performance on a specific tag, we may be able to quickly catch new algorithms that were released for that tag (ex. new transformer architecture).
```
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_theme()
# Generate data
hourly_f1 = list(np.random.uniform(94, 98, 24*20)) + \
list(np.random.uniform(92, 96, 24*5)) + \
list(np.random.uniform(88, 96, 24*5)) + \
list(np.random.uniform(86, 92, 24*5))
# Rolling f1
rolling_f1 = [np.mean(hourly_f1[:n]) for n in range(1, len(hourly_f1)+1)]
print (f"Average rolling f1 on the last day: {np.mean(rolling_f1[-24:]):.1f}")
# Window f1
window_size = 24
window_f1 = np.convolve(hourly_f1, np.ones(window_size)/window_size, mode="valid")
print (f"Average window f1 on the last day: {np.mean(window_f1[-24:]):.1f}")
plt.ylim([80, 100])
plt.hlines(y=90, xmin=0, xmax=len(hourly_f1), colors="blue", linestyles="dashed", label="threshold")
plt.plot(rolling_f1, label="rolling")
plt.plot(window_f1, label="window")
plt.legend()
# Generate data
gradual = list(np.random.uniform(96, 98, 20)) + \
list(np.random.uniform(94, 96, 20)) + \
list(np.random.uniform(92, 94, 20)) + \
list(np.random.uniform(90, 92, 20))
abrupt = list(np.random.uniform(92, 94, 20)) + \
list(np.random.uniform(90, 92, 20)) + \
list(np.random.uniform(78, 80, 40))
periodic = list(np.random.uniform(87, 89, 15)) + \
list(np.random.uniform(81, 85, 25)) + \
list(np.random.uniform(87, 89, 15)) + \
list(np.random.uniform(81, 85, 25))
plt.ylim([70, 100])
plt.plot(gradual, label="gradual")
plt.plot(abrupt, label="abrupt")
plt.plot(periodic, label="periodic")
plt.legend()
```
# Data
```
import great_expectations as ge
import pandas as pd
from pathlib import Path
from config import config
from tagifai import utils
# Create DataFrame
features_fp = Path(config.DATA_DIR, "features.json")
features = utils.load_dict(filepath=features_fp)
df = ge.dataset.PandasDataset(features)
```
# Expectations
Rule-based expectations that must pass.
```
# Simulated production data
prod_df = ge.dataset.PandasDataset([{"text": "hello"}, {"text": 0}, {"text": "world"}])
# Expectation suite
df.expect_column_values_to_not_be_null(column="text")
df.expect_column_values_to_be_of_type(column="text", type_="str")
expectation_suite = df.get_expectation_suite()
# Validate reference data
df.validate(expectation_suite=expectation_suite, only_return_failures=True)["statistics"]
# Validate production data
prod_df.validate(expectation_suite=expectation_suite, only_return_failures=True)["statistics"]
```
# Drift detection on univariate data
### Kolmogorov-Smirnov (KS) test
KS test for detecting data drift on input sequence length. We can monitor aspects of our data that aren't necessarily inputs to the model (ex. length of input text).
```
from alibi_detect.cd import KSDrift
# Reference
df["num_tags"] = df.tags.apply(lambda x: len(x))
reference = df["num_tags"][-400:-200].to_numpy()
plt.hist(reference, alpha=0.75, label="reference")
plt.legend()
plt.show()
# Initialize drift detector
length_drift_detector = KSDrift(reference, p_val=0.01)
# No drift
no_drift = df["num_tags"][-200:].to_numpy()
length_drift_detector.predict(no_drift, return_p_val=True, return_distance=True)
plt.hist(reference, alpha=0.75, label="reference")
plt.hist(no_drift, alpha=0.5, label="production")
plt.legend()
plt.show()
# Predict on no drift data
length_drift_detector.predict(no_drift, return_p_val=True, return_distance=True)
# Drift
drift = np.random.normal(10, 2, len(reference))
length_drift_detector.predict(drift, return_p_val=True, return_distance=True)
plt.hist(reference, alpha=0.75, label="reference")
plt.hist(drift, alpha=0.5, label="production")
plt.legend()
plt.show()
# Predict on drift data
length_drift_detector.predict(drift, return_p_val=True, return_distance=True)
```
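Under the hood this is a two-sample KS test, so the statistic and p-value can be reproduced with SciPy alone. The sketch below uses synthetic samples (not the `alibi-detect` API):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reference = rng.normal(5, 2, size=200)    # stands in for the reference sample
production = rng.normal(10, 2, size=200)  # shifted distribution, i.e. drift

# Two-sample KS test: small p-value => the samples come from different distributions
statistic, p_value = stats.ks_2samp(reference, production)
print(statistic, p_value)
```

A p-value below the chosen threshold (0.01 above) flags drift, mirroring the detector's decision.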
### Chi-squared test
Detecting drift on categorical variables (can be used for data or target drift).
```
from alibi_detect.cd import ChiSquareDrift
# Reference
df["tag_count"] = df.tags.apply(lambda x: "small" if len(x) <= 3 else ("medium" if len(x) <= 8 else "large"))
reference = df["tag_count"][-400:-200].to_numpy()
plt.hist(reference, alpha=0.75, label="reference")
plt.legend()
target_drift_detector = ChiSquareDrift(reference, p_val=0.01)
# No drift
no_drift = df["tag_count"][-200:].to_numpy()
plt.hist(reference, alpha=0.75, label="reference")
plt.hist(no_drift, alpha=0.5, label="production")
plt.legend()
target_drift_detector.predict(no_drift, return_p_val=True, return_distance=True)
# Drift
drift = np.array(["small"]*80 + ["medium"]*40 + ["large"]*80)
plt.hist(reference, alpha=0.75, label="reference")
plt.hist(drift, alpha=0.5, label="production")
plt.legend()
target_drift_detector.predict(drift, return_p_val=True, return_distance=True)
```
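The same decision can be sketched with SciPy's chi-squared test on the category counts directly (synthetic counts here, not the `alibi-detect` API):

```python
import numpy as np
from scipy import stats

categories = ['small', 'medium', 'large']
reference = np.array(['small'] * 100 + ['medium'] * 70 + ['large'] * 30)
production = np.array(['small'] * 80 + ['medium'] * 40 + ['large'] * 80)

# Contingency table: one row of per-category counts per sample
table = np.array([[np.sum(reference == c) for c in categories],
                  [np.sum(production == c) for c in categories]])
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(p_value)  # a small p-value indicates the category distribution shifted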
# Drift detection on multivariate data
We can't use the encoded text because each character's categorical representation is arbitrary. However, the embedded text's representation does capture semantic meaning, which makes it possible to detect drift on it. With tabular data and images, we can use the numerical representations as-is (preprocessing if needed) since the values are innately meaningful.
```
import torch
import torch.nn as nn
from tagifai import data, main
# Set device
device = utils.set_device(cuda=False)
# Load model
run_id = open(Path(config.MODEL_DIR, "run_id.txt")).read()
artifacts = main.load_artifacts(run_id=run_id)
# Retrieve artifacts
params = artifacts["params"]
label_encoder = artifacts["label_encoder"]
tokenizer = artifacts["tokenizer"]
embeddings_layer = artifacts["model"].embeddings
embedding_dim = embeddings_layer.embedding_dim
def get_data_tensor(texts):
preprocessed_texts = [data.preprocess(text, lower=params.lower, stem=params.stem) for text in texts]
X = np.array(tokenizer.texts_to_sequences(preprocessed_texts), dtype="object")
y_filler = np.zeros((len(X), len(label_encoder)))
dataset = data.CNNTextDataset(X=X, y=y_filler, max_filter_size=int(params.max_filter_size))
dataloader = dataset.create_dataloader(batch_size=len(texts))
return next(iter(dataloader))[0]
# Reference
reference = get_data_tensor(texts=df.text[-400:-200].to_list())
reference.shape
```
### Dimensionality reduction (via UAE)
```
from functools import partial
from alibi_detect.cd.pytorch import preprocess_drift
# Untrained autoencoder (UAE) reducer
enc_dim = 32
reducer = nn.Sequential(
embeddings_layer,
nn.AdaptiveAvgPool2d((1, embedding_dim)),
nn.Flatten(),
nn.Linear(embedding_dim, 256),
nn.ReLU(),
nn.Linear(256, enc_dim)
).to(device).eval()
# Preprocessing with the reducer
preprocess_fn = partial(preprocess_drift, model=reducer, batch_size=params.batch_size)
```
### Maximum Mean Discrepancy (MMD)
```
from alibi_detect.cd import MMDDrift
# Initialize drift detector
embeddings_mmd_drift_detector = MMDDrift(reference, backend="pytorch", p_val=.01, preprocess_fn=preprocess_fn)
# No drift
no_drift = get_data_tensor(texts=df.text[-200:].to_list())
embeddings_mmd_drift_detector.predict(no_drift)
# No drift (with benign injection)
texts = ["BERT " + text for text in df.text[-200:].to_list()]
drift = get_data_tensor(texts=texts)
embeddings_mmd_drift_detector.predict(drift)
# Drift
texts = ["UNK " + text for text in df.text[-200:].to_list()]
drift = get_data_tensor(texts=texts)
embeddings_mmd_drift_detector.predict(drift)
```
We could repeat this process for tensor outputs at various layers in our model (embedding, conv layers, softmax, etc.). Just keep in mind that our outputs from the reducer need to be a 2D matrix so we may need to do additional preprocessing such as pooling 3D embedding tensors.
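For instance, a 3D batch of token embeddings can be mean-pooled into the 2D matrix a detector expects; the NumPy sketch below uses made-up shapes:

```python
import numpy as np

# Hypothetical batch of token embeddings: (n_samples, seq_len, embedding_dim)
embeddings_3d = np.random.randn(8, 50, 128)

# Mean-pool over the sequence axis to get the 2D (n_samples, embedding_dim)
# matrix that a drift detector can consume
pooled_2d = embeddings_3d.mean(axis=1)
print(pooled_2d.shape)
```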
> [TorchDrift](https://torchdrift.org/) is another great package that offers a suite of reducers (PCA, AE, etc.) and drift detectors (MMD) to monitor for drift at any stage in our model.
### Kolmogorov-Smirnov (KS) test + Bonferroni correction
```
# Initialize drift detector
embeddings_ks_drift_detector = KSDrift(reference, p_val=.01, preprocess_fn=preprocess_fn, correction="bonferroni")
# No drift
no_drift = get_data_tensor(texts=df.text[-200:].to_list())
embeddings_ks_drift_detector.predict(no_drift)
# Drift
texts = ["UNK " + text for text in df.text[-200:].to_list()]
drift = get_data_tensor(texts=texts)
embeddings_ks_drift_detector.predict(drift)
```
Note that each feature (`enc_dim`=32) has a distance and an associated p-value.
[Index](Index.ipynb) - [Back](Output Widget.ipynb) - [Next](Widget Styling.ipynb)
# Widget Events
## Special events
The `Button` is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The `on_click` method of the `Button` can be used to register a function to be called when the button is clicked. The doc string of `on_click` can be seen below.
```
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
```
### Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the `on_click` method, a button that prints a message when it has been clicked is shown below. To capture `print`s (or any other kind of output) and ensure it is displayed, be sure to send it to an `Output` widget (or put the information you want to display into an `HTML` widget).
```
from IPython.display import display
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("Button clicked.")
button.on_click(on_button_clicked)
```
## Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the `observe` method of the widget can be used to register a callback. The doc string for `observe` can be seen below.
```
print(widgets.Widget.observe.__doc__)
```
### Signatures
As mentioned in the doc string, the registered callback must have the signature `handler(change)`, where `change` is a dictionary holding the information about the change.
Using this method, an example of how to output an `IntSlider`'s value as it is changed can be seen below.
```
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
with output2:
print(change['new'])
int_range.observe(on_value_change, names='value')
```
## Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
### Linking traitlets attributes in the kernel
The first method is to use the `link` and `dlink` functions from the `traitlets` module (these two functions are re-exported by the `ipywidgets` module for convenience). This only works if we are interacting with a live kernel.
```
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
slider1, slider2 = widgets.IntSlider(description='Slider 1'),\
                   widgets.IntSlider(description='Slider 2')
l = widgets.link((slider1, 'value'), (slider2, 'value'))
display(caption, slider1, slider2)
caption = widgets.Label(value='Changes in source values are reflected in target1')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
dl = widgets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
```
The functions `traitlets.link` and `traitlets.dlink` return a `Link` or `DLink` object, respectively. The link can be broken by calling the `unlink` method.
```
l.unlink()
dl.unlink()
```
### Registering callbacks to trait changes in the kernel
Since attributes of widgets on the Python side are traitlets, you can register handlers to the change events whenever the model gets updates from the front-end.
The handler passed to observe will be called with one change argument. The change object holds at least a `type` key and a `name` key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.
Other keys may be passed depending on the value of `type`. In the case where type is `change`, we also have the following keys:
- `owner` : the HasTraits instance
- `old` : the old value of the modified trait attribute
- `new` : the new value of the modified trait attribute
- `name` : the name of the modified trait attribute.
```
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
```
### Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag due to the latency of the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
Javascript links persist when embedding widgets in html web pages without a kernel.
```
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
```
The function `widgets.jslink` returns a `Link` widget. The link can be broken by calling the `unlink` method.
```
# l.unlink()
# dl.unlink()
```
### The difference between linking in the kernel and linking in the client
Linking in the kernel means linking via python. If two sliders are linked in the kernel, when one slider is changed the browser sends a message to the kernel (python in this case) updating the changed slider, the link widget in the kernel then propagates the change to the other slider object in the kernel, and then the other slider's kernel object sends a message to the browser to update the other slider's views in the browser. If the kernel is not running (as in a static web page), then the controls will not be linked.
Linking using jslink (i.e., on the browser side) means constructing the link in Javascript. When one slider is changed, Javascript running in the browser changes the value of the other slider in the browser, without needing to communicate with the kernel at all. If the sliders are attached to kernel objects, each slider updates its kernel-side object independently.
To see the difference between the two, go to the [static version of this page in the ipywidgets documentation](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html) and try out the sliders near the bottom. The ones linked in the kernel with `link` and `dlink` are no longer linked, but the ones linked in the browser with `jslink` and `jsdlink` are still linked.
## Continuous updates
Some widgets offer a choice with their `continuous_update` attribute between continually updating values or only updating values when a user submits the value (for example, by pressing Enter or navigating away from the control). In the next example, we see the "Delayed" controls only transmit their value after the user finishes dragging the slider or submitting the textbox. The "Continuous" controls continually transmit their values as they are changed. Try typing a two-digit number into each of the text boxes, or dragging each of the sliders, to see the difference.
```
a = widgets.IntSlider(description="Delayed", continuous_update=False)
b = widgets.IntText(description="Delayed", continuous_update=False)
c = widgets.IntSlider(description="Continuous", continuous_update=True)
d = widgets.IntText(description="Continuous", continuous_update=True)
widgets.link((a, 'value'), (b, 'value'))
widgets.link((a, 'value'), (c, 'value'))
widgets.link((a, 'value'), (d, 'value'))
widgets.VBox([a,b,c,d])
```
Sliders, `Text`, and `Textarea` controls default to `continuous_update=True`. `IntText` and other text boxes for entering integer or float numbers default to `continuous_update=False` (since often you'll want to type an entire number before submitting the value by pressing enter or navigating out of the box).
## Debouncing
When trait changes trigger a callback that performs a heavy computation, you may want to not do the computation as often as the value is updated. For instance, if the trait is driven by a slider which has its `continuous_update` set to `True`, the user will trigger a bunch of computations in rapid succession.
Debouncing solves this problem by delaying callback execution until the value has not changed for a certain time, after which the callback is called with the latest value. The effect is that the callback is only called when the trait pauses changing for a certain amount of time.
Debouncing can be implemented using an asynchronous loop or threads. We show an asynchronous solution below, which is more suited for ipywidgets. If you would like to instead use threads to do the debouncing, replace the `Timer` class with `from threading import Timer`.
```
import asyncio
class Timer:
def __init__(self, timeout, callback):
self._timeout = timeout
self._callback = callback
self._task = asyncio.ensure_future(self._job())
async def _job(self):
await asyncio.sleep(self._timeout)
self._callback()
def cancel(self):
self._task.cancel()
def debounce(wait):
""" Decorator that will postpone a function's
execution until after `wait` seconds
have elapsed since the last time it was invoked. """
def decorator(fn):
timer = None
def debounced(*args, **kwargs):
nonlocal timer
def call_it():
fn(*args, **kwargs)
if timer is not None:
timer.cancel()
timer = Timer(wait, call_it)
return debounced
return decorator
```
Here is how we use the `debounce` function as a decorator. Try changing the value of the slider. The text box will only update after the slider has paused for about 0.2 seconds.
```
slider = widgets.IntSlider()
text = widgets.IntText()
@debounce(0.2)
def value_changed(change):
text.value = change.new
slider.observe(value_changed, 'value')
widgets.VBox([slider, text])
```
## Throttling
Throttling is another technique that can be used to limit callbacks. Whereas debouncing ignores calls to a function if a certain amount of time has not passed since the last (attempted) call, throttling simply limits the rate of calls. This ensures that the function is called regularly.
We again show an asynchronous solution below. Likewise, you can replace the `Timer` class with `from threading import Timer` if you want to use threads instead of asynchronous programming.
```
import asyncio
from time import time
def throttle(wait):
""" Decorator that prevents a function from being called
more than once every wait period. """
def decorator(fn):
time_of_last_call = 0
scheduled = False
new_args, new_kwargs = None, None
def throttled(*args, **kwargs):
nonlocal new_args, new_kwargs, time_of_last_call, scheduled
def call_it():
nonlocal new_args, new_kwargs, time_of_last_call, scheduled
time_of_last_call = time()
fn(*new_args, **new_kwargs)
scheduled = False
time_since_last_call = time() - time_of_last_call
new_args = args
new_kwargs = kwargs
if not scheduled:
new_wait = max(0, wait - time_since_last_call)
Timer(new_wait, call_it)
scheduled = True
return throttled
return decorator
```
To see how different it behaves compared to the debouncer, here is the same slider example with its throttled value displayed in the text box. Notice how much more interactive it is, while still limiting the callback rate.
```
slider = widgets.IntSlider()
text = widgets.IntText()
@throttle(0.2)
def value_changed(change):
text.value = change.new
slider.observe(value_changed, 'value')
widgets.VBox([slider, text])
```
# Figure 8: “Easy” and “hard” PSDs
```
from pathlib import Path
import matplotlib.pyplot as plt
import mne
import numpy as np
import yaml
from fooof import FOOOF
from fooof.sim.gen import gen_aperiodic
from mne.time_frequency import psd_welch
from utils import annotate_range, irasa, detect_plateau_onset
```
#### Load params and make directory
```
with open('params.yml') as yaml_file:
    parsed_yaml_file = yaml.load(yaml_file, Loader=yaml.FullLoader)
globals().update(parsed_yaml_file)
Path(fig_path).mkdir(parents=True, exist_ok=True)
```
#### Load empirical data of dataset 1 and calc PSD
```
# Load data
data_path = "../data/Fig8/"
sub5 = mne.io.read_raw_fif(data_path + "subj5_on_R1_raw.fif", preload=True)
sub9 = mne.io.read_raw_fif(data_path + "subj9_on_R8_raw.fif", preload=True)
ch5 = "SMA"
ch9 = "STN_R01"
sub5.pick_channels([ch5])
sub9.pick_channels([ch9])
# here it is fine to notch filter at 50 Hz because the filter does not
# cause a dip in the PSD
filter_params = {"freqs": np.arange(50, 601, 50),
"notch_widths": 0.1,
"method": "spectrum_fit"}
sub5.notch_filter(**filter_params)
sub9.notch_filter(**filter_params)
sample_rate = 2400
# Calc PSD
welch_params_b = {"fmin": 1,
"fmax": 600,
"tmin": 0.5,
"tmax": 185,
"n_fft": sample_rate,
"n_overlap": sample_rate // 2,
"average": "mean"}
PSD_sub5, freq = psd_welch(sub5, **welch_params_b)
PSD_sub9, freq = psd_welch(sub9, **welch_params_b)
PSD_sub5 = PSD_sub5[0]
PSD_sub9 = PSD_sub9[0]
```
#### Apply fooof
```
# %% Fooof fit
freq_range = [1, 95]
fooof_params = dict(verbose=False, peak_width_limits=(0.5, 150))
fm_sub5 = FOOOF(**fooof_params)
fm_sub9 = FOOOF(**fooof_params)
# Empirical fit
fm_sub5.fit(freq, PSD_sub5, freq_range)
fm_sub9.fit(freq, PSD_sub9, freq_range)
# Extract aperiodic PSD from empirical fit
ap_fooof_fit_sub5 = gen_aperiodic(fm_sub5.freqs, fm_sub5.aperiodic_params_)
ap_fooof_fit_sub9 = gen_aperiodic(fm_sub9.freqs, fm_sub9.aperiodic_params_)
exponent_sub5 = fm_sub5.aperiodic_params_[1]
exponent_sub9 = fm_sub9.aperiodic_params_[1]
# Get peak params
(center_freq_sub5_1,
peak_power_sub5_1,
peak_width_sub5_1) = fm_sub5.peak_params_[1]
(center_freq_sub5_2,
peak_power_sub5_2,
peak_width_sub5_2) = fm_sub5.peak_params_[2]
(center_freq_sub5_3,
peak_power_sub5_3,
peak_width_sub5_3) = fm_sub5.peak_params_[3]
(center_freq_sub9_1,
peak_power_sub9_1,
peak_width_sub9_1) = fm_sub9.peak_params_[0]
(center_freq_sub9_2,
peak_power_sub9_2,
peak_width_sub9_2) = fm_sub9.peak_params_[1]
```
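In FOOOF's default `fixed` mode, the aperiodic component returned by `gen_aperiodic` is a straight line in log-log space, `log10(PSD) = offset - exponent * log10(f)`. A minimal numpy re-implementation of that formula (a sketch of the fixed mode only; the `knee` mode adds an extra bend term):

```python
import numpy as np

def aperiodic_fixed(freqs, offset, exponent):
    """log10 of the aperiodic PSD in FOOOF's 'fixed' (no-knee) mode."""
    return offset - exponent * np.log10(freqs)

freqs = np.linspace(1, 95, 95)
ap = aperiodic_fixed(freqs, offset=2.0, exponent=1.5)

# In log-log coordinates the component is exactly linear,
# with slope equal to minus the exponent:
slope = np.polyfit(np.log10(freqs), ap, 1)[0]
print(round(-slope, 2))  # 1.5
```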
#### Connect a straight line between 1 Hz and 95 Hz
```
# straight "fit"
DeltaX = np.log10(np.diff(freq_range)[0])
offset_sub5 = np.log10(PSD_sub5[freq == freq_range[0]][0])
endpoint_sub5 = np.log10(PSD_sub5[freq == freq_range[1]][0])
DeltaY_sub5 = offset_sub5 - endpoint_sub5
offset_sub9 = np.log10(PSD_sub9[freq == freq_range[0]][0])
endpoint_sub9 = np.log10(PSD_sub9[freq == freq_range[1]][0])
DeltaY_sub9 = offset_sub9 - endpoint_sub9
exponent_sub5_straight = DeltaY_sub5 / DeltaX
exponent_sub9_straight = DeltaY_sub9 / DeltaX
ap_straight_fit_sub5 = gen_aperiodic(fm_sub5.freqs,
np.array([offset_sub5,
exponent_sub5_straight]))
ap_straight_fit_sub9 = gen_aperiodic(fm_sub9.freqs,
np.array([offset_sub9,
exponent_sub9_straight]))
```
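As a sanity check, on a pure power-law PSD proportional to f^-beta this two-point estimate recovers beta exactly when DeltaX is taken as log10(f_max / f_min); with f_min = 1 Hz this is almost identical to the log10 of the frequency span used above. A small synthetic example:

```python
import numpy as np

beta_true = 1.5
freq = np.arange(1, 96)                        # 1..95 Hz
psd = 100 * freq.astype(float) ** -beta_true   # pure 1/f^beta spectrum

f_lo, f_hi = 1, 95
delta_x = np.log10(f_hi / f_lo)
delta_y = np.log10(psd[freq == f_lo][0]) - np.log10(psd[freq == f_hi][0])
beta_est = delta_y / delta_x
print(round(beta_est, 3))  # 1.5
```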
#### Apply IRASA
```
# get timeseries data
get_data = dict(start=sample_rate//2, stop=sample_rate*180,
reject_by_annotation="NaN")
irasa_sub5 = sub5.get_data(**get_data)
irasa_sub9 = sub9.get_data(**get_data)
# apply IRASA and unpack
freq_I, _, _, irasa_params_sub5 = irasa(irasa_sub5, band=freq_range,
sf=sample_rate)
_, _, _, irasa_params_sub9 = irasa(irasa_sub9, band=freq_range, sf=sample_rate)
# extract results
irasa_offset_sub5 = irasa_params_sub5["Intercept"][0]
irasa_offset_sub9 = irasa_params_sub9["Intercept"][0]
exponent_irasa_sub5 = -irasa_params_sub5["Slope"][0]
exponent_irasa_sub9 = -irasa_params_sub9["Slope"][0]
# Generate 1/f based on results
ap_irasa_fit_sub5 = gen_aperiodic(freq_I,
np.array([irasa_offset_sub5,
exponent_irasa_sub5]))
ap_irasa_fit_sub9 = gen_aperiodic(freq_I,
np.array([irasa_offset_sub9,
exponent_irasa_sub9]))
# pack lines for plotting
PSD_plot_sub5 = (freq, PSD_sub5, c_real)
PSD_plot_sub9 = (freq, PSD_sub9, c_real)
fooof_plot_sub5 = (fm_sub5.freqs, 10**ap_fooof_fit_sub5, c_fit_fooof)
fooof_plot_sub9 = (fm_sub9.freqs, 10**ap_fooof_fit_sub9, c_fit_fooof)
straight_plot_sub5 = (fm_sub5.freqs, 10**ap_straight_fit_sub5, c_fit_straight)
straight_plot_sub9 = (fm_sub9.freqs, 10**ap_straight_fit_sub9, c_fit_straight)
irasa_plot_sub5 = (freq_I, 10**ap_irasa_fit_sub5, c_fit_irasa)
irasa_plot_sub9 = (freq_I, 10**ap_irasa_fit_sub9, c_fit_irasa)
```
#### Plot settings
```
panel_labels = dict(x=0, y=1.02, fontsize=panel_fontsize,
fontdict=dict(fontweight="bold"))
panel_description = dict(x=0, y=1.02, fontsize=panel_fontsize)
```
# Figure 8
```
fig, ax = plt.subplots(2, 2, figsize=(fig_width, 5), sharey="row")
ax[0, 0].text(s=' "Easy" spectrum', **panel_description,
transform=ax[0, 0].transAxes)
ax[1, 0].text(s=' "Hard" spectrum', **panel_description,
transform=ax[1, 0].transAxes)
# lin
ax[0, 0].semilogy(*PSD_plot_sub5, label="Sub 5 MEG") # + ch5)
ax[1, 0].semilogy(*PSD_plot_sub9, label="Sub 9 LFP") # + ch9)
# log
ax[0, 1].loglog(*PSD_plot_sub5, label="Sub 5 MEG")
ax[1, 1].loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax[0, 1].loglog(*fooof_plot_sub5,
label=rf"FOOOF $\beta=${exponent_sub5:.2f}")
ax[1, 1].loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax[0, 1].loglog(*straight_plot_sub5,
label=rf"straight $\beta=${exponent_sub5_straight:.2f}")
ax[1, 1].loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax[0, 1].loglog(*irasa_plot_sub5,
label=rf"IRASA $\beta=${exponent_irasa_sub5:.2f}")
ax[1, 1].loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
for axes in ax.flatten():
axes.spines["top"].set_visible(False)
axes.spines["right"].set_visible(False)
# Legend
handles, labels = ax[0, 1].get_legend_handles_labels()
ax[0, 0].legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9)
handles, labels = ax[1, 1].get_legend_handles_labels()
ax[1, 0].legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9)
# Add Plateau rectangle
ylim_b = (5e-3, 6)
xlim_b = ax[1, 0].get_xlim()
noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
rec_xy = (noise_start, ylim_b[0])
rec_width = freq[-1] - noise_start
rec_height = np.diff(ylim_b)[0]
rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
alpha=.15, color="r")
ax[1, 1].add_patch(plt.Rectangle(**rect_c))
# Add Plateau annotation
ax[1, 1].hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
linewidth=1)
ax[1, 1].annotate(s="Early\nPlateau\nonset",
xy=(noise_start, PSD_sub9[noise_start]),
xytext=(noise_start, PSD_sub9[noise_start]*20),
arrowprops=dict(arrowstyle="->", shrinkB=5),
color="k", fontsize=8,
ha="left",
verticalalignment="center")
# Add Peak width annotation
height1 = 100
xmin1 = center_freq_sub5_1 - peak_width_sub5_1
xmax1 = center_freq_sub5_1 + peak_width_sub5_1
annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# Add Peak width annotation
height1 = .029
height2 = 0.009
xmin1 = center_freq_sub9_1 - peak_width_sub9_1
xmax1 = center_freq_sub9_1 + peak_width_sub9_1
xmin2 = center_freq_sub9_2 - peak_width_sub9_2
xmax2 = center_freq_sub9_2 + peak_width_sub9_2
annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
# Add indication of peak overlap as vertical arrow
overlap = 15
arr_height = 1
ax[1, 1].annotate(s="", xy=(overlap, PSD_sub9[overlap]),
xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
arrowprops=dict(arrowstyle="<->"))
ax[1, 1].annotate(s="", xy=(center_freq_sub9_1, arr_height),
xytext=(center_freq_sub9_2, arr_height),
arrowprops=dict(arrowstyle="<->"))
ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
va="center", fontsize=8)
ax[1, 1].text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
va="top", fontsize=8)
# Annotate orders of magnitude
diff5 = PSD_sub5[0] / PSD_sub5.min()
ord_magn5 = int(np.round(np.log10(diff5)))
x_line = -25
ax[0, 0].annotate(s="",
xy=(x_line, PSD_sub5[0]),
xytext=(x_line, PSD_sub5.min()),
arrowprops=dict(arrowstyle="|-|,widthA=.5,widthB=.5",
lw=1.3),
ha="center")
ax[0, 0].text(s=rf"$\Delta PSD\approx 10^{{{ord_magn5}}}$", x=30,
y=np.sqrt(PSD_sub5[0]*PSD_sub5[-1]), va="center", fontsize=8)
diff9 = PSD_sub9[0] / PSD_sub9.min()
ord_magn9 = int(np.round(np.log10(diff9)))
x_line = -25
ax[1, 0].annotate(s="",
xy=(x_line, PSD_sub9[0]),
xytext=(x_line, PSD_sub9.min()),
arrowprops=dict(arrowstyle="|-|,widthA=.5,widthB=.5",
lw=1.3), ha="center")
ax[1, 0].text(s=rf"$\Delta PSD\approx 10^{{{ord_magn9}}}$", x=55,
y=np.sqrt(PSD_sub9[0]*PSD_sub9[-1]), va="center", fontsize=8)
xlim5 = ax[0, 0].get_xlim()
xlim9 = ax[1, 0].get_xlim()
ax[0, 0].set(xlabel=None, ylabel="A.U. Voxel Data", xlim=(-50, xlim5[1]))
ax[1, 0].set(xlabel=None, ylabel=None, xlim=(-50, xlim9[1]))
ax[1, 0].set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]")
ax[1, 1].set(xlabel="Frequency [Hz]", ylabel=None, ylim=ylim_b)
ax[0, 0].text(s="a", **panel_labels, transform=ax[0, 0].transAxes)
ax[1, 0].text(s="b", **panel_labels, transform=ax[1, 0].transAxes)
plt.tight_layout()
plt.savefig(fig_path + "Fig8.pdf", bbox_inches="tight")
plt.savefig(fig_path + "Fig8.png", dpi=1000, bbox_inches="tight")
plt.show()
```
# Appendix: Figure for presentation
```
fig, ax = plt.subplots(2, 2, figsize=(fig_width, 5), sharey="row")
ax[0, 0].text(s=' "Easy" spectrum', **panel_description,
transform=ax[0, 0].transAxes)
ax[1, 0].text(s=' "Hard" spectrum', **panel_description,
transform=ax[1, 0].transAxes)
# lin
ax[0, 0].semilogy(*PSD_plot_sub5, label="Sub 5 MEG") # + ch5)
ax[1, 0].semilogy(*PSD_plot_sub9, label="Sub 9 LFP") # + ch9)
# log
ax[0, 1].loglog(*PSD_plot_sub5, label="Sub 5 MEG")
ax[1, 1].loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax[0, 1].loglog(*fooof_plot_sub5,
label=rf"FOOOF $\beta=${exponent_sub5:.2f}")
ax[1, 1].loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax[0, 1].loglog(*straight_plot_sub5,
label=rf"straight $\beta=${exponent_sub5_straight:.2f}")
ax[1, 1].loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax[0, 1].loglog(*irasa_plot_sub5,
label=rf"IRASA $\beta=${exponent_irasa_sub5:.2f}")
ax[1, 1].loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
for axes in ax.flatten():
axes.spines["top"].set_visible(False)
axes.spines["right"].set_visible(False)
# Legend
handles, labels = ax[0, 1].get_legend_handles_labels()
ax[0, 0].legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9)
handles, labels = ax[1, 1].get_legend_handles_labels()
ax[1, 0].legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9)
############################################
# ## Challenge 1:
# # Add Plateau rectangle
# ylim_b = (5e-3, 6)
# xlim_b = ax[1, 0].get_xlim()
# noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
# rec_xy = (noise_start, ylim_b[0])
# rec_width = freq[-1] - noise_start
# rec_height = np.diff(ylim_b)[0]
# rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
# alpha=.15, color="r")
# ax[1, 1].add_patch(plt.Rectangle(**rect_c))
# # Add Plateau annotation
# ax[1, 1].hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
# linewidth=1)
# ax[1, 1].annotate(s="Early\nPlateau\nonset",
# xy=(noise_start, PSD_sub9[noise_start]),
# xytext=(noise_start, PSD_sub9[noise_start]*20),
# arrowprops=dict(arrowstyle="->", shrinkB=5),
# color="k", fontsize=8,
# ha="left",
# verticalalignment="center")
############################################
############################################
# # Challenge 2:
# # Add indication of peak overlap as vertical arrow
# overlap = 15
# arr_height = 1
# ax[1, 1].annotate(s="", xy=(overlap, PSD_sub9[overlap]),
# xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].annotate(s="", xy=(center_freq_sub9_1, arr_height),
# xytext=(center_freq_sub9_2, arr_height),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
# va="top", fontsize=8)
############################################
############################################
# # Challenge 3:
# ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
# va="center", fontsize=8)
# # Add Peak width annotation
# height1 = 100
# xmin1 = center_freq_sub5_1 - peak_width_sub5_1
# xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# # Add Peak width annotation
# height1 = .029
# height2 = 0.009
# xmin1 = center_freq_sub9_1 - peak_width_sub9_1
# xmax1 = center_freq_sub9_1 + peak_width_sub9_1
# xmin2 = center_freq_sub9_2 - peak_width_sub9_2
# xmax2 = center_freq_sub9_2 + peak_width_sub9_2
# annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
# annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
############################################
# Annotate orders of magnitude
diff5 = PSD_sub5[0] / PSD_sub5.min()
ord_magn5 = int(np.round(np.log10(diff5)))
x_line = -25
ax[0, 0].annotate(s="",
xy=(x_line, PSD_sub5[0]),
xytext=(x_line, PSD_sub5.min()),
arrowprops=dict(arrowstyle="|-|,widthA=.5,widthB=.5",
lw=1.3),
ha="center")
ax[0, 0].text(s=rf"$\Delta PSD\approx 10^{{{ord_magn5}}}$", x=30,
y=np.sqrt(PSD_sub5[0]*PSD_sub5[-1]), va="center", fontsize=8)
diff9 = PSD_sub9[0] / PSD_sub9.min()
ord_magn9 = int(np.round(np.log10(diff9)))
x_line = -25
ax[1, 0].annotate(s="",
xy=(x_line, PSD_sub9[0]),
xytext=(x_line, PSD_sub9.min()),
arrowprops=dict(arrowstyle="|-|,widthA=.5,widthB=.5",
lw=1.3), ha="center")
ax[1, 0].text(s=rf"$\Delta PSD\approx 10^{{{ord_magn9}}}$", x=55,
y=np.sqrt(PSD_sub9[0]*PSD_sub9[-1]), va="center", fontsize=8)
xlim5 = ax[0, 0].get_xlim()
xlim9 = ax[1, 0].get_xlim()
ax[0, 0].set(xlabel=None, ylabel="A.U. Voxel Data", xlim=(-50, xlim5[1]))
ax[1, 0].set(xlabel=None, ylabel=None, xlim=(-50, xlim9[1]))
ax[1, 0].set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]")
ax[1, 1].set(xlabel="Frequency [Hz]", ylabel=None, ylim=ylim_b)
ax[0, 0].text(s="a", **panel_labels, transform=ax[0, 0].transAxes)
ax[1, 0].text(s="b", **panel_labels, transform=ax[1, 0].transAxes)
plt.tight_layout()
plt.savefig(fig_path + "Fig8_Ch0.pdf", bbox_inches="tight")
plt.savefig(fig_path + "Fig8_Ch0.png", dpi=1000, bbox_inches="tight")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(fig_width*4/5, 5*2/3), sharey="row")
ax.loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax.loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax.loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax.loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# Legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9, ncol=1,
borderaxespad=0, loc="upper right")
############################################
# ## Challenge 1:
# # Add Plateau rectangle
# ylim_b = (5e-3, 6)
# xlim_b = ax[1, 0].get_xlim()
# noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
# rec_xy = (noise_start, ylim_b[0])
# rec_width = freq[-1] - noise_start
# rec_height = np.diff(ylim_b)[0]
# rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
# alpha=.15, color="r")
# ax[1, 1].add_patch(plt.Rectangle(**rect_c))
# # Add Plateau annotation
# ax[1, 1].hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
# linewidth=1)
# ax[1, 1].annotate(s="Early\nPlateau\nonset",
# xy=(noise_start, PSD_sub9[noise_start]),
# xytext=(noise_start, PSD_sub9[noise_start]*20),
# arrowprops=dict(arrowstyle="->", shrinkB=5),
# color="k", fontsize=8,
# ha="left",
# verticalalignment="center")
############################################
############################################
# # Challenge 2:
# # Add indication of peak overlap as vertical arrow
# overlap = 15
# arr_height = 1
# ax[1, 1].annotate(s="", xy=(overlap, PSD_sub9[overlap]),
# xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].annotate(s="", xy=(center_freq_sub9_1, arr_height),
# xytext=(center_freq_sub9_2, arr_height),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
# va="top", fontsize=8)
############################################
############################################
# # Challenge 3:
# ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
# va="center", fontsize=8)
# # Add Peak width annotation
# height1 = 100
# xmin1 = center_freq_sub5_1 - peak_width_sub5_1
# xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# # Add Peak width annotation
# height1 = .029
# height2 = 0.009
# xmin1 = center_freq_sub9_1 - peak_width_sub9_1
# xmax1 = center_freq_sub9_1 + peak_width_sub9_1
# xmin2 = center_freq_sub9_2 - peak_width_sub9_2
# xmax2 = center_freq_sub9_2 + peak_width_sub9_2
# annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
# annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
############################################
ax.set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]", ylim=ylim_b)
plt.tight_layout()
plt.savefig(fig_path + "Fig8_Ch0.pdf", bbox_inches="tight")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(fig_width*4/5, 5*2/3), sharey="row")
ax.loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax.loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax.loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax.loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# Legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9, ncol=1,
borderaxespad=0, loc="upper right")
############################################
## Challenge 1:
# Add Plateau rectangle
ylim_b = (5e-3, 6)
xlim_b = ax.get_xlim()
noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
rec_xy = (noise_start, ylim_b[0])
rec_width = freq[-1] - noise_start
rec_height = np.diff(ylim_b)[0]
rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
alpha=.15, color="r")
ax.add_patch(plt.Rectangle(**rect_c))
# Add Plateau annotation
ax.hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
linewidth=1)
ax.annotate(s="Early\nPlateau\nonset",
xy=(noise_start, PSD_sub9[noise_start]),
xytext=(noise_start, PSD_sub9[noise_start]*20),
arrowprops=dict(arrowstyle="->", shrinkB=5),
color="k", fontsize=8,
ha="left",
verticalalignment="center")
############################################
############################################
# # Challenge 2:
# # Add indication of peak overlap as vertical arrow
# overlap = 15
# arr_height = 1
# ax[1, 1].annotate(s="", xy=(overlap, PSD_sub9[overlap]),
# xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].annotate(s="", xy=(center_freq_sub9_1, arr_height),
# xytext=(center_freq_sub9_2, arr_height),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
# va="top", fontsize=8)
############################################
############################################
# # Challenge 3:
# ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
# va="center", fontsize=8)
# # Add Peak width annotation
# height1 = 100
# xmin1 = center_freq_sub5_1 - peak_width_sub5_1
# xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# # Add Peak width annotation
# height1 = .029
# height2 = 0.009
# xmin1 = center_freq_sub9_1 - peak_width_sub9_1
# xmax1 = center_freq_sub9_1 + peak_width_sub9_1
# xmin2 = center_freq_sub9_2 - peak_width_sub9_2
# xmax2 = center_freq_sub9_2 + peak_width_sub9_2
# annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
# annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
############################################
ax.set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]", ylim=ylim_b)
plt.tight_layout()
plt.savefig(fig_path + "Fig8_Ch1.pdf", bbox_inches="tight")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(fig_width*4/5, 5*2/3), sharey="row")
ax.loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax.loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax.loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax.loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# Legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9, ncol=1,
borderaxespad=0, loc="upper right")
############################################
## Challenge 1:
# Add Plateau rectangle
ylim_b = (5e-3, 6)
xlim_b = ax.get_xlim()
noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
rec_xy = (noise_start, ylim_b[0])
rec_width = freq[-1] - noise_start
rec_height = np.diff(ylim_b)[0]
rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
alpha=.15, color="r")
ax.add_patch(plt.Rectangle(**rect_c))
# Add Plateau annotation
ax.hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
linewidth=1)
ax.annotate(s="Early\nPlateau\nonset",
xy=(noise_start, PSD_sub9[noise_start]),
xytext=(noise_start, PSD_sub9[noise_start]*20),
arrowprops=dict(arrowstyle="->", shrinkB=5),
color="k", fontsize=8,
ha="left",
verticalalignment="center")
############################################
############################################
# Challenge 2:
# Add indication of peak overlap as vertical arrow
overlap = 15
arr_height = 1
ax.annotate(s="", xy=(overlap, PSD_sub9[overlap]),
xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
arrowprops=dict(arrowstyle="<->"))
ax.annotate(s="", xy=(center_freq_sub9_1, arr_height),
xytext=(center_freq_sub9_2, arr_height),
arrowprops=dict(arrowstyle="<->"))
ax.text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
va="top", fontsize=8)
############################################
############################################
# # Challenge 3:
# ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
# va="center", fontsize=8)
# # Add Peak width annotation
# height1 = 100
# xmin1 = center_freq_sub5_1 - peak_width_sub5_1
# xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# # Add Peak width annotation
# height1 = .029
# height2 = 0.009
# xmin1 = center_freq_sub9_1 - peak_width_sub9_1
# xmax1 = center_freq_sub9_1 + peak_width_sub9_1
# xmin2 = center_freq_sub9_2 - peak_width_sub9_2
# xmax2 = center_freq_sub9_2 + peak_width_sub9_2
# annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
# annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
############################################
ax.set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]", ylim=ylim_b)
plt.tight_layout()
plt.savefig(fig_path + "Fig8_Ch2.pdf", bbox_inches="tight")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(fig_width*4/5, 5*2/3), sharey="row")
ax.loglog(*PSD_plot_sub9, label="Sub 9 LFP")
# Fooof fit
ax.loglog(*fooof_plot_sub9,
label=rf"FOOOF $\beta=${exponent_sub9:.2f}")
# Straight fit
ax.loglog(*straight_plot_sub9,
label=rf"straight $\beta=${exponent_sub9_straight:.2f}")
# IRASA fit
ax.loglog(*irasa_plot_sub9,
label=rf"IRASA $\beta=${exponent_irasa_sub9:.2f}")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# Legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9, ncol=1,
borderaxespad=0, loc="upper right")
############################################
## Challenge 1:
# Add Plateau rectangle
ylim_b = (5e-3, 6)
xlim_b = ax.get_xlim()
noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
rec_xy = (noise_start, ylim_b[0])
rec_width = freq[-1] - noise_start
rec_height = np.diff(ylim_b)[0]
rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
alpha=.15, color="r")
ax.add_patch(plt.Rectangle(**rect_c))
# Add Plateau annotation
ax.hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
linewidth=1)
ax.annotate(s="Early\nPlateau\nonset",
xy=(noise_start, PSD_sub9[noise_start]),
xytext=(noise_start, PSD_sub9[noise_start]*20),
arrowprops=dict(arrowstyle="->", shrinkB=5),
color="k", fontsize=8,
ha="left",
verticalalignment="center")
############################################
############################################
# Challenge 2:
# Add indication of peak overlap as vertical arrow
overlap = 15
arr_height = 1
ax.annotate(s="", xy=(overlap, PSD_sub9[overlap]),
xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
arrowprops=dict(arrowstyle="<->"))
ax.annotate(s="", xy=(center_freq_sub9_1, arr_height),
xytext=(center_freq_sub9_2, arr_height),
arrowprops=dict(arrowstyle="<->"))
ax.text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
va="top", fontsize=8)
############################################
############################################
# Challenge 3:
ax.text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
va="center", fontsize=8)
# Add Peak width annotation
height1 = 100
xmin1 = center_freq_sub5_1 - peak_width_sub5_1
xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# Add Peak width annotation
height1 = .029
height2 = 0.009
xmin1 = center_freq_sub9_1 - peak_width_sub9_1
xmax1 = center_freq_sub9_1 + peak_width_sub9_1
xmin2 = center_freq_sub9_2 - peak_width_sub9_2
xmax2 = center_freq_sub9_2 + peak_width_sub9_2
annotate_range(ax, xmin1, xmax1, height1, annotate_pos=.93)
annotate_range(ax, xmin2, xmax2, height2, annotate_pos=.93)
############################################
ax.set(xlabel="Frequency [Hz]", ylabel=r"PSD [$\mu$$V^2/Hz$]", ylim=ylim_b)
plt.tight_layout()
plt.savefig(fig_path + "Fig8_Ch3.pdf", bbox_inches="tight")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(fig_width*4/5, 5*2/3), sharey="row")
# log
ax.loglog(*PSD_plot_sub5, label="Sub 5 MEG")
# Fooof fit
ax.loglog(*fooof_plot_sub5,
label=rf"FOOOF $\beta=${exponent_sub5:.2f}")
# Straight fit
ax.loglog(*straight_plot_sub5,
label=rf"straight $\beta=${exponent_sub5_straight:.2f}")
# IRASA fit
ax.loglog(*irasa_plot_sub5,
label=rf"IRASA $\beta=${exponent_irasa_sub5:.2f}")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# Legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=legend_fontsize, handlelength=1.9)
############################################
# ## Challenge 1:
# # Add Plateau rectangle
# ylim_b = (5e-3, 6)
# xlim_b = ax[1, 0].get_xlim()
# noise_start = detect_plateau_onset(freq, PSD_sub9, 50)
# rec_xy = (noise_start, ylim_b[0])
# rec_width = freq[-1] - noise_start
# rec_height = np.diff(ylim_b)[0]
# rect_c = dict(xy=rec_xy, width=rec_width, height=rec_height,
# alpha=.15, color="r")
# ax[1, 1].add_patch(plt.Rectangle(**rect_c))
# # Add Plateau annotation
# ax[1, 1].hlines(PSD_sub9[noise_start], noise_start, freq[-1], color="k",
# linewidth=1)
# ax[1, 1].annotate(s="Early\nPlateau\nonset",
# xy=(noise_start, PSD_sub9[noise_start]),
# xytext=(noise_start, PSD_sub9[noise_start]*20),
# arrowprops=dict(arrowstyle="->", shrinkB=5),
# color="k", fontsize=8,
# ha="left",
# verticalalignment="center")
############################################
############################################
# # Challenge 2:
# # Add indication of peak overlap as vertical arrow
# overlap = 15
# arr_height = 1
# ax[1, 1].annotate(s="", xy=(overlap, PSD_sub9[overlap]),
# xytext=(overlap, 10**ap_straight_fit_sub9[overlap]),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].annotate(s="", xy=(center_freq_sub9_1, arr_height),
# xytext=(center_freq_sub9_2, arr_height),
# arrowprops=dict(arrowstyle="<->"))
# ax[1, 1].text(s="Peak\nOverlap", x=overlap, y=arr_height*.9, ha="left",
# va="top", fontsize=8)
############################################
############################################
# # Challenge 3:
# ax[1, 1].text(s="Broad\nPeak\nWidths:", x=1, y=(height1+height2)/2, ha="left",
# va="center", fontsize=8)
# # Add Peak width annotation
# height1 = 100
# xmin1 = center_freq_sub5_1 - peak_width_sub5_1
# xmax1 = center_freq_sub5_1 + peak_width_sub5_1
# annotate_range(ax[0, 1], xmin1, xmax1, height1, annotate_pos="left")
# # Add Peak width annotation
# height1 = .029
# height2 = 0.009
# xmin1 = center_freq_sub9_1 - peak_width_sub9_1
# xmax1 = center_freq_sub9_1 + peak_width_sub9_1
# xmin2 = center_freq_sub9_2 - peak_width_sub9_2
# xmax2 = center_freq_sub9_2 + peak_width_sub9_2
# annotate_range(ax[1, 1], xmin1, xmax1, height1, annotate_pos=.93)
# annotate_range(ax[1, 1], xmin2, xmax2, height2, annotate_pos=.93)
############################################
ax.set(xlabel="Frequency [Hz]", ylabel="A.U. Voxel Data")
plt.tight_layout()
plt.savefig(fig_path + "Fig8_MEG.pdf", bbox_inches="tight")
plt.show()
```
| github_jupyter |
# Quantum Key Distribution
Quantum key distribution is the process of distributing cryptographic keys between parties using quantum methods. Due to the unique properties of quantum information compared to classical information, the security of a key can be guaranteed (any unwelcome measurement would change the state of the quantum information transmitted).
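Before looking at the simulation itself, the sifting logic at the heart of BB84 can be illustrated with a toy, purely classical model (an illustration only, unrelated to the SeQUeNCe API): Alice sends random bits in random bases, Bob measures in random bases, and only positions where the bases happen to match are kept. An eavesdropper who measures and resends in her own random basis disturbs roughly a quarter of the sifted bits, which is how the intrusion is detected.

```python
import random

def bb84_sift(n, eavesdrop=False, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Eve's measurement randomizes the bit if her basis is wrong,
            # and she resends the photon in her own basis
            bit = bit if e_basis == a_basis else rng.randint(0, 1)
            a_basis = e_basis
        # Bob reads the bit faithfully only if his basis matches the
        # basis of the photon he receives; otherwise the outcome is random
        bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))

    # Sifting: Alice and Bob publicly compare bases and keep matches
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    key_a = [alice_bits[i] for i in keep]
    key_b = [bob_bits[i] for i in keep]
    return key_a, key_b

# Without an eavesdropper the sifted keys agree exactly
ka, kb = bb84_sift(1000)
assert ka == kb

# With intercept-resend, errors appear in the sifted key
ka, kb = bb84_sift(1000, eavesdrop=True)
errors = sum(a != b for a, b in zip(ka, kb))
assert errors > 0
```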
In this notebook, we use SeQUeNCe to simulate quantum key distribution between two adjacent nodes. The first example demonstrates key distribution alone (using the BB84 protocol), while the second demonstrates additional error correction with the cascade protocol. The network topology, including hardware components, is shown below:
<img src="./notebook_images/QKD_topo.png" width="500"/>
## Example 1: Only BB84
### Import
We must first import the necessary tools from SeQUeNCe to run our simulations.
- `Timeline` is the main simulation tool, providing an interface for the discrete-event simulation kernel.
- `QKDNode` provides a ready-to-use quantum node for quantum key distribution, including necessary hardware and protocol implementations.
- `QuantumChannel` and `ClassicalChannel` are communication links between quantum nodes, providing models of optical fibers.
- The `pair_bb84_protocols` function explicitly pairs two node instances for key distribution, establishing one node as the sender "Alice" and the other as the receiver "Bob".
```
from ipywidgets import interact
from matplotlib import pyplot as plt
import time
from sequence.kernel.timeline import Timeline
from sequence.topology.node import QKDNode
from sequence.components.optical_channel import QuantumChannel, ClassicalChannel
from sequence.qkd.BB84 import pair_bb84_protocols
```
### Control and Collecting Metrics
Several elements of SeQUeNCe automatically collect simple metrics. This includes the BB84 protocol implementation, which collects key error rates, throughput, and latency. For more advanced metrics, custom code may need to be written. See the documentation for a list of metrics provided by default for each simulation tool.
Here, we create a `KeyManager` class to collect a custom metric (in this case, simply all of the generated keys and their generation times) and to provide an interface for the BB84 protocol. To achieve this, we use the `push` and `pop` functions provided by the protocol stack on QKD nodes. `push` sends information down the stack (from the key manager to BB84 in this example) while `pop` sends information upwards (from BB84 to the key manager). Different protocols may use these interfaces for different data, but only BB84 is shown in this example.
```
class KeyManager():
def __init__(self, timeline, keysize, num_keys):
self.timeline = timeline
self.lower_protocols = []
self.keysize = keysize
self.num_keys = num_keys
self.keys = []
self.times = []
def send_request(self):
for p in self.lower_protocols:
p.push(self.keysize, self.num_keys) # interface for BB84 to generate key
def pop(self, info): # interface for BB84 to return generated keys
self.keys.append(info)
self.times.append(self.timeline.now() * 1e-9)
```
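The push/pop pattern can also be seen in isolation with a stand-in lower protocol (`MiniBB84` below is a hypothetical stub for illustration, not part of SeQUeNCe):

```python
import random

class MiniBB84:
    """Stand-in lower protocol: 'generates' keys and pops them upward."""
    def __init__(self):
        self.upper_protocols = []

    def push(self, keysize, num_keys):
        # In SeQUeNCe, this is where real key generation would start;
        # here we simply hand back random integers of the requested size
        for _ in range(num_keys):
            key = random.getrandbits(keysize)
            for upper in self.upper_protocols:
                upper.pop(key)

class MiniKeyManager:
    def __init__(self, keysize, num_keys):
        self.lower_protocols = []
        self.keysize = keysize
        self.num_keys = num_keys
        self.keys = []

    def send_request(self):
        for p in self.lower_protocols:
            p.push(self.keysize, self.num_keys)

    def pop(self, key):
        self.keys.append(key)

# Wire the two layers together in both directions, as in the notebook
proto = MiniBB84()
km = MiniKeyManager(keysize=128, num_keys=25)
km.lower_protocols.append(proto)
proto.upper_protocols.append(km)

km.send_request()
print(len(km.keys))  # 25
```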
### Building the Simulation
We are now ready to build the simulation itself. This example follows the usual process to ensure that all tools function properly:
1. Create the timeline for the simulation
2. Create the simulated network topology (here this is done explicitly, but this may also be handled by functions of the `Topology` class under `sequence.topology.topology`)
3. Instantiate custom protocols and ensure all protocols are set up (paired) properly (if necessary)
4. Initialize and run the simulation
5. Collect and display the desired metrics
```
def test(sim_time, keysize):
    """
    sim_time: duration of simulation time (ms)
    keysize: size of generated secure key (bits)
    """
    # begin by defining the simulation timeline with the correct simulation time
    tl = Timeline(sim_time * 1e9)
    tl.seed(0)

    # Here, we create nodes for the network (QKD nodes for key distribution)
    # stack_size=1 indicates that only the BB84 protocol should be included
    n1 = QKDNode("n1", tl, stack_size=1)
    n2 = QKDNode("n2", tl, stack_size=1)
    pair_bb84_protocols(n1.protocol_stack[0], n2.protocol_stack[0])

    # connect the nodes and set parameters for the fibers
    # note that channels are one-way
    # construct a classical communication channel
    # (with arguments for the channel name, timeline, and length (in m))
    cc0 = ClassicalChannel("cc_n1_n2", tl, distance=1e3)
    cc1 = ClassicalChannel("cc_n2_n1", tl, distance=1e3)
    cc0.set_ends(n1, n2)
    cc1.set_ends(n2, n1)
    # construct a quantum communication channel
    # (with arguments for the channel name, timeline, attenuation (in dB/km), and distance (in m))
    qc0 = QuantumChannel("qc_n1_n2", tl, attenuation=1e-5, distance=1e3,
                         polarization_fidelity=0.97)
    qc1 = QuantumChannel("qc_n2_n1", tl, attenuation=1e-5, distance=1e3,
                         polarization_fidelity=0.97)
    qc0.set_ends(n1, n2)
    qc1.set_ends(n2, n1)

    # instantiate our written keysize protocol
    km1 = KeyManager(tl, keysize, 25)
    km1.lower_protocols.append(n1.protocol_stack[0])
    n1.protocol_stack[0].upper_protocols.append(km1)
    km2 = KeyManager(tl, keysize, 25)
    km2.lower_protocols.append(n2.protocol_stack[0])
    n2.protocol_stack[0].upper_protocols.append(km2)

    # start simulation and record timing
    tl.init()
    km1.send_request()
    tick = time.time()
    tl.run()
    print("execution time %.2f sec" % (time.time() - tick))

    # display our collected metrics
    plt.plot(km1.times, range(1, len(km1.keys) + 1), marker="o")
    plt.xlabel("Simulation time (ms)")
    plt.ylabel("Number of Completed Keys")
    plt.show()
    print("key error rates:")
    for i, e in enumerate(n1.protocol_stack[0].error_rates):
        print("\tkey {}:\t{}%".format(i + 1, e * 100))
```
### Running the Simulation
All that is left is to run the simulation with user input. (maximum execution time: ~5 sec)
Parameters:
- `sim_time`: duration of simulation time (ms)
- `keysize`: size of generated secure key (bits)
```
# Create and run the simulation
interactive_plot = interact(test, sim_time=(100, 1000, 100), keysize=[128, 256, 512])
interactive_plot
```
Due to the imperfect polarization fidelity specified for the optical fiber, we observe that most (if not all) of the completed keys have errors that render them unusable. For this reason, error correction protocols (such as cascade, which is included in SeQUeNCe and shown in the next example) must also be used.
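As a rough sanity check on this observation, we can estimate the expected error rate from the channel parameter. Under a simple model (an assumption for illustration, not part of the SeQUeNCe API) in which a fraction `1 - F` of photons arrive with scrambled polarization and a scrambled photon still yields the correct bit half the time, a polarization fidelity `F` implies a qubit error rate of about `(1 - F) / 2`:

```python
def expected_qber(polarization_fidelity: float) -> float:
    """Rough expected qubit error rate under a simple depolarizing model:
    a fraction (1 - F) of photons arrive scrambled, and half of those
    still happen to produce the correct bit."""
    return (1.0 - polarization_fidelity) / 2.0

# With the fidelity of 0.97 used above, we expect errors in roughly
# 1.5% of the raw key bits, consistent with keys being unusable without
# error correction.
print(expected_qber(0.97))
```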
## Example 2: Adding Cascade
This example is similar to the first, with slight alterations to allow for
- Instantiation of the cascade error correction protocol on the QKD nodes
- Differences in the cascade `push`/`pop` interface compared to BB84

The network topology remains unchanged.
```
from sequence.qkd.cascade import pair_cascade_protocols
class KeyManager():
    def __init__(self, timeline, keysize, num_keys):
        self.timeline = timeline
        self.lower_protocols = []
        self.keysize = keysize
        self.num_keys = num_keys
        self.keys = []
        self.times = []

    def send_request(self):
        for p in self.lower_protocols:
            p.push(self.keysize, self.num_keys)  # interface for cascade to generate keys

    def pop(self, key):  # interface for cascade to return generated keys
        self.keys.append(key)
        self.times.append(self.timeline.now() * 1e-9)

def test(sim_time, keysize):
    """
    sim_time: duration of simulation time (ms)
    keysize: size of generated secure key (bits)
    """
    # begin by defining the simulation timeline with the correct simulation time
    tl = Timeline(sim_time * 1e9)
    tl.seed(0)

    # Here, we create nodes for the network (QKD nodes for key distribution)
    n1 = QKDNode("n1", tl)
    n2 = QKDNode("n2", tl)
    pair_bb84_protocols(n1.protocol_stack[0], n2.protocol_stack[0])
    pair_cascade_protocols(n1.protocol_stack[1], n2.protocol_stack[1])

    # connect the nodes and set parameters for the fibers
    cc0 = ClassicalChannel("cc_n1_n2", tl, distance=1e3)
    cc1 = ClassicalChannel("cc_n2_n1", tl, distance=1e3)
    cc0.set_ends(n1, n2)
    cc1.set_ends(n2, n1)
    qc0 = QuantumChannel("qc_n1_n2", tl, attenuation=1e-5, distance=1e3,
                         polarization_fidelity=0.97)
    qc1 = QuantumChannel("qc_n2_n1", tl, attenuation=1e-5, distance=1e3,
                         polarization_fidelity=0.97)
    qc0.set_ends(n1, n2)
    qc1.set_ends(n2, n1)

    # instantiate our written keysize protocol
    km1 = KeyManager(tl, keysize, 10)
    km1.lower_protocols.append(n1.protocol_stack[1])
    n1.protocol_stack[1].upper_protocols.append(km1)
    km2 = KeyManager(tl, keysize, 10)
    km2.lower_protocols.append(n2.protocol_stack[1])
    n2.protocol_stack[1].upper_protocols.append(km2)

    # start simulation and record timing
    tl.init()
    km1.send_request()
    tick = time.time()
    tl.run()
    print("execution time %.2f sec" % (time.time() - tick))

    # display our collected metrics
    plt.plot(km1.times, range(1, len(km1.keys) + 1), marker="o")
    plt.xlabel("Simulation time (ms)")
    plt.ylabel("Number of Completed Keys")
    plt.show()

    # compare the keys on both nodes bit by bit to compute error rates
    error_rates = []
    for i, key in enumerate(km1.keys):
        counter = 0
        diff = key ^ km2.keys[i]
        for j in range(km1.keysize):
            counter += (diff >> j) & 1
        error_rates.append(counter / km1.keysize)  # fraction of differing bits
    print("key error rates:")
    for i, e in enumerate(error_rates):
        print("\tkey {}:\t{}%".format(i + 1, e * 100))
```
### Running the Simulation
We can now run the cascade simulation with user input. Note that the extra steps required by the cascade protocol may cause the simulation to run much longer than the example with only BB84.
Parameters:
- `sim_time`: duration of simulation time (ms)
- `keysize`: size of generated secure key (bits)
The maximum execution time (`sim_time=1000`, `keysize=512`) is around 60 seconds.
```
# Create and run the simulation
interactive_plot = interact(test, sim_time=(100, 1000, 100), keysize=[128, 256, 512])
interactive_plot
```
### Results
The implementation of the cascade protocol found within SeQUeNCe relies on the creation of a large sequence of bits, from which excerpts are used to create individual keys. Due to this behavior, keys are generated in large numbers in regularly spaced "batches". Also note that after applying error correction, the error rates for all keys drop to 0%.
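The bit-level comparison used in the cascade example can be factored into a small standalone helper. With keys represented as integers, XOR marks the positions where the two keys disagree, and a bit count divided by the key length gives the error fraction:

```python
def key_error_rate(key_a: int, key_b: int, keysize: int) -> float:
    # XOR sets a 1 at every bit position where the two keys disagree
    diff = key_a ^ key_b
    # count the set bits and normalize by the key length
    return bin(diff).count("1") / keysize

print(key_error_rate(0b10101010, 0b10101010, 8))  # identical keys: 0.0
print(key_error_rate(0b10101010, 0b10101011, 8))  # one differing bit: 0.125
```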
When you run a notebook cell that imports from the whendo packages, you need to set the NOTEBOOK FILE ROOT setting to `${workspaceFolder}` so that imports from the top-level packages work. With the property's default value of `${fileDirName}`, imports from `dispatcher` etc. will fail, since the notebook is located one directory below the top level.
```
from datetime import time, datetime, timedelta
from pydantic import BaseModel
import requests
import json
from itertools import product
from whendo.core.server import Server
from whendo.core.scheduler import Immediately
import whendo.core.schedulers.timed_scheduler as sched_x
import whendo.core.actions.file_action as file_x
import whendo.core.actions.list_action as list_x
import whendo.core.actions.sys_action as sys_x
import whendo.core.actions.dispatch_action as disp_x
from whendo.core.programs.simple_program import PBEProgram
from whendo.core.util import PP, TimeUnit, Dirs, DateTime, Now, DateTime2, Rez
from whendo.core.action import RezDict, Action, ActionRez
import whendo.core.resolver as resolver
def get(server: Server, path: str):
    response = requests.get(cmd(server, path))
    return handle_response(response)

def put(server: Server, path: str, data: BaseModel):
    response = requests.put(cmd(server, path), data.json())
    return handle_response(response)

def post(server: Server, path: str, data: BaseModel):
    response = requests.post(cmd(server, path), data.json())
    return handle_response(response)

def post_dict(server: Server, path: str, data: dict):
    response = requests.post(cmd(server, path), json.dumps(data))
    return handle_response(response)

def delete(server: Server, path: str):
    response = requests.delete(cmd(server, path))
    return handle_response(response)

def cmd(server: Server, path: str):
    return f"http://{server.host}:{server.port}{path}"

def handle_response(response):
    if response.ok:
        PP.pprint(response.json())
        return response.json()
    else:
        raise Exception(response.json())
home0 = Server(host='127.0.0.1', port = 8000, tags = {"server_name":["home0"], 'role': ['usual', 'sweet']})
home1 = Server(host='192.168.0.26', port = 8001, tags = {"server_name":["home1"], 'role': ['usual', 'sour']})
servers = [home0, home1]
def inventory(server: Server):
    heart_1 = file_x.FileAppend(file="heartbeat1.txt", payload={'words': 'heartbreak hotel'})
    post(server, '/actions/heartbeat1', heart_1)
    heart_2 = file_x.FileAppend(file="heartbeat2.txt", payload={'words': 'nothing but heartaches'})
    post(server, '/actions/heartbeat2', heart_2)
    heart_3 = file_x.FileAppend(file="heartbeat3.txt", payload={'words': 'heart of glass'})
    post(server, '/actions/heartbeat3', heart_3)
    append_1 = file_x.FileAppend(file="append_1.txt")
    post(server, '/actions/append_1', append_1)
    append_2 = file_x.FileAppend(file="append_2.txt")
    post(server, '/actions/append_2', append_2)
    append_3 = file_x.FileAppend(file="append_3.txt")
    post(server, '/actions/append_3', append_3)
    sys_info = sys_x.SysInfo()
    post(server, '/actions/sys_info', sys_info)
    pause = sys_x.Pause()
    post(server, '/actions/pause', pause)
    logic_1 = list_x.All(actions=[heart_1, heart_2])
    post(server, '/actions/logic1', logic_1)
    success = list_x.Success()
    post(server, '/actions/success', success)
    file_append = file_x.FileAppend(file="boomerang.txt")
    post(server, '/actions/file_append', file_append)
    mini_info = sys_x.MiniInfo()
    post(server, '/actions/mini_info', mini_info)
    scheduling_info = disp_x.SchedulingInfo()
    post(server, '/actions/scheduling_info', scheduling_info)
    dispatcher_dump = disp_x.DispatcherDump()
    post(server, '/actions/dispatcher_dump', dispatcher_dump)
    terminate = list_x.Terminate()
    post(server, '/actions/terminate', terminate)
    raise_if_0 = list_x.RaiseCmp(cmp=0, value=0)
    post(server, '/actions/raise_if_0', raise_if_0)
    integer = list_x.Result(value=1)
    info_append_1 = list_x.All(actions=[sys_info, mini_info, list_x.RezFmt(), append_1])
    info_append_2 = list_x.All(actions=[mini_info, list_x.RezFmt(), append_2])
    info_append_3 = list_x.All(actions=[dispatcher_dump, sys_info, list_x.RezFmt(), append_3])
    post(server, '/actions/info_append_1', info_append_1)
    post(server, '/actions/info_append_2', info_append_2)
    post(server, '/actions/info_append_3', info_append_3)
    raise_all_1 = list_x.All(actions=[list_x.Result(value=0), raise_if_0])
    raise_all_2 = list_x.All(actions=[list_x.Result(value=1), raise_if_0])
    post(server, '/actions/raise_all_1', raise_all_1)
    post(server, '/actions/raise_all_2', raise_all_2)
    raise_uf_1 = list_x.UntilFailure(actions=[list_x.Result(value=0), raise_if_0])
    raise_uf_2 = list_x.UntilFailure(actions=[list_x.Result(value=1), raise_if_0])
    post(server, '/actions/raise_uf_1', raise_uf_1)
    post(server, '/actions/raise_uf_2', raise_uf_2)
    raise_us_1 = list_x.UntilSuccess(actions=[list_x.Result(value=0), raise_if_0])
    raise_us_2 = list_x.UntilSuccess(actions=[list_x.Result(value=1), raise_if_0])
    post(server, '/actions/raise_us_1', raise_us_1)
    post(server, '/actions/raise_us_2', raise_us_2)
    format_1 = list_x.All(actions=[mini_info, sys_info, list_x.RezFmt()])
    post(server, '/actions/format_1', format_1)
    execute_action = disp_x.Exec(server_name="home", action_name="file_append")
    post(server, '/actions/execute_action', execute_action)
    execute_action_key_tags = disp_x.ExecKeyTags(key_tags={"role": ["sour"]}, action_name="file_append")
    # execute_action_key_tags = disp_x.ExecKeyTags()
    post(server, '/actions/execute_action_key_tags', execute_action_key_tags)
    sys_info_key_tags = list_x.All(actions=[sys_info, execute_action_key_tags])
    post(server, '/actions/sys_info_key_tags', sys_info_key_tags)
    scheduler = sched_x.Randomly(time_unit=TimeUnit.second, low=5, high=10)
    post(server, '/schedulers/randomly_soon', scheduler)
    scheduler = sched_x.Timely(interval=1)
    post(server, '/schedulers/often', scheduler)
    morning, evening = time(6, 0, 0), time(18, 0, 0)
    scheduler = sched_x.Timely(interval=1, start=morning, stop=evening)
    post(server, '/schedulers/timely_day', scheduler)
    scheduler = sched_x.Timely(interval=1, start=evening, stop=morning)
    post(server, '/schedulers/timely_night', scheduler)
    scheduler = Immediately()
    post(server, '/schedulers/immediately', scheduler)
    program = PBEProgram().prologue("heartbeat1").epilogue("heartbeat3").body_element("often", "heartbeat2")
    post(server, '/programs/program1', program)
    info_append = PBEProgram().prologue("info_append_1").epilogue("info_append_3").body_element("often", "info_append_2")
    post(server, '/programs/info_append', info_append)
def schedule(server: Server):
    if True:
        start = Now.dt() + timedelta(seconds=2)
        stop = start + timedelta(seconds=20)
        datetime2 = DateTime2(dt1=start, dt2=stop)
        post(server, "/programs/info_append/schedule", datetime2)
    elif True:
        start = Now.dt()
        stop = start + timedelta(seconds=20)
        datetime2 = DateTime2(dt1=start, dt2=stop)
        post(server, "/programs/program1/schedule", datetime2)
    elif True:
        get(server, '/schedulers/often/actions/logic1')
        dt = DateTime(date_time=Now.dt() + timedelta(seconds=10))
        post(server, '/schedulers/often/actions/logic1/expire', dt)
        post(server, '/schedulers/often/actions/heartbeat3/defer', dt)
    elif True:  # write once to heartbeat1 & heartbeat2
        dt = DateTime(date_time=Now.dt() + timedelta(seconds=10))
        post(server, '/schedulers/immediately/actions/logic1/defer', dt)
[get(server, '/dispatcher/clear') for server in servers]
for (serverA, serverB) in list(product(servers, servers)):
    post(serverA, f"/servers/{serverB.tags['server_name'][0]}", serverB)
[inventory(server) for server in servers]
[schedule(server) for server in [home0, home1]]
[get(server, '/dispatcher/load') for server in [home0, home1]]
[get(server, '/actions/sys_info/execute') for server in [home0, home1]]
[get(server, '/dispatcher/describe_all') for server in [home0, home1]]
[get(server, '/actions/sys_info/execute') for server in [home0, home1]]
[get(server, '/servers') for server in [home0, home1]]
[get(server, '/jobs/run') for server in [home0, home1]]
[get(server, '/jobs/stop') for server in [home0, home1]]
[get(server, '/jobs/count') for server in [home0, home1]]
get(home1, '/actions/sys_info_key_tags/execute')
get(home1, '/actions/execute_action_key_tags/execute')
execute_action = disp_x.Exec(server_name="home1", action_name="file_append")
execute_action_key_tags = disp_x.ExecKeyTags(key_tags={"role":["sour"]}, action_name="file_append") if False else disp_x.ExecKeyTags()
for svr in [home0, home1]:
    try:
        put(svr, '/actions/execute_action', execute_action)
    except:
        post(svr, '/actions/execute_action', execute_action)
    try:
        put(svr, '/actions/execute_action_key_tags', execute_action_key_tags)
    except:
        post(svr, '/actions/execute_action_key_tags', execute_action_key_tags)
execute_action = disp_x.Exec(server_name="home0", action_name="file_append")
for svr in servers:
    try:
        put(svr, '/actions/execute_action', execute_action)
    except:
        post(svr, '/actions/execute_action', execute_action)
rez = Rez(result="Eureka!")
key_tags = {"role": ["sour"]}
rez_dict = RezDict(rez=rez, dictionary=key_tags)
post(home1, '/servers/by_tags/any/actions/execute_action/execute_with_rez',rez_dict)
############## BWEE
execute_action_key_tags = disp_x.ExecKeyTags(key_tags={"role":["sweet"]}, action_name="file_append")
for svr in servers:
    try:
        put(svr, '/actions/execute_action_key_tags', execute_action_key_tags)
    except:
        post(svr, '/actions/execute_action_key_tags', execute_action_key_tags)
rez = Rez(result="Eureka!")
key_tags = {"role": ["sweet"]} # execution location
rez_dict = RezDict(rez=rez, dictionary=key_tags)
post(home0, '/servers/by_tags/any/actions/execute_action_key_tags/execute_with_rez',rez_dict)
test_action1 = disp_x.ExecKeyTags()
test_action2 = list_x.Vals(vals={"payload":"Guerneville!!!", "action_name":"file_append", "key_tags":{"role":["sweet"]}})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action2,test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
key_tags = {"role": ["sour"]} # top-level execution
post_dict(home0, '/servers/by_tags/any/actions/test_action3/execute', key_tags)
test_action1 = disp_x.ExecKeyTags()
test_action_a = list_x.Result(value="Arcata!")
test_action_b = list_x.Vals(vals={"payload":{"x":"Eureka!!!"}})
test_action2 = list_x.Vals(vals={"action_name":"file_append", "key_tags":{"role":["sweet"]}})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action_b, test_action2, test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
key_tags = {"role": ["sour"]} # top-level execution
post_dict(home0, '/servers/by_tags/any/actions/test_action3/execute', key_tags)
test_action1 = disp_x.ExecKeyTags()
test_action_a = list_x.Result(value="Arcata!")
test_action_b = list_x.Vals(vals={"payload":{"x":"Eureka!!!"}})
test_action2 = list_x.Vals(vals={"action_name":"file_append", "key_tags":{"role":["sweet"]}})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action_b, test_action2, test_action_a, test_action1])
test_action3.json()
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
key_tags = {"role": ["sour"]} # top-level execution
post_dict(home0, '/servers/by_tags/any/actions/test_action3/execute', key_tags)
"XXXXXXXXXXXXXXXX !!!!!!!!!"
file_append = resolver.resolve_action(get(home0, '/actions/file_append'))
test_action1 = disp_x.ExecSuppliedKeyTags(key_tags={"role":["sour"]})
test_action2 = list_x.Vals(vals={"action":file_append, "payload":{"x":"Petrolia!!!"}})
# test_action2 = list_x.Vals(vals={"action":file_append, "key_tags":{"role":["sour"]}, "payload":{"x":"Petrolia!!!"}})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action2, test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
key_tags = {"role": ["sweet"]} # top-level execution
post_dict(home1, '/servers/by_tags/any/actions/test_action3/execute', key_tags)
"YYYYYYYYYYYYYYYYY !!!!!!!!!"
file_append = resolver.resolve_action(get(home0, '/actions/file_append'))
file_append.payload = {"x":"Petrolia!!!"}
test_action1 = disp_x.ExecSupplied() #KeyTags(key_tags={"role":["sour"]})
test_action2 = list_x.Vals(vals={"action":file_append})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action2, test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
get(home0, '/actions/test_action3/execute')
"ZZZZZZZZZZZZ !!!!!!!!!"
file_append = resolver.resolve_action(get(home0, '/actions/file_append'))
file_append.payload = {"x":"Petrolia!!!"}
test_action1 = disp_x.ExecSupplied(action=file_append) #KeyTags(key_tags={"role":["sour"]})
# test_action2 = list_x.Vals(vals={"action":file_append})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
get(home0, '/actions/test_action3/execute')
file_append = resolver.resolve_action(get(home0, '/actions/file_append'))
test_action1 = disp_x.ExecSuppliedKeyTags(action=file_append)
test_action_b = list_x.Vals(vals={"payload":{"x":"Petrolia!!!"}})
test_action2 = list_x.Vals(vals={"key_tags":{"role":["sour"]}})
test_action3 = list_x.All(include_processing_info=True, actions=[test_action_b, test_action2, test_action1])
for svr in servers:
    try:
        put(svr, '/actions/test_action3', test_action3)
    except:
        post(svr, '/actions/test_action3', test_action3)
key_tags = {"role": ["sweet"]} # top-level execution
post_dict(home1, '/servers/by_tags/any/actions/test_action3/execute', key_tags)
from whendo.sdk.client import Client
cl = Client(host=home0.host, port=home0.port)
schedule_key_tags2 = disp_x.ExecSuppliedKeyTags(key_tags={"role":["sour"]}, action = disp_x.ScheduleProgram(program_name="info_append"))
cl.set_action(action_name="schedule_key_tags2", action=schedule_key_tags2)
rez=Rez(flds={"start_stop": DateTime2(dt1=Now.dt() + timedelta(seconds=3), dt2= Now.dt() + timedelta(seconds=45))})
cl.execute_action_with_rez(action_name="schedule_key_tags2", rez=rez)
from whendo.core.resolver import resolve_action_rez
dd = {'dt1': datetime(2021, 5, 1, 2, 6, 40, 932219), 'dt2': datetime(2021, 5, 1, 2, 7, 22, 932301)}
resolve_action_rez(dictionary=dd)
import whendo.core.util as util
file = "/Users/electronhead/.whendo/whendo/output/foo.txt"
with open(file, "a") as outfile:
    util.PP.pprint(False, stream=outfile)
    outfile.write("\n")
from whendo.sdk.client import Client
action = file_x.FileAppend(file="test.txt")
action2 = list_x.All(actions=[list_x.Result(value=False), action])
Client(host=home0.host, port=home0.port).execute_supplied_action(supplied_action=action2)
rez = resolver.resolve_rez(get(home0, '/actions/sys_info/execute'))
print (type(rez))
from whendo.sdk.client import Client
Client(host=home0.host, port=home0.port).get_servers_by_tags(key_tags={"server_name":["home0"]}, mode="any")
# key_tags={"server_name":["home0"]}
# post_dict(home1, "/servers/by_tags/any", key_tags)
from whendo.sdk.client import Client
PP.pprint(Client(host=home0.host, port=home0.port).load_dispatcher().dict())
get(home1, '/actions/scheduling_info/execute')
import logging
logger = logging.getLogger(__name__)
type(logger)
```
```
import igraph as ig
import numpy as np
from sklearn.metrics import adjusted_rand_score as ARI
from sklearn.metrics import adjusted_mutual_info_score as AMI
from sklearn.metrics import normalized_mutual_info_score as NMI
import scipy.stats as ss
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
def community_ecg(self, weights=None, ens_size=16, min_weight=0.05):
    W = [0] * self.ecount()
    ## Ensemble of level-1 Louvain
    for i in range(ens_size):
        p = np.random.permutation(self.vcount()).tolist()
        g = self.permute_vertices(p)
        l = g.community_multilevel(weights=weights, return_levels=True)[0].membership
        b = [l[p[x.tuple[0]]] == l[p[x.tuple[1]]] for x in self.es]
        W = [W[i] + b[i] for i in range(len(W))]
    W = [min_weight + (1 - min_weight) * W[i] / ens_size for i in range(len(W))]
    part = self.community_multilevel(weights=W)
    ## Force min_weight outside 2-core
    core = self.shell_index()
    ecore = [min(core[x.tuple[0]], core[x.tuple[1]]) for x in self.es]
    part.W = [W[i] if ecore[i] > 1 else min_weight for i in range(len(ecore))]
    part.CSI = 1 - 2 * np.sum([min(1 - i, i) for i in part.W]) / len(part.W)
    return part
ig.Graph.community_ecg = community_ecg
def readGraph(fn, directed=False):
    g = ig.Graph.Read_Ncol(fn + '.edgelist', directed=directed)
    c = np.loadtxt(fn + '.community', dtype='uint8')
    node_base = min([int(x['name']) for x in g.vs])  ## graphs have 1-based or 0-based nodes
    comm_base = min(c)  ## same for communities
    comm = [c[int(x['name']) - node_base] - comm_base for x in g.vs]
    g.vs['community'] = comm
    g.vs['shape'] = 'circle'
    pal = ig.RainbowPalette(n=max(comm) + 1)
    g.vs['color'] = [pal.get(int(i)) for i in comm]
    g.vs['size'] = 10
    g.es['width'] = 1
    return g
def edgeLabels(g, gcomm):
    x = [(gcomm[x.tuple[0]] == gcomm[x.tuple[1]]) for x in g.es]
    return x

def AGRI(g, u, v):
    bu = edgeLabels(g, u)
    bv = edgeLabels(g, v)
    su = np.sum(bu)
    sv = np.sum(bv)
    suv = np.sum(np.array(bu) * np.array(bv))
    m = len(bu)
    return (suv - su * sv / m) / (0.5 * (su + sv) - su * sv / m)
    # unadjusted variant: suv / (0.5 * (su + sv))
```
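The `AGRI` function above is the adjusted Rand index computed on binary edge labels. Writing $b_u, b_v$ for the indicator vectors returned by `edgeLabels` (whether an edge is internal to a community in each partition), $s_u = \sum b_u$, $s_v = \sum b_v$, $s_{uv} = \sum b_u b_v$, and $m$ for the number of edges, the returned value is:

```latex
\mathrm{AGRI}(u, v) = \frac{s_{uv} - s_u s_v / m}{\tfrac{1}{2}(s_u + s_v) - s_u s_v / m}
```

The commented-out alternative in the code is the unadjusted ratio $s_{uv} / \tfrac{1}{2}(s_u + s_v)$, without the chance-agreement correction term $s_u s_v / m$.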
## Compare ECG, ML, IM over large LFR Graph
```
## read large noisy LFR graph (with mu=.48, n=8916)
g = readGraph('Data/LFR8916/lfr8916')
g = g.simplify()
ec = g.community_ecg()
im = g.community_infomap()
ml = g.community_multilevel()
print('Adjusted RAND Index')
print('ML:',ARI(g.vs['community'],ml.membership))
print('ECG:',ARI(g.vs['community'],ec.membership))
print('IM:',ARI(g.vs['community'],im.membership))
print('Adjusted Graph-aware RAND Index')
print('ML:',AGRI(g,g.vs['community'],ml.membership))
print('ECG:',AGRI(g,g.vs['community'],ec.membership))
print('IM:',AGRI(g,g.vs['community'],im.membership))
## Number of clusters found
print('number of communities:',max(g.vs['community'])+1)
print('with ML:',max(ml.membership)+1)
print('with ECG:',max(ec.membership)+1)
print('with IM:',max(im.membership)+1)
## Real vs empirical 'noise' levels
m = g.vs['community']
m = ml.membership
#m = ec.membership
#m = im.membership
## compute 'mu' (prop. of external edges)
cnt = 0
for e in g.es:
    if m[e.tuple[0]] != m[e.tuple[1]]:
        cnt += 1
print('noise (mu):',cnt/g.ecount())
```
## Re-visit the ring of clique problem
```
import itertools
## ring of cliques igraph with n cliques of size m with e edges between contiguous cliques
def ringOfCliques(n=24, m=5, e=1):
    size = n * m
    g = ig.Graph()
    for i in range(size):
        g.add_vertex(str(i))
    ## ring of cliques
    for i in range(0, size, m):
        ## cliques
        for j in range(i, i + m - 1, 1):
            for k in range(j + 1, i + m, 1):
                g.add_edge(str(j), str(k), type='intra')
        ## ring
        if i > 0:
            ## all pairs (i,i+1..i+m-1) and (i-m,i-m+1..i-m+m-1)
            a = np.arange(i, i + m, 1)
            b = np.arange(i - m, i, 1)
        else:
            a = np.arange(0, m, 1)
            b = np.arange(size - m, size, 1)
        ## all 2-ples: pick e
        l = list(itertools.product(a, b))
        arr = np.empty(len(l), dtype='O')
        arr[:] = l
        x = np.random.choice(arr, size=e, replace=False)
        for j in x:
            g.add_edge(str(j[0]), str(j[1]), type='extra')
    return g
## number of communities: ML vs ECG vs IM
## n 5-cliques for 4 <= n <= 48
## number of linking edges from 1 to 5
N = np.arange(4,49,4) ## number of cliques
ML=[]
IM=[]
EC=[]
REP=10 ## take average over several repeats
for e in range(5):  ## number of linking edges
    ML.append([])
    IM.append([])
    EC.append([])
    for n in N:
        ml = 0
        im = 0
        ec = 0
        for ctr in range(REP):
            g = ringOfCliques(n=n, m=5, e=e+1)
            ml = ml + max(g.community_multilevel().membership) + 1
            im = im + max(g.community_infomap().membership) + 1
            ecg = g.community_ecg(ens_size=32)
            ec = ec + max(ecg.membership) + 1
        ML[e].append(ml/REP)
        EC[e].append(ec/REP)
        IM[e].append(im/REP)
## Plot the results
fig = plt.figure(1, figsize=(16,4))
with sns.axes_style('whitegrid'):
    for e in range(5):
        plt.subplot(1, 5, e+1)
        plt.plot(N, EC[e], '-', c='b', label='ECG')
        plt.plot(N, ML[e], '--', c='g', label='ML')
        plt.plot(N, IM[e], ':', c='r', label='IM')
        plt.xlabel('Number of 5-cliques (n)', fontsize=12)
        if e == 0:
            plt.ylabel('Number of communities found', fontsize=12)
            plt.legend(fontsize=12)
        plt.title(str(e+1) + ' linking edge(s)', fontsize=14)
## fix n=4 cliques;
## look at ECG weights for edges internal/external to the cliques
## consider 1 to 15 edges between the cliques
## recall: 5-clique has 10 internal edges
n = 4
EXT = [] ## mean
INT = []
sEXT = [] ## stdv
sINT = []
REP=30
for xe in range(15):
    intern = []
    extern = []
    for ctr in range(REP):
        g = ringOfCliques(n=n, m=5, e=xe+1)
        ecg = g.community_ecg(ens_size=32)
        g.es['weight'] = ecg.W
        for e in g.es:
            if e['type'] == 'intra':
                intern.append(e['weight'])
            else:
                extern.append(e['weight'])
    INT.append(np.mean(intern))
    EXT.append(np.mean(extern))
    sINT.append(np.std(intern))
    sEXT.append(np.std(extern))
## Plot the results
xe = np.arange(1,16,1)
fig = plt.figure(1, figsize=(7,5))
with sns.axes_style('whitegrid'):
    plt.plot(xe, INT, '-', c='b', label='Internal Edges')
    plt.fill_between(xe, [INT[i] - sINT[i] for i in range(len(INT))],
                     [INT[i] + sINT[i] for i in range(len(INT))],
                     alpha=.1, facecolor='black')
    plt.plot(xe, EXT, '--', c='g', label='Linking Edges')
    plt.fill_between(xe, [EXT[i] - sEXT[i] for i in range(len(EXT))],
                     [EXT[i] + sEXT[i] for i in range(len(EXT))],
                     alpha=.1, facecolor='black')
    plt.xlabel('Number of linking edges between cliques', fontsize=12)
    plt.ylabel('ECG Weight', fontsize=12)
    plt.legend(fontsize=12)
    plt.title('Ring of 4 cliques of size 5', fontsize=14)
## with 'xe' edges between cliques, plot thick edges when ECG weight > Thresh
xe = 15
Thresh = .8
##
size = 5
g = ringOfCliques(n=4, m=size, e=xe)
ecg = g.community_ecg(ens_size=32)
g.es['weight'] = ecg.W
for e in g.es:
    if e['type'] == 'intra':
        intern.append(e['weight'])
    else:
        extern.append(e['weight'])
cl = np.repeat('blue',size).tolist()+np.repeat('red',size).tolist()+np.repeat('green',size).tolist()+np.repeat('cyan',size).tolist()
g.vs['color'] = cl
g.es['color'] = 'grey'
g.es['width'] = 1
for e in g.es:
    if e['weight'] >= Thresh:
        e['color'] = 'black'
        e['width'] = 2
ly = g.layout_kamada_kawai()
ig.plot(g, target='roc3.pdf', layout=ly, vertex_size=12, bbox=(0,0,400,400))
```
## Looking at the ECG weights
```
## Try various graphs:
g = readGraph('Data/LFR15/lfr15')
#g = readGraph('Data/LFR35/lfr35')
#g = readGraph('Data/LFR55/lfr55')
#g = ig.Graph.Erdos_Renyi(n=100, m=500)
## Print CSI and plot weight distribution
ec = g.community_ecg()
print('CSI:',ec.CSI)
sns.violinplot(ec.W, inner=None)
plt.xlabel('ECG weight');
```
## Seed expansion with ECG
```
g = readGraph('Data/LFR35/lfr35')
g.vs['size'] = 10
v = 3
Vertex = g.vs[v]['name']
g.vs[v]['size'] = 20
## ego-net
sg = g.induced_subgraph(g.neighborhood(v, order=1))
ig.plot(sg, bbox=(0,0,400,300))
## 2-hops
sg = g.induced_subgraph(g.neighborhood(v, order=2))
ig.plot(sg, bbox=(0,0,400,300))
## Now run ECG and record edge weights
ec = g.community_ecg()
g.es['w'] = ec.W
## only show edges with high weight
Thresh = .7
g.es['width'] = 0
for e in g.es:
    if e['w'] > Thresh:
        e['width'] = 1
sg = g.induced_subgraph(g.neighborhood(v, order=2))
ig.plot(sg, bbox=(0,0,400,300))
## keep only edges above threshold and connected component with seed
ed = [e for e in sg.es if e['w'] < Thresh]
sg.delete_edges(ed)
## keep connected component with vertex v
sg.vs['cc'] = sg.clusters().membership
v = sg.vs.find(name = Vertex)
cc = v['cc']
vd = [x for x in sg.vs if x['cc'] != cc]
sg.delete_vertices(vd)
## Plot
ig.plot(sg, bbox=(0,0,300,200))
```
# ET3107 Advanced Programming Midterm Exam (UTS)
# Rama Rahardi (18117026/github: ramhdi)
# Fikri Firmansyah Akbar (18115011/github: fikfak)
## Data taken from github.com/eueung/pilrek (updated 08/10/2019)
Update
(as reported by https://bandung.kompas.com/read/2019/10/11/09041361/10-bakal-calon-rektor-itb-ditetapkan-ini-nama-namanya?page=all)
> The ITB Board of Trustees (Majelis Wali Amanat, MWA) has selected the 10 prospective rector candidates who advance to the next stage, namely:
1. Benyamin Sapiie
2. Bramantyo Djohanputro
3. Dwi Larso
4. Edy Tri Baskoro
5. Gusti Ayu Putri Saptawati
6. Jaka Sembiring
7. Kadarsyah Suryadi
8. Reini D Wirahadikusumah
9. Togar Mangihut Simatupang
10. Widjaja Martokusumo
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# download the latest data from Eueung's GitHub
df1 = pd.read_csv('https://github.com/eueung/pilrek/raw/master/pilrek.csv')
df2 = pd.read_csv('https://github.com/eueung/pilrek/raw/master/pilrek-anon.csv')
df = pd.concat([df1, df2])
df.tail()
```
### Filtering the data so that only the candidates who advanced to the next stage are used in the analysis
```
df_lolos = df[df['CaRek Pilihan'].str.contains('Benyamin|'
'Bramantyo|'
'Dwi|'
'Edy|'
'Gusti Ayu|'
'Jaka|'
'Kadarsah|'
'Reini|'
'Simatupang|'
'Martokusumo')]
df_lolos
```
### Displaying the number of votes per candidate
```
df_lolos['CaRek Pilihan'].value_counts().plot.bar()
plt.title("Figure 1: Number of votes per candidate")
```
## 1. Classifying the data by the candidates' home institution
Based on Figure 1, nine candidates come from the ITB academic community:
1. Edy Tri Baskoro (FMIPA-MA)
2. Kadarsah Suryadi (FTI-TI)
3. Benyamin Sapiie (FITB)
4. Dwi Larso (SBM/President Univ)
5. Jaka Sembiring (STEI)
6. Togar M Simatupang (SBM)
7. Gusti Ayu Putri Saptawati S (STEI)
8. Widjaja Martokusumo (SAPPK)
9. Reini D Wirahadikusumah (FTSL)
and one candidate, Bramantyo (Director of PPM Manajemen), comes from outside the ITB academic community.
The candidates from within ITB come from the faculties/schools FMIPA, FTI, FITB, SBM, STEI, SAPPK, and FTSL.
Next, using the available data, we examine the backgrounds of the voters who chose each of the categories above. There are two voter categories: the ITB academic community (students, lecturers, and administrative staff) and outsiders (alumni and the general public).
### Voter background percentages for candidates from within ITB
```
df_lolos_itb = df_lolos[df_lolos['CaRek Pilihan'].str.contains('FMIPA|'
'FTI|'
'FITB|'
'SBM|'
'STEI|'
'SAPPK|'
'FTSL')]
#df_lolos_itb
ax2 = df_lolos_itb['Kategori Anda'].value_counts().plot(kind='pie',autopct='%1.1f%%')
ax2.set_ylabel('')
plt.title("Figure 2: Occupational backgrounds of voters for candidates from ITB")
```
### Percentage breakdown of the occupational backgrounds of voters for candidates from outside ITB
```
df_lolos_nonitb = df_lolos[df_lolos['CaRek Pilihan'].str.contains('PPM')]
#df_lolos_nonitb
ax3 = df_lolos_nonitb['Kategori Anda'].value_counts().plot(kind='pie',autopct='%1.1f%%')
ax3.set_ylabel('')
plt.title("Figure 3: Occupational backgrounds of voters for candidates from outside ITB")
```
### Analysis
Figure 2 shows that **candidates from ITB** receive considerable support from the **ITB academic community** (lecturers, students, and education support staff), with a combined 48.1% of their voters. Support from outside ITB (alumni and the general public) is slightly larger, at a combined 51.8%, but does not dominate.
Figure 3 shows that support for **candidates from outside ITB** is **dominated by voters from outside ITB**, at 85.7%. Only a small share of the ITB academic community, a combined 14.4%, voted for candidates from outside ITB.
## 2. Classifying the data by the candidates' academic background
Based on Figure 1, the candidates' academic backgrounds can be grouped as follows.
1. Science and technology
- Edy Tri Baskoro (FMIPA-MA)
- Kadarsah Suryadi (FTI-TI)
- Benyamin Sapiie (FITB)
- Jaka Sembiring (STEI)
- Gusti Ayu Putri Saptawati S (STEI)
- Widjaja Martokusumo (SAPPK)
- Reini D Wirahadikusumah (FTSL)
2. Business and management
- Dwi Larso (SBM/President Univ)
- Togar M Simatupang (SBM)
Next, we examine the reasons voters gave for choosing candidates in each of the categories above.
### Percentage breakdown of reasons for choosing a candidate from the science and technology group
```
df_saintek = df_lolos[df_lolos['CaRek Pilihan'].str.contains('FMIPA|'
'FTI|'
'FITB|'
'STEI|'
'SAPPK|'
'FTSL')]
#df_saintek
ax4 = df_saintek['Alasan Memilih CaRek'].value_counts().plot(kind='pie',autopct='%1.1f%%')
ax4.set_ylabel('')
plt.title("Figure 4: Reasons for choosing a candidate from the science and technology group")
```
### Percentage breakdown of reasons for choosing a candidate from the business and management group
```
df_bisman = df_lolos[df_lolos['CaRek Pilihan'].str.contains('SBM')]
#df_bisman
ax5 = df_bisman['Alasan Memilih CaRek'].value_counts().plot(kind='pie',autopct='%1.1f%%')
ax5.set_ylabel('')
plt.title("Figure 5: Reasons for choosing a candidate from the business and management group")
```
### Analysis
Based on Figure 4, the five most common reasons given by voters who chose a candidate from the science and technology (saintek) group are:
1. Success and achievements (24.9%)
2. A "Rector 4.0" (11.8%)
3. Leadership character (11.3%)
4. Intelligence and courage to advance ITB (10.7%)
5. Capability to improve ITB's ranking (7.7%)
Based on Figure 5, the five most common reasons given by voters who chose a candidate from the business and management group are:
1. Intelligence and courage to advance ITB (25.0%)
2. A "Rector 4.0" (13.8%)
3. Capability to improve ITB's ranking (13.8%)
4. Futuristic and outside-the-box (12.5%)
5. Success and achievements (7.5%)
From the lists above we can compare how candidates from the two groups are perceived:
- Candidates from both groups are seen as **up to date ("Rector 4.0")**, **intelligent and courageous enough to advance ITB**, and **capable of improving ITB's ranking**.
- Voters who chose a candidate from the **science and technology** group most often did so based on the candidate's **personal accomplishments** (success and achievements), at 24.9%.
- Voters who chose a candidate from the **business and management** group most often did so based on the candidate's **character** (intelligence and courage), at 25.0%.
- One reason appears only for candidates from the **science and technology** group: **leadership character** (11.3%).
- One reason appears only for candidates from the **business and management** group: **futuristic and outside-the-box thinking** (12.5%)
```
timestamp = df_lolos['Timestamp'].str.split(expand=True)
tanggal = timestamp[0].str.split('/',expand=True)
waktu = timestamp[1].str.split(':',expand=True)
print('sample date\n', tanggal.sample(),'\n')
print('sample time\n', waktu.sample())
jam = waktu[0].astype(int).value_counts().sort_index()
jam = jam.reindex(np.arange(jam.index[0],jam.index[-1]+1),fill_value=0)
am = jam[:12]
pm = jam[12:]
print(am,'\n',pm)
N = 12
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
width = np.pi / 6
radii = am
ax = plt.subplot(121,projection='polar')
bars = ax.bar(theta, radii, width=width, bottom=0.0, edgecolor='black')
# Use custom colors and opacity
for r, bar in zip(radii, bars):
bar.set_facecolor(plt.cm.viridis(r / radii.max()))
bar.set_alpha(0.8)
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
ax.set_xticks(theta)
ax.set_xticklabels(np.append(12,np.arange(1,N)).flatten())
ax.set_title('am', y =1)
radii = pm
with plt.style.context('dark_background'):
ax = plt.subplot(122,projection='polar')
bars = ax.bar(theta, radii, width=width, bottom=0.0, edgecolor='white')
# Use custom colors and opacity
for r, bar in zip(radii, bars):
bar.set_facecolor(plt.cm.twilight(r / radii.max()))
bar.set_alpha(0.9)
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
ax.set_rlabel_position(-30)
ax.set_xticks(theta)
ax.set_xticklabels(np.append(12,np.arange(1,N)).flatten(),c='black')
ax.set_title('pm',y = 1,c='black')
plt.suptitle('Number of questionnaire entries for advancing candidates by time of day')
plt.tight_layout()
```
# IMDB Dataset - Create Weak Supervision Sources and Get the Weak Data Annotations
This notebook shows how to use keywords as a weak supervision source on the example of a well-known IMDB Movie Review dataset, which targets a binary sentiment analysis task.
The original dataset has gold labels, but we will use these labels only for evaluation purposes, since we want to test models in the weak supervision setting with Knodle. The idea is that you do not have a dataset labeled entirely with strong (manual) supervision; instead you use heuristics (e.g. rules) to obtain a weak labeling. In the following tutorial, we will look for certain keywords expressing positive and negative sentiments that can help distinguish between the classes. Specifically, we use the [Opinion lexicon](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) of the University of Illinois at Chicago.
First, we load the dataset from the Knodle dataset collection. Then, we will create [Snorkel](https://www.snorkel.org/) labeling functions from two sets of keywords and apply them to the IMDB reviews. Please keep in mind that keyword matching can be done without Snorkel; however, we enjoy this library's transparent annotation functionality in our tutorial.
Each labeling function (i.e. keyword) will be further associated with a respective target label. This concludes the annotation step.
To estimate how good our weak labeling works on its own, we will use the resulting keyword matches together with a basic majority vote model. Finally, the preprocessed data will be saved in a knodle-friendly format, so that other denoising models can be trained with the IMDB dataset.
The IMDB dataset available in the Knodle collection was downloaded from [Kaggle](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) in January 2021.
## Imports
Let's make some basic imports.
```
import os
from tqdm import tqdm
from typing import List
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS, CountVectorizer
from snorkel.labeling import LabelingFunction, PandasLFApplier, filter_unlabeled_dataframe, LFAnalysis
from snorkel.labeling.model import MajorityLabelVoter, LabelModel
# client to access the dataset collection
from minio import Minio
client = Minio("knodle.dm.univie.ac.at", secure=False)
# Init
tqdm.pandas()
pd.set_option('display.max_colwidth', None)
# Constants for Snorkel labeling functions
POSITIVE = 1
NEGATIVE = 0
ABSTAIN = -1
COLUMN_WITH_TEXT = "reviews_preprocessed"
```
## Download the dataset
```
# define the path to the folder where the data will be stored
data_path = "../../../data_from_minio/imdb"
```
Together with the IMDB data, let us also collect the keywords that we will need later.
```
files = [("IMDB Dataset.csv",), ("keywords", "negative-words.txt"), ("keywords", "positive-words.txt")]
for file in tqdm(files):
client.fget_object(
bucket_name="knodle",
object_name=os.path.join("datasets/imdb", *file),
file_path=os.path.join(data_path, file[-1]),
)
```
## Preview dataset
After downloading and unpacking the dataset we can have a first look at it and work with it.
```
imdb_dataset_raw = pd.read_csv(os.path.join(data_path, "IMDB Dataset.csv"))
imdb_dataset_raw.head(1)
imdb_dataset_raw.groupby('sentiment').count()
imdb_dataset_raw.isna().sum()
```
## Preprocess dataset
Now let's take some basic preprocessing steps.
### Remove Stopwords
It could be a reasonable step for some classifiers, but since we use BERT among other approaches, we want to keep the sentence structure and hence do not remove stopwords just yet.
### Remove HTML Tags
The dataset contains many HTML tags. We'll remove them.
```
def strip_html(text):
soup = BeautifulSoup(text, "html.parser")
return soup.get_text()
imdb_dataset_raw[COLUMN_WITH_TEXT] = imdb_dataset_raw["review"].apply(
lambda x: strip_html(x))
imdb_dataset_raw[COLUMN_WITH_TEXT].head(1)
```
## Keywords
For weak supervision sources we use sentiment keyword lists for positive and negative words.
We have downloaded them from the Knodle collection earlier, with the IMDB dataset.
After parsing the keywords from separate files, they are stored in a pd.DataFrame with the corresponding sentiment as "label".
```
positive_keywords = pd.read_csv(os.path.join(data_path, "positive-words.txt"), sep=" ", header=None, on_bad_lines="skip", skiprows=30)
positive_keywords.columns = ["keyword"]
positive_keywords["label"] = "positive"
positive_keywords.head(2)
negative_keywords = pd.read_csv(os.path.join(data_path, "negative-words.txt"),
sep=" ", header=None, on_bad_lines="skip", encoding='latin-1', skiprows=30)
negative_keywords.columns = ["keyword"]
negative_keywords["label"] = "negative"
negative_keywords.head(2)
all_keywords = pd.concat([positive_keywords, negative_keywords])
all_keywords.label.value_counts()
# remove overlap of keywords between two sentiments
all_keywords.drop_duplicates('keyword',inplace=True)
all_keywords.label.value_counts()
```
## Labeling Functions
Now we start to build labeling functions with Snorkel from these keywords and check the coverage.
This is, of course, an iterative process, so we will surely have to add more keywords and regular expressions ;-)
```
def keyword_lookup(x, keyword, label):
return label if keyword in x[COLUMN_WITH_TEXT].lower() else ABSTAIN
def make_keyword_lf(keyword: str, label: str) -> LabelingFunction:
"""
Creates labeling function based on keyword.
Args:
keyword: the keyword to look for
label: the label this keyword implies
Returns: LabelingFunction object
"""
return LabelingFunction(
name=f"keyword_{keyword}",
f=keyword_lookup,
resources=dict(keyword=keyword, label=label),
)
def create_labeling_functions(keywords: pd.DataFrame) -> np.ndarray:
"""
Create labeling functions based on the keyword column. Appends a column `lf` to the DataFrame.
Args:
keywords: DataFrame with processed keywords
Returns:
All labeling functions, as a 1d array of length number_of_lfs
"""
keywords = keywords.assign(lf=keywords.progress_apply(
lambda x:make_keyword_lf(x.keyword, x.label_id), axis=1
))
lfs = keywords.lf.values
return lfs
all_keywords["label_id"] = all_keywords.label.map({'positive':POSITIVE, 'negative':NEGATIVE})
all_keywords
labeling_functions = create_labeling_functions(all_keywords)
labeling_functions
```
### Apply Labeling Functions
Now let's apply all labeling functions to our reviews and check some statistics.
```
applier = PandasLFApplier(lfs=labeling_functions)
applied_lfs = applier.apply(df=imdb_dataset_raw)
applied_lfs
```
Now we have a matrix with all labeling functions applied. This matrix has the shape $(\text{instances} \times \text{labeling functions})$
```
print("Shape of applied labeling functions: ", applied_lfs.shape)
print("Number of reviews", len(imdb_dataset_raw))
print("Number of labeling functions", len(labeling_functions))
```
### Analysis
Now we can analyse some basic stats about our labeling functions. The main figures are:
- Coverage: the fraction of instances each labeling function labels at all
- Overlaps: the fraction of instances where a labeling function overlaps with at least one other (e.g. "awesome" and "amazing" both firing)
- Conflicts: the fraction of instances where overlapping labeling functions assign different labels (e.g. "awesome" and "bad" both firing)
- Correct/Incorrect: the number of instances each LF labels correctly or incorrectly (only reported when gold labels are supplied)
```
LFAnalysis(L=applied_lfs, lfs=labeling_functions).lf_summary()
lf_analysis = LFAnalysis(L=applied_lfs, lfs=labeling_functions).lf_summary()
pd.DataFrame(lf_analysis.mean())
pd.DataFrame(lf_analysis.median())
```
Let's have a look at some examples that were labeled by a positive keyword. You can see that the true label for some of them is negative.
```
# inspect the keyword at index 1110
all_keywords.iloc[1110]
# sample 2 random examples where this LF assigned a positive label
imdb_dataset_raw.iloc[applied_lfs[:, 1110] == POSITIVE, :].sample(2, random_state=1).loc[:, [COLUMN_WITH_TEXT,'sentiment']]
```
## Transform rule matches
To work with knodle, the dataset needs to be transformed into a binary matrix $Z$
(shape: `#instances x #rules`), where cell $Z_{ij}$ is 1 if rule $j$ matched instance $i$ and 0 if it did not.
Furthermore, we need a matrix `mapping_rules_labels_t` which maps each rule to its label, stored in a binary manner as well
(shape: `#rules x #labels`).
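To make the shapes concrete, here is a toy-scale sketch (with made-up rule outputs, not the real IMDB matrices) of how a Snorkel-style label matrix could be converted into $Z$ and $T$ by hand; knodle's `transform_snorkel_matrix_to_z_t` helper presumably does something equivalent:

```python
import numpy as np

# Toy Snorkel-style output: 3 instances x 2 rules; entries are label ids, -1 = abstain.
# Assume rule 0 always votes POSITIVE (1) and rule 1 always votes NEGATIVE (0).
applied = np.array([[ 1, -1],
                    [ 1,  0],
                    [-1,  0]])
num_labels = 2

# Z is 1 wherever a rule matched (i.e. did not abstain)
rule_matches_z = (applied != -1).astype(int)

# T maps each rule to the single label it votes for
mapping_rules_labels_t = np.zeros((applied.shape[1], num_labels), dtype=int)
for j in range(applied.shape[1]):
    fired = applied[:, j][applied[:, j] != -1]
    if fired.size:
        mapping_rules_labels_t[j, fired[0]] = 1

print(rule_matches_z.shape, mapping_rules_labels_t.shape)  # (3, 2) (2, 2)
```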
```
from knodle.transformation.rule_label_format import transform_snorkel_matrix_to_z_t
rule_matches_z, mapping_rules_labels_t = transform_snorkel_matrix_to_z_t(applied_lfs)
```
### Majority vote
Now we take a majority vote over all rule matches. First we get the `rule_counts` by multiplying `rule_matches_z` with `mapping_rules_labels_t`; then we divide each row by its sum to get probability values. Finally, we counteract the division-by-zero issue by setting all NaN values to zero. All of this happens in the `z_t_matrices_to_majority_vote_labels` function.
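The arithmetic of the majority vote can be sketched in a few lines of NumPy (toy matrices for illustration; the actual knodle function resolves ties randomly, whereas `argmax` here always picks the lowest label):

```python
import numpy as np

rule_matches_z = np.array([[1, 0],
                           [1, 1],
                           [0, 1]])
mapping_rules_labels_t = np.array([[0, 1],   # rule 0 votes label 1
                                   [1, 0]])  # rule 1 votes label 0

rule_counts = rule_matches_z @ mapping_rules_labels_t         # instances x labels
probs = rule_counts / rule_counts.sum(axis=1, keepdims=True)  # row-normalize
probs = np.nan_to_num(probs)                                  # rows with no matches -> all zeros
majority_labels = probs.argmax(axis=1)
print(majority_labels)  # [1 0 0] -- the middle row is a tie
```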
```
from knodle.transformation.majority import z_t_matrices_to_majority_vote_labels
# the ties are resolved randomly internally, so the predictions might slightly vary
pred_labels = z_t_matrices_to_majority_vote_labels(rule_matches_z, mapping_rules_labels_t)
# accuracy of the weak labels
acc = (pred_labels == imdb_dataset_raw.sentiment.map({'positive':POSITIVE, 'negative':NEGATIVE})).mean()
f"Accuracy of majority vote on weak labeling: {acc}"
```
## Make splits
```
from scipy.sparse import csr_matrix
from sklearn.model_selection import train_test_split
# we want to save the matrix in a sparse format, so we convert it once before it gets split
rule_matches_sparse = csr_matrix(rule_matches_z)
# adjust DataFrame format to Knodle standard
imdb_dataset_formatted = pd.DataFrame({
"sample": imdb_dataset_raw[COLUMN_WITH_TEXT].values,
"label": imdb_dataset_raw.sentiment.map({'positive':POSITIVE, 'negative':NEGATIVE}).values
})
# make splits for samples and weak annotation matrix
rest_df, test_df, rest_rule_matches_sparse_z, test_rule_matches_sparse_z = train_test_split(imdb_dataset_formatted, rule_matches_sparse, test_size=5000, random_state=42)
train_df, dev_df, train_rule_matches_sparse_z, dev_rule_matches_sparse_z = train_test_split(rest_df, rest_rule_matches_sparse_z, test_size=5000, random_state=42)
# drop labels for the train split
train_df.drop(columns=["label"], inplace=True)
train_df.head(1)
test_df.head(1)
```
## Save files
```
from joblib import dump
dump(train_rule_matches_sparse_z, os.path.join(data_path, "train_rule_matches_z.lib"))
dump(dev_rule_matches_sparse_z, os.path.join(data_path, "dev_rule_matches_z.lib"))
dump(test_rule_matches_sparse_z, os.path.join(data_path, "test_rule_matches_z.lib"))
dump(mapping_rules_labels_t, os.path.join(data_path, "mapping_rules_labels_t.lib"))
```
We also save the preprocessed texts with labels to use them later to evaluate a classifier (both as CSV and binary).
```
train_df.to_csv(os.path.join(data_path, 'df_train.csv'), index=None)
dev_df.to_csv(os.path.join(data_path, 'df_dev.csv'), index=None)
test_df.to_csv(os.path.join(data_path, 'df_test.csv'), index=None)
dump(train_df, os.path.join(data_path, "df_train.lib"))
dump(dev_df, os.path.join(data_path, "df_dev.lib"))
dump(test_df, os.path.join(data_path, "df_test.lib"))
all_keywords.to_csv(os.path.join(data_path, 'keywords.csv'), index=None)
os.listdir(data_path)
```
# Finish
Now we have created a weak supervision dataset. Of course it is not perfect, but it is something we can use to compare the performance of different denoising methods. :-)
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
```
%matplotlib inline
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
import csv
```
# Principal Component/EOF analysis
**GOAL:** Demonstrate the use of the SVD to calculate principal components or "Empirical Orthogonal Functions" in a geophysical data set. This example is modified from a paper by Chris Small (LDEO)
Small, C., 1994. A global analysis of mid-ocean ridge axial topography. Geophys J Int 116, 64–84. [doi:10.1111/j.1365-246X.1994.tb02128.x](https://academic.oup.com/gji/article/116/1/64/638843/A-global-analysis-of-mid-ocean-ridge-axial)
## The Data
Here we will consider a set of topography profiles taken across the global mid-ocean ridge system where the Earth's tectonic plates are spreading apart.
<table>
<tr align=center><td><img align=center src="./images/World_OceanFloor_topo_green_brown_1440x720.jpg"><td>
</table>
The data consists of 156 profiles from a range of spreading rates. Each profile contains 80 samples so is in effect a vector in $R^{80}$
```
# read the data from the csv file
data = np.genfromtxt('m80.csv', delimiter='')
data_mean = np.mean(data,0)
# and plot out a few profiles and the mean depth.
plt.figure()
rows = [ 9,59,99]
labels = [ 'slow','medium','fast']
for i,row in enumerate(rows):
plt.plot(data[row,:],label=labels[i])
plt.plot(data_mean,'k--',label='mean')
plt.xlabel('Distance across axis (km)')
plt.ylabel('Relative Elevation (m)')
plt.legend(loc='best')
plt.title('Example cross-axis topography of mid-ocean ridges')
plt.show()
```
### EOF analysis
While each profile lives in an 80 dimensional space, we would like to see if we can classify the variability in fewer components. To begin we form a de-meaned data matrix $X$ where each row is a profile.
```
plt.figure()
X = data - data_mean
plt.imshow(X)
plt.xlabel('Distance across axis (Km)')
plt.ylabel('Relative Spreading Rate')
plt.colorbar()
plt.show()
```
### Applying the SVD
We now use the SVD to factor the data matrix as $X = U\Sigma V^T$
```
# now calculate the SVD of the de-meaned data matrix
U,S,Vt = la.svd(X,full_matrices=False)
```
And begin by looking at the spectrum of singular values $\Sigma$. Defining the variance as $\Sigma^2$ then we can also calculate the cumulative contribution to the total variance as
$$
g_k = \frac{\sum_{i=0}^k \sigma_i^2}{\sum_{i=0}^n \sigma_i^2}
$$
Plotting both $\Sigma$ and $g$ shows that $\sim$ 80% of the total variance can be explained by the first 4-5 Components
```
# plot the singular values
plt.figure()
plt.semilogy(S,'bo')
plt.grid()
plt.title('Singular Values')
plt.show()
# and cumulative percent of variance
g = np.cumsum(S*S)/np.sum(S*S)
plt.figure()
plt.plot(g,'bx-')
plt.title('% cumulative percent variance explained')
plt.grid()
plt.show()
```
Plotting the first three singular vectors in $V$ shows them to reflect some commonly occurring patterns in the data
```
plt.figure()
num_EOFs=3
for row in range(num_EOFs):
plt.plot(Vt[row,:],label='EOF{}'.format(row+1))
plt.grid()
plt.xlabel('Distance (km)')
plt.title('First {} EOFs '.format(num_EOFs))
plt.legend(loc='best')
plt.show()
```
For example, the first EOF pattern is primarily a symmetric pattern with an axial high surrounded by two off-axis troughs (or an axial low with two flanking highs; the EOF's are just unit-vector bases for the row space and can be added with any positive or negative coefficient). The second EOF is broader and all of one sign, while the third EOF encodes asymmetry.
### Reconstruction
Using the SVD we can also decompose each profile into a weighted linear combination of EOF's i.e.
$$
X = U\Sigma V^T = C V^T
$$
where $C = U\Sigma$ is a matrix of coefficients that describes how each data row is decomposed into the relevant basis vectors. We can then produce a rank-$k$ truncated representation of the data by
$$
X_k = C_k V_k^T
$$
where $C_k$ is the first $k$ columns of $C$ and $V_k$ is the first $k$ EOF's.
Here we show the original data and the reconstructed data using the first 5 EOF's
```
# reconstruct the data using the first 5 EOF's
k=5
Ck = np.dot(U[:,:k],np.diag(S[:k]))
Vtk = Vt[:k,:]
data_k = data_mean + np.dot(Ck,Vtk)
plt.figure()
plt.imshow(data_k)
plt.colorbar()
plt.title('reconstructed data')
plt.show()
```
And we can consider a few reconstructed profiles compared with the original data
```
# show the original 3 profiles and their reconstructed values using the first k EOF's
for i,row in enumerate(rows):
plt.figure()
plt.plot(data_k[row,:],label='k={}'.format(k))
plt.plot(data[row,:],label='original data')
Cstring = [ '{:3.0f}, '.format(Ck[row,i]) for i in range(k) ]
plt.title('Reconstruction profile {}:\n C_{}='.format(row,k)+''.join(Cstring))
plt.legend(loc='best')
plt.show()
```
## projection of data onto a subspace
We can also use the principal components to look at the projection of the data onto a lower-dimensional space, as the coefficients $C$ are simply the coordinates of our data along each principal component. For example, we can view the data in the 2-dimensional space defined by the first 2 EOF's by simply plotting $C_1$ against $C_2$.
```
# plot the data in the plane defined by the first two principal components
plt.figure()
plt.scatter(Ck[:,0],Ck[:,1])
plt.xlabel('$V_1$')
plt.ylabel('$V_2$')
plt.grid()
plt.title('Projection onto the first two principal components')
plt.show()
# Or consider the degree of asymmetry (EOF 3) as a function of spreading rate
plt.figure()
plt.plot(Ck[:,2],'bo')
plt.xlabel('Spreading rate')
plt.ylabel('$C_3$')
plt.grid()
plt.title('Degree of asymmetry')
plt.show()
```
## Coin Toss Game
This notebook simulates a coin-tossing game where the player picks a sequence of head/tail occurrences as the endgame. A coin is tossed until the chosen sequence occurs, at which point the game ends. The objective is to find the sequence of head/tail occurrences that ends the game in the fewest tosses. This is simulated by running a given number of coin-toss games with the chosen endgame sequence. The average number of tosses required to win is computed and its distribution is plotted.
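As a point of comparison for the simulation, there is a closed-form answer for a fair coin: by Conway's "leading numbers" (pattern correlation) result, the expected number of tosses until a pattern first appears is the sum of 2^k over every length k at which the pattern's length-k prefix equals its length-k suffix. A quick sketch (an addition for reference, not part of the simulation below):

```python
def expected_tosses(pattern: str) -> int:
    """Expected number of fair-coin tosses until `pattern` first appears,
    computed via Conway's correlation (leading-number) formula."""
    n = len(pattern)
    return sum(2 ** k for k in range(1, n + 1) if pattern[:k] == pattern[-k:])

# 'HHT' ends the game faster on average than 'HTH' or 'HHH'
print(expected_tosses('HHT'), expected_tosses('HTH'), expected_tosses('HHH'))  # 8 10 14
```

So among length-3 endgames, a sequence with no self-overlap such as 'HHT' minimizes the expectation at 8 tosses, which the simulated averages should approximately reproduce.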
```
# Import useful packages
import sys
import numpy as np
from collections import deque
import ipywidgets as widgets
from IPython.display import display
from bokeh.io import show, output_notebook
from bokeh.plotting import figure
output_notebook()
# Defines a class for the coin toss game
class CoinToss:
# Initializing variables
# Arguments: number of games to simulate, sequence of head/tail to end the game
def __init__(self, number_of_games, endgame):
self.numgames = number_of_games
self.endgamelen = len(endgame)
self.endgame = endgame
self.reset()
self.check_endgame()
# Reset all the counters
def reset(self):
self.counter = np.zeros(self.numgames)
# Check if endgame is correctly inputted
def check_endgame(self):
if self.endgame.strip('HT'):
sys.exit('ERROR: Endgame can only be a string containing Hs and/or Ts')
# Run the coin toss game
def run(self):
# Initialize a queue for the current sequence
curr_seq = deque('', self.endgamelen)
endgame_reached = False
# Start looping through number of games
for i in range(self.numgames):
while not endgame_reached:
# Update counter for current game
self.counter[i] += 1
# Check if coin toss resulted in a Head or a Tail
if np.random.random_sample() < 0.5:
curr_seq.append('H')
else:
curr_seq.append('T')
# Check if the current sequence is equal to the endgame
check = sum(cs == eg for (cs, eg) in zip(curr_seq, self.endgame))
if check == self.endgamelen:
endgame_reached = True
curr_seq.clear()
endgame_reached = False
print('Average number of tosses to reach endgame = ', np.mean(self.counter))
# Plot the distribution of number of tosses required to end the game
def plot_counts(self):
hist, edges = np.histogram(self.counter, density=False, bins=50)
p = figure(plot_width=800, plot_height=300,
title='{} games with endgame {}'.format(self.numgames, self.endgame),
tools='', background_fill_color='#fafafa')
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="navy",
line_color="white", alpha=0.5)
p.y_range.start = 0
p.xaxis.axis_label = 'Number of tosses to reach endgame'
p.yaxis.axis_label = 'counts'
show(p)
# Start the coin tossing game simulation
def start_tossing(number_of_games=2000, endgame='HHH'):
c = CoinToss(number_of_games, endgame)
c.run()
c.plot_counts()
# # Interactive control for entering number of games
# style = {'description_width': 'initial'}
# number_of_games = widgets.IntSlider(description='Number of games', style=style,
# min=1, max=5000, step=1, value=200, continuous_update=False)
# # Interactive control for entering the endgame
# endgame=widgets.Text(value='HH', placeholder='Type endgame',
# description='Endgame:', disabled=False)
# # Creating the interactive controls
# widget_ui = widgets.HBox([number_of_games, endgame])
# widget_out = widgets.interactive_output(start_tossing,
# {'number_of_games': number_of_games, 'endgame': endgame})
# Display the controls and output
# display(widget_ui, widget_out)
start_tossing(2000, endgame='HHH')
```
_author_: **Jimit Dholakia**
```
# https://www.careerbuilder.com/browse
import requests
from bs4 import BeautifulSoup
import pandas as pd
from pprint import pprint
import numpy as np
from tqdm.auto import tqdm
import sys
import time
df = pd.DataFrame(columns=['Main Category', 'Sub Category', 'Job Title'])
url = 'https://www.careerbuilder.com/browse'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
main_categories = soup.find_all("div", class_="col col-mobile-full bloc")
start = time.time()
df = pd.DataFrame(columns=['Main Category', 'Sub Category', 'Sub Category Link', 'Job Title', 'Job Title Link', 'Salary', 'Salary Link'])
# output = {}
for element in tqdm(main_categories, desc='Main Category'):
main_category_name = element.find('h3')
sub_cat = element.find_all('li')
sub_cat_list = []
for sub_cat_name in tqdm(sub_cat, leave=False, desc='Sub Category'):
x = sub_cat_name.find('a')
sub_cat_link = 'https://www.careerbuilder.com' + x.get('href')
sub_cat_name = x.text
r1 = requests.get(sub_cat_link)
soup1 = BeautifulSoup(r1.content, 'html.parser')
titles = soup1.find_all("div", class_="col link-list col-mobile-full")
for title in tqdm(titles, leave=False, desc='Title'):
title_name = title.find('a')
title_link = 'https://www.careerbuilder.com' + title_name.get('href')
print_text = title_name.text + ' ' * (100 - len(title_name.text))
# print('Running for Title: ' + print_text , end='\r')
# sys.stdout.write("\rRunning for Title: " + title_name.text)
# sys.stdout.flush()
try:
r2 = requests.get(title_link)
soup2 = BeautifulSoup(r2.content, 'html.parser')
salary_info = soup2.find("div", class_='salary-information')
salary = salary_info.find('a').text
salary_link = 'https://www.careerbuilder.com' + salary_info.find('a').get('href')
try:
r3 = requests.get(salary_link)
soup3 = BeautifulSoup(r3.content, 'html.parser')
except:
pass
# Average National Salary
try:
avg_salary = soup3.find_all("div", class_="fl-l")[0].text
except:
avg_salary = np.nan
# Skills by Educational Level
edu_dict = {}
try:
edu_salaries = soup3.find_all("div", class_="educational-link")
for edu_salary in edu_salaries:
edu_dict[edu_salary.find('span', class_="small-font").text] = edu_salary.find('h3').text
except:
edu_dict = {}
# Skills
skills_list = []
try:
skills_list_soup = soup3.find_all('div', class_='block salary-page-skills')[0].find_all('span', class_='bubble-link dn-i')
for skill in skills_list_soup:
skills_list.append(skill.text)
except:
skills_list = []
# No. of candidates and no. of jobs
try:
cand_jobs = soup3.find_all('div', class_='block salary-jobs-box')
num_candidates = cand_jobs[0].find_all('div', class_='bloc')[0].find('h1').text
except:
num_candidates = np.nan
try:
num_jobs = cand_jobs[0].find_all('div', class_='bloc')[1].find('h1').text
except:
num_jobs = np.nan
except:
salary = np.nan
salary_link = ''
edu_dict = {}
avg_salary = np.nan
skills_list = []
num_candidates = np.nan
num_jobs = np.nan
df_row = {
'Main Category': main_category_name.text,
'Sub Category': sub_cat_name,
'Sub Category Link': sub_cat_link,
'Job Title': title_name.text,
'Job Title Link': title_link,
'Salary': salary,
'Salary Link': salary_link,
'Average Salary': avg_salary,
'Educational Levels': edu_dict,
'Skills': skills_list,
'No of Candidates': num_candidates,
'No of Jobs': num_jobs
}
df = pd.concat([df, pd.DataFrame([df_row])], ignore_index=True)
filename = 'Job Info_311021_' + str(main_categories.index(element)) + '.csv'
df.to_csv(filename, index=False)
end = time.time()
print('Time Taken:', end-start, 'seconds')
print(df.shape)
df.to_csv('Job Information.csv', index=False)
print('Complete! Yay!!')
print('Done!')
```
```
import numpy as np
import cv2
import torch
from functions import show_tensor
```
### Select device
```
device = torch.device("cuda")
```
### Create default or your own generator and EMA generator
```
from generator import define_G
def make_generator():
gen = define_G(input_nc = 3, output_nc = 3, ngf = 64, netG = "global", norm = "instance", n_downsample_global = 3, n_blocks_global = 9, n_local_enhancers = 1, n_blocks_local = 3).to(device)
return gen
generator = make_generator()
generator_ema = make_generator()
# Initialize generators with equal parameters
with torch.no_grad():
for (gp, ep) in zip(generator.parameters(), generator_ema.parameters()):
ep.data = gp.data.detach()
```
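During training the EMA generator is updated toward the live generator's weights (the notebook later imports a `moving_average` helper for this, whose source is not shown here). A NumPy stand-in for what a single EMA step presumably looks like — an assumption; the actual helper may differ:

```python
import numpy as np

def ema_update(ema_params, params, beta=0.999):
    # One EMA step per parameter tensor: ema <- beta * ema + (1 - beta) * current
    for ep, p in zip(ema_params, params):
        ep *= beta
        ep += (1.0 - beta) * p

# Tiny usage example with arrays standing in for parameter tensors
ema = [np.array([0.0])]
cur = [np.array([1.0])]
ema_update(ema, cur, beta=0.9)
print(ema[0])  # [0.1]
```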
### Use the default or your own discriminator
```
from discriminator import define_D
discriminator = define_D(input_nc = 3 + 3, ndf = 64, n_layers_D = 3, num_D = 3, norm="instance", getIntermFeat=True).to(device)
import losses
criterionGAN = losses.GANLoss(use_lsgan=True).to(device)
criterionFeat = torch.nn.L1Loss().to(device)
criterionVGG = losses.VGGLoss().to(device)
from replay_pool import ReplayPool
replay_pool = ReplayPool(10)
G_optim = torch.optim.AdamW(generator.parameters(), lr=1e-4)
D_optim = torch.optim.AdamW(discriminator.parameters(), lr=1e-4)
```
### Modify the dataset class for your requirements
```
from torchvision import transforms
from random import uniform, randint
from glob import glob
import os
class Dataset(torch.utils.data.Dataset):
def __init__(self, images_dir):
self.to_tensor = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
self.imagesDir = images_dir
self.images = glob(images_dir + "*.jpg")
def __getitem__(self, idx):
from random import random, randint, uniform
f_name = self.images[idx]
pair = cv2.imread(f_name, 1)
pair = cv2.cvtColor(pair, cv2.COLOR_BGR2RGB)
mid = pair.shape[1]//2
dst = pair[:, :mid]
src = pair[:, mid:]
w = 256
h = 256
if random() < 0.5:
src = np.fliplr(src)
dst = np.fliplr(dst)
src_tensor = self.to_tensor(src.copy())
dst_tensor = self.to_tensor(dst.copy())
return src_tensor, dst_tensor
def __len__(self):
return len(self.images)
batch_size = 8
train_dataset = Dataset("./data/facades/train/")
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size, num_workers=4, shuffle=True)
test_dataset = Dataset("./data/facades/val/")
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size, num_workers=4, shuffle=True)
data, targets = next(iter(train_loader))
show_tensor(data[0])
show_tensor(targets[0])
checkpoint_dir = "./checkpoints/facades/"
images_output_dir = os.path.join(checkpoint_dir, "images")
if not os.path.exists(images_output_dir):
os.makedirs(images_output_dir)
def test(epoch, iteration):
generator.eval()
with torch.no_grad():
data, target = next(iter(test_loader))
data = data.to(device)
out = generator_ema(data)
matrix = []
pairs = torch.cat([data, out, target.to(device)], -1)
for idx in range(data.shape[0]):
img = 255*(pairs[idx] + 1)/2
img = img.cpu().permute(1, 2, 0).clip(0, 255).numpy().astype(np.uint8)
matrix.append(img)
matrix = np.vstack(matrix)
matrix = cv2.cvtColor(matrix, cv2.COLOR_RGB2BGR)
out_file = os.path.join(images_output_dir, f"{epoch}_{iteration}.jpg")
cv2.imwrite(out_file, matrix)
from moving_average import moving_average
from tqdm import tqdm
def process_loss(log, losses):
loss = 0
for k in losses:
if k not in log:
log[k] = 0
log[k] += losses[k].item()
loss = loss + losses[k]
return loss
def calc_G_losses(data, target):
fake = generator(data)
loss_vgg = 1*criterionVGG(fake, target)
pred_fake = discriminator(torch.cat([data, fake], axis=1))
loss_adv = 1*criterionGAN(pred_fake, 1)
with torch.no_grad():
pred_true = discriminator(torch.cat([data, target], axis=1))
loss_adv_feat = 0
adv_feats_count = 0
for d_fake_out, d_true_out in zip(pred_fake, pred_true):
for l_fake, l_true in zip(d_fake_out[: -1], d_true_out[: -1]):
loss_adv_feat = loss_adv_feat + criterionFeat(l_fake, l_true)
adv_feats_count += 1
loss_adv_feat = 1*(4/adv_feats_count)*loss_adv_feat
return {"G_vgg": loss_vgg, "G_adv": loss_adv, "G_adv_feat": loss_adv_feat}
def calc_D_losses(data, target):
with torch.no_grad():
gen_out = generator(data)
fake = replay_pool.query({"input": data.detach(), "output": gen_out.detach()})
pred_true = discriminator(torch.cat([data, target], axis=1))
loss_true = criterionGAN(pred_true, 1)
pred_fake = discriminator(torch.cat([fake["input"], fake["output"]], axis=1))
loss_false = criterionGAN(pred_fake, 0)
return {"D_true": loss_true, "D_false": loss_false}
def train(epoch):
print(f"Training epoch {epoch}...")
generator.train()
discriminator.train()
N = 0
log = {}
pbar = tqdm(train_loader)
for data, target in pbar:
with torch.no_grad():
data = data.to(device)
target = target.to(device)
G_optim.zero_grad()
generator.requires_grad_(True)
discriminator.requires_grad_(False)
G_losses = calc_G_losses(data, target)
G_loss = process_loss(log, G_losses)
G_loss.backward()
del G_losses
G_optim.step()
moving_average(generator, generator_ema)
D_optim.zero_grad()
generator.requires_grad_(False)
discriminator.requires_grad_(True)
D_losses = calc_D_losses(data, target)
D_loss = process_loss(log, D_losses)
D_loss.backward()
del D_losses
D_optim.step()
txt = ""
N += 1
if (N%100 == 0) or (N + 1 >= len(train_loader)):
for i in range(3):
test(epoch, N + i)
for k in log:
txt += f"{k}: {log[k]/N:.3e} "
pbar.set_description(txt)
if (N%1000 == 0) or (N + 1 >= len(train_loader)):
import datetime
out_file = f"epoch_{epoch}_{datetime.datetime.now().strftime('%Y-%m-%d %H:%M')}.pt"
out_file = os.path.join(checkpoint_dir, out_file)
torch.save({"G": generator_ema.state_dict(), "D": discriminator.state_dict()}, out_file)
print(f"Saved to {out_file}")
```
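With `use_lsgan=True`, `criterionGAN` above is, in LSGAN form, a mean-squared error against the target label (1 for real, 0 for fake). A scalar sketch of what `calc_D_losses` and the generator's adversarial term therefore minimise (the discriminator scores below are made-up numbers for illustration):

```
def lsgan_loss(preds, target):
    """LSGAN objective: MSE between discriminator outputs and the 0/1 target label."""
    return sum((p - target) ** 2 for p in preds) / len(preds)

d_on_real = [0.9, 0.8]   # hypothetical discriminator scores on real pairs
d_on_fake = [0.2, 0.1]   # hypothetical scores on replayed fakes
d_loss = lsgan_loss(d_on_real, 1) + lsgan_loss(d_on_fake, 0)  # push real->1, fake->0
g_loss = lsgan_loss(d_on_fake, 1)                             # generator pushes fakes toward 1
```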
### Run a quick sanity test (restore saved weights here first if resuming from a checkpoint)
```
test(0, 0)
```
### Start
```
for epoch in range(0, 1000):
train(epoch)
```
# Classify Genres and Emotions in Songs Using Deep Learning
## Description:
The goal of this lab is to recognize the genre and extract the emotions from spectrograms of music songs. We are given 2 datasets:
- Free Music Archive (FMA) genre subset, which contains 3834 samples from 20 music genres.
- Multitask music dataset, which contains 1497 samples labeled with emotion-related attributes such as valence, energy and danceability.
All samples are spectrograms extracted from 30-second clips of different songs. We will analyze the spectrograms using deep learning architectures such as Recurrent Neural Networks and Convolutional Neural Networks. The exercise is divided into 5 parts:
- Data analysis and familiarization with spectrograms.
- Implement classifiers about the music genre using the FMA dataset.
- Implement regression models for predicting valence, energy and danceability.
- Use of modern training techniques, such as transfer and multitask learning, to improve the previous results.
- Submit results in the Kaggle competition of the exercise (optional).
```
# Import necessary libraries
import numpy as np
import copy
import re
import os
import pandas as pd
import random
import matplotlib.pyplot as plt
# Sklearn
from sklearn.metrics import f1_score, accuracy_score, recall_score, classification_report
from sklearn.preprocessing import LabelEncoder
# Pytorch
import torch
from torch import nn
from torch import optim
from torch.utils.data import Dataset
from torch.utils.data import SubsetRandomSampler, DataLoader
import torch.nn.functional as F
from torchvision import transforms
import torchvision.models as models
# Scipy
from scipy.stats import spearmanr
# Other
from IPython.display import Image
```
## Data Loading
In this section, all the code for loading and manipulating the 2 datasets is provided. Some parts are the same as in the prepare_lab.
```
# Combine similar classes and remove underrepresented classes.
class_mapping = {
'Rock': 'Rock',
'Psych-Rock': 'Rock',
'Indie-Rock': None,
'Post-Rock': 'Rock',
'Psych-Folk': 'Folk',
'Folk': 'Folk',
'Metal': 'Metal',
'Punk': 'Metal',
'Post-Punk': None,
'Trip-Hop': 'Trip-Hop',
'Pop': 'Pop',
'Electronic': 'Electronic',
'Hip-Hop': 'Hip-Hop',
'Classical': 'Classical',
'Blues': 'Blues',
'Chiptune': 'Electronic',
'Jazz': 'Jazz',
'Soundtrack': None,
'International': None,
'Old-Time': None
}
# Define a function that splits a dataset in train and validation set.
def torch_train_val_split(dataset, batch_train, batch_eval, val_size=.2, shuffle=True, seed=None):
# Creating data indices for training and validation splits:
dataset_size = len(dataset)
indices = list(range(dataset_size))
val_split = int(np.floor(val_size * dataset_size))
if shuffle:
np.random.seed(seed)
np.random.shuffle(indices)
# Rearrange train and validation set
train_indices = indices[val_split:]
val_indices = indices[:val_split]
# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = DataLoader(dataset,
batch_size=batch_train,
sampler=train_sampler)
val_loader = DataLoader(dataset,
batch_size=batch_eval,
sampler=val_sampler)
return train_loader, val_loader
# Define some useful functions for loading spectrograms and chromagrams
def read_fused_spectrogram(spectrogram_file):
spectrogram = np.load(spectrogram_file)
return spectrogram.T
def read_mel_spectrogram(spectrogram_file):
spectrogram = np.load(spectrogram_file)[:128]
return spectrogram.T
def read_chromagram(spectrogram_file):
spectrogram = np.load(spectrogram_file)[128:]
return spectrogram.T
# Define an encoder for the labels
class LabelTransformer(LabelEncoder):
    def inverse(self, y):
        try:
            return super(LabelTransformer, self).inverse_transform(y)
        except Exception:
            return super(LabelTransformer, self).inverse_transform([y])
    def transform(self, y):
        try:
            return super(LabelTransformer, self).transform(y)
        except Exception:
            return super(LabelTransformer, self).transform([y])
# Define a PaddingTransformer in order to convert all input sequences to the same length
class PaddingTransform(object):
def __init__(self, max_length, padding_value=0):
self.max_length = max_length
self.padding_value = padding_value
def __call__(self, s):
if len(s) == self.max_length:
return s
if len(s) > self.max_length:
return s[:self.max_length]
if len(s) < self.max_length:
s1 = copy.deepcopy(s)
pad = np.zeros((self.max_length - s.shape[0], s.shape[1]), dtype=np.float32)
s1 = np.vstack((s1, pad))
return s1
# Define useful parameters that are the same for all the models.
num_mel = 128
num_chroma = 12
n_classes = 10
```
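`torch_train_val_split` above carves one dataset into two samplers by shuffling the indices and cutting at `floor(val_size * N)`. The index arithmetic alone, in plain Python:

```
import math
import random

def split_indices(n, val_size=0.2, seed=42):
    """Shuffle 0..n-1 and cut off the first floor(val_size*n) indices for validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = math.floor(val_size * n)
    return idx[cut:], idx[:cut]  # train indices, validation indices

train_idx, val_idx = split_indices(100, val_size=0.33)
```

Every index lands in exactly one of the two sets, so the two `SubsetRandomSampler`s never overlap.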
- Define a Pytorch dataset for the 1st dataset.
```
class SpectrogramDataset(Dataset):
def __init__(self, path, class_mapping=None, train=True, max_length=-1, read_spec_fn=read_fused_spectrogram):
t = 'train' if train else 'test'
p = os.path.join(path, t)
self.index = os.path.join(path, "{}_labels.txt".format(t))
self.files, labels = self.get_files_labels(self.index, class_mapping)
self.feats = [read_spec_fn(os.path.join(p, f)) for f in self.files]
self.feat_dim = self.feats[0].shape[1]
self.lengths = [len(i) for i in self.feats]
self.max_length = max(self.lengths) if max_length <= 0 else max_length
self.zero_pad_and_stack = PaddingTransform(self.max_length)
self.label_transformer = LabelTransformer()
if isinstance(labels, (list, tuple)):
self.labels = np.array(self.label_transformer.fit_transform(labels)).astype('int64')
def get_files_labels(self, txt, class_mapping):
with open(txt, 'r') as fd:
lines = [l.rstrip().split('\t') for l in fd.readlines()[1:]]
files, labels = [], []
for l in lines:
label = l[1]
if class_mapping:
label = class_mapping[l[1]]
if not label:
continue
# Kaggle automatically unzips the npy.gz format so this hack is needed
_id = l[0].split('.')[0]
npy_file = '{}.fused.full.npy'.format(_id)
files.append(npy_file)
labels.append(label)
return files, labels
def __getitem__(self, item):
# Return a tuple in the form (padded_feats, label, length)
l = min(self.lengths[item], self.max_length)
return self.zero_pad_and_stack(self.feats[item]), self.labels[item], l
def __len__(self):
return len(self.labels)
```
- Load mel spectrograms.
```
mel_specs = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=True,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
train_loader_mel, val_loader_mel = torch_train_val_split(mel_specs, 32, 32, val_size=.33)
print("Shape of a train example in SpectrogramDataset: ")
print(mel_specs[1][0].shape)
```
We should pad the test set so that it has the same shape as the train set (otherwise it cannot be fed to the following CNN).
```
test_mel = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=False,
class_mapping=class_mapping,
max_length=1293,
read_spec_fn=read_mel_spectrogram)
test_loader_mel = DataLoader(test_mel, batch_size=32)
```
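The `max_length=1293` argument above relies on `PaddingTransform`, which truncates or zero-pads each spectrogram along the time axis to a fixed number of frames. The same logic on plain nested lists:

```
def pad_or_truncate(seq, max_length, width, padding_value=0.0):
    """Make `seq` exactly `max_length` rows of `width` features each."""
    if len(seq) >= max_length:
        return seq[:max_length]                     # truncate long sequences
    pad = [[padding_value] * width for _ in range(max_length - len(seq))]
    return seq + pad                                # zero-pad short sequences

short = [[1.0, 2.0]]            # 1 frame, 2 features
out = pad_or_truncate(short, 3, 2)
```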
### Step 7: 2D CNN
Another way of constructing a model for analyzing speech signals is to treat the spectrogram as an image and use a Convolutional Neural Network.
- [Here](https://cs.stanford.edu/people/karpathy/convnetjs/) we can train simple Convolutional Networks and inspect their internal workings by displaying the activations of each layer, without writing any code. We will train a CNN on MNIST and comment on what it learns.
```
Image("../input/images/convnetjs_1.png")
```
The dataset is fairly easy and the accuracy for the validation set is very high.
```
Image("../input/images/convnetjs_2.png")
Image("../input/images/convnetjs_3.png")
```
In each layer, each neuron learns a part of the corresponding digit.
```
Image("../input/images/convnetjs_4.png")
```
The only wrong prediction on the test set is an image that even a person might misclassify, since the digit is not clearly written.
- Define a 2D CNN with 4 layers. Each layer implements the following operations:
    - 2D convolution: extracts features from the input image. Convolution preserves the spatial relationship between pixels by learning image features over small patches of the input.
    - Batch normalization: speeds up learning.
    - ReLU activation: introduces non-linearity into the ConvNet, since most of the real-world data we want the ConvNet to learn is non-linear.
    - Max pooling: reduces the dimensionality of each feature map while retaining the most important information.
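The `nn.Linear(4680, 128)` size in the code below is fixed by the input shape: a padded mel spectrogram enters as a 1293x128 "image", and each valid convolution followed by a 2x2 max-pool shrinks it. Tracing the spatial dimensions (assuming the 1293-frame padding used above):

```
def conv(n, k):
    """'Valid' convolution with stride 1: output size is n - k + 1."""
    return n - k + 1

def pool(n):
    """2x2 max pooling with stride 2: output size is floor(n / 2)."""
    return n // 2

h, w = 1293, 128                  # time frames x mel bands
for k in (5, 5, 3, 3):            # kernel sizes of conv1..conv4
    h, w = pool(conv(h, k)), pool(conv(w, k))
flat = 12 * h * w                 # 12 channels after conv4
print(h, w, flat)  # 78 5 4680
```

So the flattened feature vector has 12 * 78 * 5 = 4680 entries, matching `fc1`.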
```
# Define a function that trains the model for an epoch.
def train_dataset(_epoch, dataloader, model, loss_function, optimizer):
# IMPORTANT: switch to train mode
    # Enable regularization layers, such as Dropout
model.train()
running_loss = 0.0
    # Obtain the model's device ID
device = next(model.parameters()).device
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 1 - zero the gradients
# Remember that PyTorch accumulates gradients.
# We need to clear them out before each batch!
optimizer.zero_grad()
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss = loss_function(y_preds, labels)
# Step 4 - backward pass: compute gradient wrt model parameters
loss.backward()
# Step 5 - update weights
optimizer.step()
# Accumulate loss in a variable.
running_loss += loss.data.item()
return running_loss / index
# Define a function that evaluates the model in an epoch.
def eval_dataset(dataloader, model, loss_function):
# IMPORTANT: switch to eval mode
# Disable regularization layers, such as Dropout
model.eval()
running_loss = 0.0
y_pred = [] # the predicted labels
y = [] # the gold labels
# Obtain the model's device ID
device = next(model.parameters()).device
# IMPORTANT: in evaluation mode, we don't want to keep the gradients
# so we do everything under torch.no_grad()
with torch.no_grad():
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Step 1 - move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths) # EX9
# Step 3 - compute loss: L = loss_function(y, y')
# We compute the loss only for inspection (compare train/test loss)
# because we do not actually backpropagate in test time
loss = loss_function(y_preds, labels)
# Step 4 - make predictions (class = argmax of posteriors)
y_preds_arg = torch.argmax(y_preds, dim=1)
# Step 5 - collect the predictions, gold labels and batch loss
y_pred.append(y_preds_arg.cpu().numpy())
y.append(labels.cpu().numpy())
# Accumulate loss in a variable
running_loss += loss.data.item()
return running_loss / index, (y, y_pred)
# Define the CNN architecture
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(1, 3, 5)
self.conv1_bn = nn.BatchNorm2d(3)
self.pool1 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(3, 6, 5)
self.conv2_bn = nn.BatchNorm2d(6)
self.pool2 = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(6, 8, 3)
self.conv3_bn = nn.BatchNorm2d(8)
self.pool3 = nn.MaxPool2d(2, 2)
self.conv4 = nn.Conv2d(8, 12, 3)
self.conv4_bn = nn.BatchNorm2d(12)
self.pool4 = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(4680, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x, lengths):
x = x.view(x.size(0), 1, x.size(1), x.size(2))
x = self.pool1(F.relu( self.conv1_bn(self.conv1(x))))
x = self.pool2(F.relu( self.conv2_bn(self.conv2(x))))
x = self.pool3(F.relu( self.conv3_bn(self.conv3(x))))
x = self.pool4(F.relu( self.conv4_bn(self.conv4(x))))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
```
Train the model on the mel spectrograms.
```
EPOCHS = 15
model = ConvNet()
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
n_epochs_stop = 4
min_val_loss = float('inf')
epochs_no_improve = 0
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_mel, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_mel, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_mel, model, loss_function)
if val_loss < min_val_loss:
# Save the model
torch.save(model, "./mel_cnn")
epochs_no_improve = 0
min_val_loss = val_loss
else:
epochs_no_improve += 1
if epochs_no_improve == n_epochs_stop:
print('Early stopping!')
break
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
# Save the model for future evaluation
torch.save(model, './mel_cnn')
# Load the model
model = torch.load('./mel_cnn')
model.eval()
```
Evaluate the CNN model on the test set.
```
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_mel, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Test loss: %f" %test_loss)
print("Accuracy for test:" , accuracy_score(y_test_true, y_test_pred))
print()
```
We observe that the CNN architecture increased the accuracy of our model.
## Step 8: Sentiment prediction with regression
Now, we will use the multitask dataset.
```
# Define a Pytorch dataset for the Multitask dataset
class MultitaskDataset(Dataset):
def __init__(self, path, max_length=-1, read_spec_fn=read_fused_spectrogram, label_type='energy'):
p = os.path.join(path, 'train')
self.label_type = label_type
self.index = os.path.join(path, "train_labels.txt")
self.files, labels = self.get_files_labels(self.index)
self.feats = [read_spec_fn(os.path.join(p, f)) for f in self.files]
self.feat_dim = self.feats[0].shape[1]
self.lengths = [len(i) for i in self.feats]
self.max_length = max(self.lengths) if max_length <= 0 else max_length
self.zero_pad_and_stack = PaddingTransform(self.max_length)
if isinstance(labels, (list, tuple)):
self.labels = np.array(labels)
def get_files_labels(self, txt):
with open(txt, 'r') as fd:
lines = [l.split(',') for l in fd.readlines()[1:]]
files, labels = [], []
for l in lines:
if self.label_type == 'valence':
labels.append(float(l[1]))
elif self.label_type == 'energy':
labels.append(float(l[2]))
elif self.label_type == 'danceability':
labels.append(float(l[3].strip("\n")))
else:
labels.append([float(l[1]), float(l[2]), float(l[3].strip("\n"))])
# Kaggle automatically unzips the npy.gz format so this hack is needed
_id = l[0]
npy_file = '{}.fused.full.npy'.format(_id)
files.append(npy_file)
return files, labels
def __getitem__(self, item):
# Return a tuple in the form (padded_feats, valence, energy, danceability, length)
l = min(self.lengths[item], self.max_length)
return self.zero_pad_and_stack(self.feats[item]), self.labels[item], l
def __len__(self):
return len(self.labels)
```
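`get_files_labels` above reads a comma-separated `train_labels.txt` whose rows are `id,valence,energy,danceability`. The per-line parsing it performs, sketched on one hypothetical row:

```
def parse_row(line, label_type="energy"):
    """Split one label row; return the npy filename and the requested target."""
    cols = line.strip().split(",")
    valence, energy, dance = (float(c) for c in cols[1:4])
    labels = {"valence": valence, "energy": energy, "danceability": dance}
    npy_file = "{}.fused.full.npy".format(cols[0])
    # Unknown label_type falls back to all three targets, as in the dataset class.
    return npy_file, labels.get(label_type, [valence, energy, dance])

f, y = parse_row("1234,0.52,0.81,0.33\n", label_type="danceability")
```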
- Load the multitask dataset with fixed padding so that it fits the previous CNN.
```
specs_multi_valence = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset/',
max_length=1293,
label_type='valence',
read_spec_fn=read_mel_spectrogram)
train_loader_valence, val_loader_valence = torch_train_val_split(specs_multi_valence, 32, 32, val_size=.33)
specs_multi_energy = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset/',
max_length=1293,
label_type='energy',
read_spec_fn=read_mel_spectrogram)
train_loader_energy, val_loader_energy = torch_train_val_split(specs_multi_energy, 32, 32, val_size=.33)
specs_multi_danceability = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset/',
max_length=1293,
label_type='danceability',
read_spec_fn=read_mel_spectrogram)
train_loader_danceability, val_loader_danceability = torch_train_val_split(specs_multi_danceability, 32, 32, val_size=.33)
print(specs_multi_valence[0][0].shape)
print(specs_multi_energy[0][0].shape)
print(specs_multi_danceability[0][0].shape)
```
- Load the beat multitask dataset for the LSTM.
```
beat_specs_multi_valence = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset_beat/',
max_length=-1,
label_type='valence',
read_spec_fn=read_mel_spectrogram)
beat_train_loader_valence, beat_val_loader_valence = torch_train_val_split(beat_specs_multi_valence, 32, 32, val_size=.33)
beat_specs_multi_energy = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset_beat/',
max_length=-1,
label_type='energy',
read_spec_fn=read_mel_spectrogram)
beat_train_loader_energy, beat_val_loader_energy = torch_train_val_split(beat_specs_multi_energy, 32, 32, val_size=.33)
beat_specs_multi_danceability = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset_beat/',
max_length=-1,
label_type='danceability',
read_spec_fn=read_mel_spectrogram)
beat_train_loader_danceability, beat_val_loader_danceability = torch_train_val_split(beat_specs_multi_danceability, 32, 32, val_size=.33)
print(beat_specs_multi_valence[0][0].shape)
print(beat_specs_multi_energy[0][0].shape)
print(beat_specs_multi_danceability[0][0].shape)
```
- Define the LSTM of Step 5 (prepare_lab) and train it on the beat mel multitask dataset.
```
class BasicLSTM(nn.Module):
def __init__(self, input_dim, rnn_size, output_dim, num_layers, bidirectional=False, dropout=0):
super(BasicLSTM, self).__init__()
self.bidirectional = bidirectional
self.rnn_size = rnn_size
self.feature_size = rnn_size * 2 if self.bidirectional else rnn_size
self.num_layers = num_layers
self.dropout = dropout
# Initialize the LSTM, Dropout, Output layers
self.lstm = nn.LSTM(input_dim, self.rnn_size, self.num_layers, bidirectional=self.bidirectional, batch_first=True, dropout=self.dropout)
self.linear = nn.Linear(self.feature_size, output_dim)
def forward(self, x, lengths):
"""
x : 3D numpy array of dimension N x L x D
N: batch index
L: sequence index
D: feature index
lengths: N x 1
"""
# Obtain the model's device ID
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# You must have all of the outputs of the LSTM, but you need only the last one (that does not exceed the sequence length)
# To get it use the last_timestep method
# Then pass it through the remaining network
if self.bidirectional:
h0 = torch.zeros(self.num_layers*2, x.size(0), self.rnn_size).double().to(DEVICE)
c0 = torch.zeros(self.num_layers*2, x.size(0), self.rnn_size).double().to(DEVICE)
else:
h0 = torch.zeros(self.num_layers, x.size(0), self.rnn_size).double().to(DEVICE)
c0 = torch.zeros(self.num_layers, x.size(0), self.rnn_size).double().to(DEVICE)
# Forward propagate LSTM
lstm_out, _ = self.lstm(x, (h0, c0))
# Forward propagate Linear
last_outputs = self.linear(self.last_timestep(lstm_out, lengths, self.bidirectional))
return last_outputs.view(-1)
def last_timestep(self, outputs, lengths, bidirectional=False):
"""
Returns the last output of the LSTM taking into account the zero padding
"""
if bidirectional:
forward, backward = self.split_directions(outputs)
last_forward = self.last_by_index(forward, lengths)
last_backward = backward[:, 0, :]
# Concatenate and return - maybe add more functionalities like average
return torch.cat((last_forward, last_backward), dim=-1)
else:
return self.last_by_index(outputs, lengths)
@staticmethod
def split_directions(outputs):
direction_size = int(outputs.size(-1) / 2)
forward = outputs[:, :, :direction_size]
backward = outputs[:, :, direction_size:]
return forward, backward
@staticmethod
def last_by_index(outputs, lengths):
# Obtain the model's device ID
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Index of the last output for each sequence.
idx = (lengths - 1).view(-1, 1).expand(outputs.size(0),
outputs.size(2)).unsqueeze(1).to(DEVICE)
return outputs.gather(1, idx).squeeze()
# Define train function for regression.
def train_dataset_regression(_epoch, dataloader, model, loss_function, optimizer):
# IMPORTANT: switch to train mode
    # Enable regularization layers, such as Dropout
model.train()
running_loss = 0.0
    # Obtain the model's device ID
device = next(model.parameters()).device
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 1 - zero the gradients
# Remember that PyTorch accumulates gradients.
# We need to clear them out before each batch!
optimizer.zero_grad()
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss = loss_function(y_preds, labels.double())
# Step 4 - backward pass: compute gradient wrt model parameters
loss.backward()
# Step 5 - update weights
optimizer.step()
# Accumulate loss in a variable.
running_loss += loss.data.item()
return running_loss / index
# Define evaluation function for regression.
def eval_dataset_regression(dataloader, model, loss_function):
# IMPORTANT: switch to eval mode
# Disable regularization layers, such as Dropout
model.eval()
running_loss = 0.0
y_pred = [] # the predicted labels
y = [] # the gold labels
# Obtain the model's device ID
device = next(model.parameters()).device
# IMPORTANT: in evaluation mode, we don't want to keep the gradients
# so we do everything under torch.no_grad()
with torch.no_grad():
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Step 1 - move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths) # EX9
# Step 3 - compute loss: L = loss_function(y, y')
# We compute the loss only for inspection (compare train/test loss)
# because we do not actually backpropagate in test time
loss = loss_function(y_preds, labels.double())
# Step 5 - collect the predictions, gold labels and batch loss
y_pred.append(y_preds.cpu().numpy())
y.append(labels.cpu().numpy())
# Accumulate loss in a variable
running_loss += loss.data.item()
return running_loss / index, (y, y_pred)
```
In order to train the 2nd dataset in the models that we have defined, we should change the loss function for regression. We will use mean squared error.
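Mean squared error over a batch is simply the average squared residual between gold labels and predictions, which is what `nn.MSELoss` computes. In plain Python:

```
def mse(y_true, y_pred):
    """Mean squared error: average of (y - y_hat)^2 over the batch."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([0.5, 0.8], [0.4, 1.0]))  # ((0.1)^2 + (0.2)^2) / 2, i.e. about 0.025
```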
- Training for valence in LSTM.
```
RNN_SIZE = 32
EPOCHS = 20
model = BasicLSTM(num_mel, RNN_SIZE, 1, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, beat_train_loader_valence, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, _ = eval_dataset_regression(beat_train_loader_valence, model, loss_function)
val_loss, _ = eval_dataset_regression(beat_val_loader_valence, model, loss_function)
if epoch%(5) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print()
torch.save(model, './multitask_lstm_valence')
model = torch.load('./multitask_lstm_valence')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(beat_val_loader_valence, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
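The `spearmanr` metric reported above is Pearson correlation computed on ranks; for tie-free data it reduces to 1 - 6*sum(d^2)/(n*(n^2-1)), where d is the rank difference per pair. A minimal version of that formula (no tie handling, unlike `scipy.stats.spearmanr`):

```
def spearman(x, y):
    """Spearman rho for tie-free sequences: 1 - 6*sum(d^2)/(n*(n^2 - 1))."""
    n = len(x)
    rank = lambda v: {val: r for r, val in enumerate(sorted(v))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman([1.0, 2.0, 3.0], [10.0, 30.0, 20.0]))  # 0.5
```

Because it only compares rankings, Spearman rewards a regressor that orders the songs correctly even if its absolute predictions are biased.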
- Training for energy in LSTM.
```
RNN_SIZE = 32
EPOCHS = 20
model = BasicLSTM(num_mel, RNN_SIZE, 1, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, beat_train_loader_energy, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, _ = eval_dataset_regression(beat_train_loader_energy, model, loss_function)
val_loss, _ = eval_dataset_regression(beat_val_loader_energy, model, loss_function)
if epoch%(5) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print()
torch.save(model, './multitask_lstm_energy')
model = torch.load('./multitask_lstm_energy')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(beat_val_loader_energy, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
- Training for danceability in LSTM.
```
RNN_SIZE = 32
EPOCHS = 20
model = BasicLSTM(num_mel, RNN_SIZE, 1, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, beat_train_loader_danceability, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, _ = eval_dataset_regression(beat_train_loader_danceability, model, loss_function)
val_loss, _ = eval_dataset_regression(beat_val_loader_danceability, model, loss_function)
if epoch%(5) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print()
torch.save(model, './multitask_lstm_danceability')
model = torch.load('./multitask_lstm_danceability')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(beat_val_loader_danceability, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
- Define the CNN of Step 7 and train the mel multitask dataset.
```
# Define the CNN architecture
class ConvNetMultitask(nn.Module):
def __init__(self):
super(ConvNetMultitask, self).__init__()
self.conv1 = nn.Conv2d(1, 3, 5)
self.conv1_bn = nn.BatchNorm2d(3)
self.pool1 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(3, 6, 5)
self.conv2_bn = nn.BatchNorm2d(6)
self.pool2 = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(6, 8, 3)
self.conv3_bn = nn.BatchNorm2d(8)
self.pool3 = nn.MaxPool2d(2, 2)
self.conv4 = nn.Conv2d(8, 12, 3)
self.conv4_bn = nn.BatchNorm2d(12)
self.pool4 = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(4680, 128)
self.fc2 = nn.Linear(128, 1)
def forward(self, x, lengths):
x = x.view(x.size(0), 1, x.size(1), x.size(2))
x = self.pool1(F.relu( self.conv1_bn(self.conv1(x))))
x = self.pool2(F.relu( self.conv2_bn(self.conv2(x))))
x = self.pool3(F.relu( self.conv3_bn(self.conv3(x))))
x = self.pool4(F.relu( self.conv4_bn(self.conv4(x))))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
```
- Training for valence in CNN.
```
EPOCHS = 20
model = ConvNetMultitask()
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_valence, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_valence, model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_valence, model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(model, './multitask_cnn_valence')
model = torch.load('./multitask_cnn_valence')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_valence, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
- Training for energy in CNN.
```
EPOCHS = 20
model = ConvNetMultitask()
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_energy, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_energy, model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_energy, model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(model, './multitask_cnn_energy')
model = torch.load('./multitask_cnn_energy')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_energy, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
- Training for danceability in CNN.
```
EPOCHS = 20
model = ConvNetMultitask()
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_danceability, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_danceability, model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_danceability, model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(model, './multitask_cnn_danceability')
model = torch.load('./multitask_cnn_danceability')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_danceability, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
## Step 9a: Transfer Learning
When we have little available data, we can increase the performance of our deep neural networks using transfer learning from another model, trained on a bigger dataset.
- We choose the CNN architecture of step 7. The idea of transfer learning arose when researchers realized that the first few layers of a CNN learn low-level features such as edges and corners, so there is no point in learning the same things again when training on similar data. In contrast, we don't have a clear-cut intuition about what is learned at the different layers of an LSTM or GRU, since they are sequence models, which makes transferring them much more complex.
- We load the model that was already trained on `fma_genre_spectrograms`.
```
transfer_model = torch.load('./mel_cnn')
print(transfer_model)
# We freeze the parameters
for param in transfer_model.parameters():
param.requires_grad = False
# We change only the last layer to fit in our new regression problem.
transfer_model.fc2 = nn.Linear(128, 1)
print(transfer_model)
```
Now we will train only the last layer of the frozen model on the regression problem.
```
EPOCHS = 20
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
transfer_model.double()
transfer_model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(transfer_model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_valence, transfer_model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_valence, transfer_model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_valence, transfer_model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(transfer_model, './multitask_transfer_valence')
transfer_model = torch.load('./multitask_transfer_valence')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_valence, transfer_model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
transfer_model = torch.load('./mel_cnn')
# We freeze the parameters
for param in transfer_model.parameters():
param.requires_grad = False
# We change only the last layer to fit in our new regression problem.
transfer_model.fc2 = nn.Linear(128, 1)
EPOCHS = 20
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
transfer_model.double()
transfer_model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(transfer_model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_energy, transfer_model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_energy, transfer_model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_energy, transfer_model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(transfer_model, './multitask_transfer_energy')
transfer_model = torch.load('./multitask_transfer_energy')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_energy, transfer_model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
transfer_model = torch.load('./mel_cnn')
# We freeze the parameters
for param in transfer_model.parameters():
param.requires_grad = False
# We change only the last layer to fit in our new regression problem.
transfer_model.fc2 = nn.Linear(128, 1)
EPOCHS = 20
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
transfer_model.double()
transfer_model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(transfer_model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression(epoch, train_loader_danceability, transfer_model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression(train_loader_danceability, transfer_model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression(val_loader_danceability, transfer_model, loss_function)
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train: %f" %spearmanr(y_train_true, y_train_pred)[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation: %f" %spearmanr(y_val_true, y_val_pred)[0])
print()
torch.save(transfer_model, './multitask_transfer_danceability')
transfer_model = torch.load('./multitask_transfer_danceability')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression(val_loader_danceability, transfer_model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print("Spearman: %f" %spearmanr(y_test_true, y_test_pred)[0])
print()
```
## Step 9b: Multitask Learning
In step 8, we trained a separate model for each emotion. Another way of training more efficient models when we have many labels is to use multitask learning.
Load the dataset that contains all three labels:
```
specs_multi_all = MultitaskDataset(
'../input/patreco3-multitask-affective-music/data/multitask_dataset/',
max_length=1293,
label_type=-1,
read_spec_fn=read_mel_spectrogram)
train_loader_all, val_loader_all = torch_train_val_split(specs_multi_all, 32, 32, val_size=.33)
print("Shape of an example: ")
print(specs_multi_all[0][0].shape)
print("Shape of label: ")
print(specs_multi_all[0][1].shape)
# Define training function for regression in multitask learning.
def train_dataset_regression_multi(_epoch, dataloader, model, loss_function, optimizer):
# IMPORTANT: switch to train mode
# Enable regularization layers, such as Dropout
model.train()
running_loss = 0.0
# Obtain the model's device ID
device = next(model.parameters()).device
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 1 - zero the gradients
# Remember that PyTorch accumulates gradients.
# We need to clear them out before each batch!
optimizer.zero_grad()
# Step 2 - forward pass: y' = model(x)
y_preds_val, y_preds_energy, y_preds_dance = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss_1 = loss_function(y_preds_val, labels[:, 0].double())
loss_2 = loss_function(y_preds_energy, labels[:, 1].double())
loss_3 = loss_function(y_preds_dance, labels[:, 2].double())
loss = loss_1 + loss_2 + loss_3
# Step 4 - backward pass: compute gradient wrt model parameters
loss.backward()
# Step 5 - update weights
optimizer.step()
# Accumulate loss in a variable.
running_loss += loss.data.item()
return running_loss / index
# Define evaluation function for regression multitask learning.
def eval_dataset_regression_multi(dataloader, model, loss_function):
# IMPORTANT: switch to eval mode
# Disable regularization layers, such as Dropout
model.eval()
running_loss = 0.0
y_pred = []
y = []
# Obtain the model's device ID
device = next(model.parameters()).device
# IMPORTANT: in evaluation mode, we don't want to keep the gradients
# so we do everything under torch.no_grad()
with torch.no_grad():
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Step 1 - move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 2 - forward pass: y' = model(x)
y_preds_val, y_preds_energy, y_preds_dance = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss_1 = loss_function(y_preds_val, labels[:, 0].double())
loss_2 = loss_function(y_preds_energy, labels[:, 1].double())
loss_3 = loss_function(y_preds_dance, labels[:, 2].double())
loss = loss_1 + loss_2 + loss_3
# Step 5 - collect the predictions, gold labels and batch loss
y_pred.append(np.hstack((y_preds_val.cpu().numpy(), y_preds_energy.cpu().numpy(), y_preds_dance.cpu().numpy())))
y.append(labels.cpu().numpy())
# Accumulate loss in a variable
running_loss += loss.data.item()
return running_loss / index, (y, y_pred)
class ConvNetMultitaskLearning(nn.Module):
def __init__(self):
super(ConvNetMultitaskLearning, self).__init__()
self.conv1 = nn.Conv2d(1, 3, 3)
self.conv1_bn = nn.BatchNorm2d(3)
self.pool1 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(3, 6, 3)
self.conv2_bn = nn.BatchNorm2d(6)
self.pool2 = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(6, 8, 3)
self.conv3_bn = nn.BatchNorm2d(8)
self.pool3 = nn.MaxPool2d(2, 2)
self.conv4 = nn.Conv2d(8, 12, 3)
self.conv4_bn = nn.BatchNorm2d(12)
self.pool4 = nn.MaxPool2d(2, 2)
self.conv5 = nn.Conv2d(12, 16, 3)
self.conv5_bn = nn.BatchNorm2d(16)
self.pool5 = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(1216, 32)
self.fc_val = nn.Linear(32, 1)
self.fc_energy = nn.Linear(32, 1)
self.fc_dance = nn.Linear(32, 1)
def forward(self, x, lengths):
x = x.view(x.size(0), 1, x.size(1), x.size(2))
x = self.pool1(F.relu( self.conv1_bn(self.conv1(x))))
x = self.pool2(F.relu( self.conv2_bn(self.conv2(x))))
x = self.pool3(F.relu( self.conv3_bn(self.conv3(x))))
x = self.pool4(F.relu( self.conv4_bn(self.conv4(x))))
x = self.pool5(F.relu( self.conv5_bn(self.conv5(x))))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
energy = self.fc_energy(x)
val = self.fc_val(x)
dance = self.fc_dance(x)
return val, energy, dance
EPOCHS = 50
model = ConvNetMultitaskLearning()
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.MSELoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
n_epochs_stop = 10
min_val_loss = 1000
epochs_no_improve = 0
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset_regression_multi(epoch, train_loader_all, model, loss_function, optimizer)
#train_dataset_regression_multi(epoch, val_loader_all, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train, y_train_pred) = eval_dataset_regression_multi(train_loader_all, model, loss_function)
val_loss, (y_val, y_val_pred) = eval_dataset_regression_multi(val_loader_all, model, loss_function)
if val_loss < min_val_loss:
# Save the model
torch.save(model, './multitask_learning')
epochs_no_improve = 0
min_val_loss = val_loss
else:
epochs_no_improve += 1
if epochs_no_improve == n_epochs_stop:
print('Early stopping!')
break
if epoch%(1) == 0:
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
y_train_true = np.concatenate( y_train, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
print("Spearman in train - valence: %f" %spearmanr(y_train_true[:,0], y_train_pred[:,0])[0])
print("Spearman in train - energy: %f" %spearmanr(y_train_true[:,1], y_train_pred[:,1])[0])
print("Spearman in train - dance: %f" %spearmanr(y_train_true[:,2], y_train_pred[:,2])[0])
y_val_true = np.concatenate( y_val, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Spearman in validation - valence: %f" %spearmanr(y_val_true[:,0], y_val_pred[:,0])[0])
print("Spearman in validation - energy: %f" %spearmanr(y_val_true[:,1], y_val_pred[:,1])[0])
print("Spearman in validation - dance: %f" %spearmanr(y_val_true[:,2], y_val_pred[:,2])[0])
print()
model = torch.load('./multitask_learning')
test_loss, (y_test_gold, y_test_pred) = eval_dataset_regression_multi(val_loader_all, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(y_test_true.shape)
print("Spearman: %f" %spearmanr(y_test_true[:,0], y_test_pred[:,0])[0])
print("Spearman: %f" %spearmanr(y_test_true[:,1], y_test_pred[:,1])[0])
print("Spearman: %f" %spearmanr(y_test_true[:,2], y_test_pred[:,2])[0])
print()
```
- For Kaggle submissions
```
# Define a Pytorch dataset for the test set
class MultitaskDatasetTest(Dataset):
def __init__(self, path, max_length=-1, read_spec_fn=read_fused_spectrogram, label_type='energy'):
p = os.path.join(path, 'test')
self.label_type = label_type
self.feats = []
self.files = []
for f in os.listdir(p):
self.feats.append(read_spec_fn(os.path.join(p, f)))
self.files.append(f.split('.')[0])
self.feat_dim = self.feats[0].shape[1]
self.lengths = [len(i) for i in self.feats]
self.max_length = max(self.lengths) if max_length <= 0 else max_length
self.zero_pad_and_stack = PaddingTransform(self.max_length)
def __getitem__(self, item):
# Return a tuple in the form (padded_feats, length, filename)
l = min(self.lengths[item], self.max_length)
return self.zero_pad_and_stack(self.feats[item]), l, self.files[item]
def __len__(self):
return len(self.feats)
test_specs_multi_all = MultitaskDatasetTest(
'../input/patreco3-multitask-affective-music/data/multitask_dataset/',
max_length=1293,
read_spec_fn=read_mel_spectrogram)
test_loader_all = DataLoader(test_specs_multi_all, batch_size=32)
```
- Evaluate on the test set
```
model = torch.load('./multitask_learning')
y_pred = [] # the predicted labels
names = []
# Obtain the model's device ID
device = next(model.parameters()).device
with torch.no_grad():
for index, batch in enumerate(test_loader_all, 1):
# Get the inputs (batch)
inputs, lengths, files = batch
# Step 1 - move the batch tensors to the right device
inputs = inputs.to(device)
# Step 2 - forward pass: y' = model(x)
y_preds_val, y_preds_energy, y_preds_dance = model(inputs, lengths)
# Step 3 - collect the predictions from each head
# (no loss here: the test set has no gold labels)
y_pred.append(np.hstack((y_preds_val.cpu().numpy(), y_preds_energy.cpu().numpy(), y_preds_dance.cpu().numpy())))
names.append(files)
y_test_pred = np.concatenate( y_pred, axis=0 )
y_test_names = np.concatenate( names, axis=0 )
```
- Save the results in the kaggle format.
```
with open('./solution.txt', 'w') as f:
f.write('Id,valence,energy,danceability\n')
for i, name in enumerate(y_test_names):
f.write(name + ',' + str(y_test_pred[i,0]) + "," + str(y_test_pred[i,1]) + ',' + str(y_test_pred[i,2]) + '\n')
```
# Continuous training with TFX and Cloud AI Platform
## Learning Objectives
1. Use the TFX CLI to build a TFX pipeline.
2. Deploy a TFX pipeline on the managed AI Platform service.
3. Create and monitor TFX pipeline runs using the TFX CLI and KFP UI.
In this lab, you use the [TFX CLI](https://www.tensorflow.org/tfx/guide/cli) utility to build and deploy a TFX pipeline that uses [**Kubeflow pipelines**](https://www.tensorflow.org/tfx/guide/kubeflow) for orchestration, **AI Platform** for model training, and a managed [AI Platform Pipeline instance (Kubeflow Pipelines)](https://www.tensorflow.org/tfx/guide/kubeflow) that runs on a Kubernetes cluster for compute. You will then create and monitor pipeline runs using the TFX CLI as well as the KFP UI.
### Setup
```
import yaml
# Set `PATH` to include the directory containing TFX CLI and skaffold.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
```
**Note**: this lab was built and tested with the following package versions:
`TFX version: 0.21.4`
`KFP version: 0.5.1`
(Optional) If running the above command results in different package versions or you receive an import error, upgrade to the correct versions by running the cell below:
```
%pip install --upgrade --user tfx==0.21.4
%pip install --upgrade --user kfp==0.5.1
```
Note: you may need to restart the kernel to pick up the correct package versions.
## Understanding the pipeline design
The pipeline source code can be found in the `pipeline` folder.
```
%cd pipeline
!ls -la
```
The `config.py` module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
The default values can be overwritten at compile time by providing the updated values in a set of environment variables.
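For illustration, a `config.py`-style module typically reads each setting from the environment with a hard-coded fallback, so that values exported at compile time win. This is a hypothetical sketch, not the lab's actual `config.py`; the fallback values beyond `PIPELINE_NAME` are illustrative:

```python
import os

# Hypothetical sketch of how a config.py module can expose overridable defaults.
# Values set in the environment at compile time override the fallbacks.
PIPELINE_NAME = os.getenv('PIPELINE_NAME', 'tfx_covertype_continuous_training')
GCP_REGION = os.getenv('GCP_REGION', 'us-central1')
USE_KFP_SA = os.getenv('USE_KFP_SA', 'False') == 'True'
```

Setting `%env USE_KFP_SA=True` before compilation would then flip the boolean without touching the source.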
The `pipeline.py` module contains the TFX DSL defining the workflow implemented by the pipeline.
The `preprocessing.py` module implements the data preprocessing logic for the `Transform` component.
The `model.py` module implements the training logic for the `Train` component.
The `runner.py` module configures and executes `KubeflowDagRunner`. At compile time, the `KubeflowDagRunner.run()` method converts the TFX DSL into the pipeline package in the [argo](https://argoproj.github.io/projects/argo) format.
The `features.py` module contains feature definitions common across `preprocessing.py` and `model.py`.
## Building and deploying the pipeline
You will use TFX CLI to compile and deploy the pipeline. As explained in the previous section, the environment specific settings can be provided through a set of environment variables and embedded into the pipeline package at compile time.
### Exercise: Create AI Platform Pipelines cluster
Navigate to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.
**1. Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform**. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "tfx" or "mlops". Note you may have already deployed an AI Pipelines instance during the Setup for the lab series. If so, you can proceed using that instance below in the next step.
Validate the deployment of your AI Platform Pipelines instance in the console before proceeding.
**2. Configure your environment settings**.
Update the below constants with the settings reflecting your lab environment.
- `GCP_REGION` - the compute region for AI Platform Training and Prediction
- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will contain the `kubeflowpipelines-` prefix.
```
# Use the following command to identify the GCS bucket for metadata and pipeline storage.
!gsutil ls
```
- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console. Open the *SETTINGS* for your instance and use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window. The format is `'....[region].pipelines.googleusercontent.com'`.
```
#TODO: Set your environment settings here for GCP_REGION, ENDPOINT, and ARTIFACT_STORE_URI.
GCP_REGION = ''
ENDPOINT = ''
ARTIFACT_STORE_URI = ''
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
```
### Compile the pipeline
You can build and upload the pipeline to the AI Platform Pipelines instance in one step, using the `tfx pipeline create` command. The `tfx pipeline create` goes through the following steps:
- (Optional) Builds the custom image that provides a runtime environment for TFX components.
- Compiles the pipeline DSL into a pipeline package.
- Uploads the pipeline package to the instance.
As you debug the pipeline DSL, you may prefer to first use the `tfx pipeline compile` command, which only executes the compilation step. After the DSL compiles successfully you can use `tfx pipeline create` to go through all steps.
#### Set the pipeline's compile time settings
The pipeline can run using the security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`.
Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
```
PIPELINE_NAME = 'tfx_covertype_continuous_training'
MODEL_NAME = 'tfx_covertype_classifier'
USE_KFP_SA=False
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.1'
PYTHON_VERSION = '3.7'
%env PROJECT_ID={PROJECT_ID}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env GCP_REGION={GCP_REGION}
%env MODEL_NAME={MODEL_NAME}
%env PIPELINE_NAME={PIPELINE_NAME}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
```
Note: you should see a `tfx_covertype_continuous_training.tar.gz` file appear in your current directory.
### Exercise: Deploy the pipeline package to AI Platform Pipelines
In this exercise, you will deploy your compiled pipeline code e.g. `gcr.io/[PROJECT_ID]/tfx_covertype_continuous_training` to run on AI Platform Pipelines with the TFX CLI.
*Hint: review the [TFX CLI documentation](https://www.tensorflow.org/tfx/guide/cli#create) on the "pipeline group" to create your pipeline. You will need to specify the `--pipeline_path` to point at the pipeline DSL defined locally in `runner.py`, `--endpoint`, and `--build_target_image` arguments using the environment variables specified above*.
```
# TODO: Your code here to use the TFX CLI to deploy your pipeline image to AI Platform Pipelines.
```
If you need to redeploy the pipeline you can first delete the previous version using `tfx pipeline delete` or you can update the pipeline in-place using `tfx pipeline update`.
To delete the pipeline:
`tfx pipeline delete --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}`
To update the pipeline:
`tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}`
### Exercise: Create and monitor pipeline runs
In this exercise, you will trigger pipeline runs using the TFX CLI from this notebook and also from the KFP UI.
**1. Trigger a pipeline run using the TFX CLI**.
*Hint: review the [TFX CLI documentation](https://www.tensorflow.org/tfx/guide/cli#run_group) on the "run group".*
```
# TODO: Your code here to trigger a pipeline run with the TFX CLI
```
**2. Trigger a pipeline run from the KFP UI**.
On the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page, click `OPEN PIPELINES DASHBOARD`. A new tab will open. On the left, select the `Pipelines` tab; you will see the `tfx_covertype_continuous_training` pipeline you deployed previously. Click on the pipeline name, which will open a window with a graphical display of your TFX pipeline. Next, click the `Create a run` button. Verify that `Pipeline name` and `Pipeline version` are pre-populated, and optionally provide a `Run name` and `Experiment` to logically group the run metadata under before hitting `Start`.
*Note: each full pipeline run takes about 45 minutes to 1 hour.* Take the time to review the pipeline metadata artifacts created in the GCS storage bucket for each component including data splits, your Tensorflow SavedModel, model evaluation results, etc. as the pipeline executes. Also, when you trigger a pipeline run using the KFP UI, make sure to give your run a unique run name and even set a different Experiment so the job doesn't fail due to naming conflicts.
Additionally, to list all active runs of the pipeline, you can run:
```
!tfx run list --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
```
To retrieve the status of a given run:
```
RUN_ID='[YOUR RUN ID]'
!tfx run status --pipeline_name {PIPELINE_NAME} --run_id {RUN_ID} --endpoint {ENDPOINT}
```
## Next Steps
In this lab, you learned how to manually build and deploy a TFX pipeline to AI Platform Pipelines and trigger pipeline runs from a notebook. In the next lab, you will construct a Cloud Build CI/CD workflow that automatically builds and deploys the same TFX covertype pipeline.
## License
<font size=-1>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Measures of Dispersion
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Mackenzie.
Dispersion measures how spread out a set of data is. This is especially important in finance because one of the main ways risk is measured is in how spread out returns have been historically. If returns have been very tight around a central value, then we have less reason to worry. If returns have been all over the place, that is risky.
Data with low dispersion is heavily clustered around the mean, while data with high dispersion indicates many very large and very small values.
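As a quick numeric preview (using NumPy's standard deviation, introduced formally below, as the dispersion measure), two arrays can share a mean yet differ wildly in spread:

```python
import numpy as np

tight = np.array([49, 50, 50, 51])    # clustered around 50 -> low dispersion
spread = np.array([0, 30, 70, 100])   # same mean of 50 -> high dispersion

print(np.mean(tight), np.mean(spread))  # both means are 50.0
print(np.std(tight), np.std(spread))    # the spread-out data has a far larger std
```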
Let's generate an array of random integers to work with.
```
# Import libraries
import numpy as np
np.random.seed(121)
# Generate 20 random integers < 100
X = np.random.randint(100, size=20)
# Sort them
X = np.sort(X)
print('X: %s' %(X))
mu = np.mean(X)
print('Mean of X:', mu)
```
# Range
Range is simply the difference between the maximum and minimum values in a dataset. Not surprisingly, it is very sensitive to outliers. We'll use `numpy`'s peak-to-peak (`ptp`) function for this.
```
print('Range of X: %s' %(np.ptp(X)))
```
# Mean Absolute Deviation (MAD)
The mean absolute deviation is the average of the distances of observations from the arithmetic mean. We use the absolute value of the deviation, so that 5 above the mean and 5 below the mean both contribute 5, because otherwise the deviations always sum to 0.
$$ MAD = \frac{\sum_{i=1}^n |X_i - \mu|}{n} $$
where $n$ is the number of observations and $\mu$ is their mean.
```
abs_dispersion = [np.abs(mu - x) for x in X]
MAD = np.sum(abs_dispersion)/len(abs_dispersion)
print('Mean absolute deviation of X:', MAD)
```
# Variance and standard deviation
The variance $\sigma^2$ is defined as the average of the squared deviations around the mean:
$$ \sigma^2 = \frac{\sum_{i=1}^n (X_i - \mu)^2}{n} $$
This is sometimes more convenient than the mean absolute deviation because absolute value is not differentiable, while squaring is smooth, and some optimization algorithms rely on differentiability.
Standard deviation is defined as the square root of the variance, $\sigma$, and it is the easier of the two to interpret because it is in the same units as the observations.
```
print('Variance of X:', np.var(X))
print('Standard deviation of X:', np.std(X))
```
One way to interpret standard deviation is by referring to Chebyshev's inequality. This tells us that the proportion of samples within $k$ standard deviations (that is, within a distance of $k \cdot$ standard deviation) of the mean is at least $1 - 1/k^2$ for all $k>1$.
Let's check that this is true for our data set.
```
k = 1.25
dist = k*np.std(X)
l = [x for x in X if abs(x - mu) <= dist]
print('Observations within', k, 'stds of mean:', l)
print('Confirming that', float(len(l))/len(X), '>', 1 - 1/k**2)
```
The bound given by Chebyshev's inequality seems fairly loose in this case. This bound is rarely strict, but it is useful because it holds for all data sets and distributions.
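Since the bound holds for every $k > 1$ regardless of the distribution, we can sketch a quick check over several values of $k$ (re-generating the same sample as above so the cell is self-contained):

```python
import numpy as np

# Recreate the sample from above and check Chebyshev's bound for several k
np.random.seed(121)
X = np.sort(np.random.randint(100, size=20))
mu, sigma = np.mean(X), np.std(X)

for k in [1.25, 1.5, 2, 3]:
    within = np.mean(np.abs(X - mu) <= k * sigma)  # empirical proportion
    bound = 1 - 1 / k**2                           # Chebyshev's lower bound
    print('k=%.2f  proportion=%.2f  >=  bound=%.2f' % (k, within, bound))
```

Notice how the bound tightens as $k$ grows, while the empirical proportion typically stays well above it.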
# Semivariance and semideviation
Although variance and standard deviation tell us how volatile a quantity is, they do not differentiate between deviations upward and deviations downward. Often, such as in the case of returns on an asset, we are more worried about deviations downward. This is addressed by semivariance and semideviation, which only count the observations that fall below the mean. Semivariance is defined as
$$ \frac{\sum_{X_i < \mu} (X_i - \mu)^2}{n_<} $$
where $n_<$ is the number of observations which are smaller than the mean. Semideviation is the square root of the semivariance.
```
# Because there is no built-in semideviation, we'll compute it ourselves
lows = [e for e in X if e < mu]
semivar = np.sum((np.array(lows) - mu) ** 2) / len(lows)
print('Semivariance of X:', semivar)
print('Semideviation of X:', np.sqrt(semivar))
```
A related notion is target semivariance (and target semideviation), where we average the distance from a target of values which fall below that target:
$$ \frac{\sum_{X_i < B} (X_i - B)^2}{n_{<B}} $$
```
B = 19
lows_B = [e for e in X if e < B]
semivar_B = np.sum((np.array(lows_B) - B) ** 2) / len(lows_B)
print('Target semivariance of X:', semivar_B)
print('Target semideviation of X:', np.sqrt(semivar_B))
```
# These are Only Estimates
All of these computations will give you sample statistics, that is, the standard deviation of a sample of data. Whether or not this reflects the current true population standard deviation is not always obvious, and more effort has to be put into determining that. This is especially problematic in finance because all data are time series and the mean and variance may change over time. There are many different techniques and subtleties here, some of which are addressed in other lectures in this series.
In general do not assume that because something is true of your sample, it will remain true going forward.
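One concrete subtlety: NumPy's `np.std` divides by $n$ by default (`ddof=0`), which matches the population formulas used above; when estimating a population's standard deviation from a sample, the usual unbiased-variance convention divides by $n - 1$ (`ddof=1`). A minimal sketch of the difference:

```python
import numpy as np

np.random.seed(121)
sample = np.random.normal(0, 10.0, size=20)  # a small sample from a population

# ddof=0 divides by n (population convention); ddof=1 divides by n - 1
# (the usual unbiased-variance convention when estimating from a sample)
print('ddof=0:', np.std(sample))
print('ddof=1:', np.std(sample, ddof=1))
```

For small samples the two can differ noticeably; for large $n$ the difference becomes negligible.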
## References
* "Quantitative Investment Analysis", by DeFusco, McLeavey, Pinto, and Runkle
---
**Next Lecture:** [Statistical Moments](Lecture08-Statistical-Moments.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
| github_jupyter |
IBAN strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "NO9386011117947"
* `standard`: IBAN strings with proper whitespace in the proper places, like "NO93 8601 1117 947"
* `kontonr`: return the Norwegian bank account part of the number, like "86011117947".
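The relationship between the three formats can be sketched with plain string operations (illustration only — unlike `clean_no_iban()`, this performs no validation):

```python
# A compact Norwegian IBAN: country code "NO", two check digits, then the
# 11-digit bank account number (kontonr)
compact = "NO9386011117947"

standard = " ".join(compact[i:i + 4] for i in range(0, len(compact), 4))
kontonr = compact[4:]  # drop the country code and the two check digits

print(standard)  # NO93 8601 1117 947
print(kontonr)   # 86011117947
```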
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_no_iban()` and `validate_no_iban()`.
### An example dataset containing IBAN strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"iban": [
'NO9386011117947',
'NO92 8601 1117 947',
"999 999 999",
"004085616",
"002 724 334",
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_no_iban`
By default, `clean_no_iban` will clean IBAN strings and output them in the standard format with proper separators.
```
from dataprep.clean import clean_no_iban
clean_no_iban(df, column = "iban")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_no_iban(df, column = "iban", output_format="standard")
```
### `compact`
```
clean_no_iban(df, column = "iban", output_format="compact")
```
### `kontonr`
```
clean_no_iban(df, column = "iban", output_format="kontonr")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned IBAN strings is added with a title in the format `"{original title}_clean"`.
```
clean_no_iban(df, column="iban", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_no_iban(df, "iban", errors="coerce")
```
### `ignore`
```
clean_no_iban(df, "iban", errors="ignore")
```
## 5. `validate_no_iban()`
`validate_no_iban()` returns `True` when the input is a valid IBAN. Otherwise it returns `False`.
The input of `validate_no_iban()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame.
When the input is a string or a Series, no column name needs to be specified.
When the input is a DataFrame, the column name is optional: if a column is specified, `validate_no_iban()` returns the validation result for that column only; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_no_iban
print(validate_no_iban("NO9386011117947"))
print(validate_no_iban("NO92 8601 1117 947"))
print(validate_no_iban("999 999 999"))
print(validate_no_iban("51824753556"))
print(validate_no_iban("004085616"))
print(validate_no_iban("hello"))
print(validate_no_iban(np.nan))
print(validate_no_iban("NULL"))
```
### Series
```
validate_no_iban(df["iban"])
```
### DataFrame + Specify Column
```
validate_no_iban(df, column="iban")
```
### Only DataFrame
```
validate_no_iban(df)
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Filter/filter_neq.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Filter/filter_neq.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Filter/filter_neq.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
# Select all states except California
selected = states.filter(ee.Filter.neq("NAME", 'California'))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
<div style = "font-family:Georgia;
font-size:2.5vw;
color:lightblue;
font-weight:normal;
text-align:center;
background:url('./text_images/Title Background.gif') no-repeat center; background-size:cover">
<br>
<br>
Principal Component Analysis (PCA)
<br>
<br>
<br>
</div>
# Introduction
In the previous lessons you learned the core idea behind **Principal Component Analysis (PCA)** and learned about eigenvectors and eigenvalues. Before we apply PCA to Risk Factor Models, in this notebook we will see how we can use PCA for **Dimensionality Reduction**. In short, dimensionality reduction is the process of reducing the number of variables used to explain your data.
We will start by giving a brief overview of dimensionality reduction, we will then use Scikit-Learn's implementation of PCA to reduce the dimension of random correlated data and visualize its principal components.
# Dimensionality Reduction
One of the main applications of Principal Component Analysis is to reduce the dimensionality of highly correlated data. For example, suppose your data looks like this:
<br>
<figure>
<img src = "./text_images/1.png" width = 80% style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align: center; font-style: italic">Fig 1. - Highly Correlated Data.</figcaption>
</figure>
<br>
We can see that this 2-Dimensional data is described by two variables, $X$ and $Y$. However, notice that all the data points lie close to a straight line:
<br>
<figure>
<img src = "./text_images/2.png" width = 80% style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align: center; font-style: italic">Fig 2. - Direction of Biggest Variation.</figcaption>
</figure>
<br>
We can see that most of the variation in the data occurs along this particular purple line. This means, that we could explain most of the variation of the data by only looking at how the data is distributed along this particular line. Therefore, we could reduce the data from 2D to 1D data by projecting the data points onto this straight line:
<br>
<figure>
<img src = "./text_images/3.png" width = 80% style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align: center; font-style: italic">Fig 3. - Projected Points.</figcaption>
</figure>
<br>
This will reduce the number of variables needed to describe the data from 2 to 1 since you only need one number to specify a data point's position on a straight line. Therefore, the 2 variables that describe the 2D plot will be replaced by a new single variable that encodes the 1D linear relation.
<br>
<figure>
<img src = "./text_images/4.png" width = 80% style = "border: thin silver solid; padding: 10px">
<figcaption style = "text-align: center; font-style: italic">Fig 4. - Data Reduced to 1D.</figcaption>
</figure>
<br>
It is important to note that this new variable and dimension don't need to have any particular meaning attached to them. For example, in the original 2D plot, $X$ and $Y$ may represent stock returns; however, when we perform dimensionality reduction, the new variables and dimensions don't need to have any such meaning attached to them. The new variables and dimensions are just abstract tools that allow us to express the data in a more compact form. While in some cases these new variables and dimensions may represent real-world quantities, it is not necessary that they do.
Dimensionality reduction of correlated data works in any number of dimensions, *i.e.* you can use it to reduce $N$-Dimensional data to $k$-Dimensional data, where $k < N$. As mentioned earlier, PCA is one of the main tools used to perform such dimensionality reduction. To see how this is done, we will apply PCA to some random correlated data. In the next section, we create random data with a given amount of correlation.
# Create a Dataset
In this section we will learn how to create random correlated data. In the code below we will use the `utils.create_corr_data()` function from the `utils` module to create our random correlated data. The `utils` module was created specifically for this notebook and contains some useful functions. You are welcome to take a look at it to see how the functions work. In the code below, you can choose the data range and the amount of correlation you want your data to have. These parameters are then passed to the `utils.create_corr_data()` function to create the data. Finally, we use the `utils.plot_data()` function from the `utils` module to plot our data. You can explore how the data changes depending on the amount of correlation. Remember the correlation is a number ranging from -1 to 1. A correlation of `0` indicates that there is no correlation between the data points, while correlations of `1` and `-1` indicate the data points display full positive and negative correlation, respectively. This means that for `corr = +/-1` all the points will lie on a straight line.
```
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [10.0, 6.0]
# Set data range
data_min = 10
data_max = 80
# Set the amount of correlation. The correlation is a number in the closed interval [0,1].
corr = 0.8
# Create correlated data
X = utils.create_corr_data(corr, data_min, data_max)
# Plot the correlated data
utils.plot_data(X, corr)
```
# Mean Normalization
Before we can apply PCA to our data, it is important that we mean normalize the data. Mean normalization will evenly distribute our data in some small interval around zero. Consequently, the average of all the data points will be close to zero. The code below uses the `utils.mean_normalize_data()` function from the `utils` module to mean normalize the data we created above. We will again use the `utils.plot_data()` function from the `utils` module to plot our data. When we plot the mean normalized data, we will see that now the data is evenly distributed around the origin with coordinates `(0,0)`. This is expected because the average of all points should be zero.
```
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [10.0, 6.0]
# Mean normalize X
X_norm = utils.mean_normalize_data(X)
# Plot the mean normalized correlated data
utils.plot_data(X_norm, corr)
```
# PCA In Brief
Let's go back to the example we saw at the beginning. We had some 2D data that lies close to a straight line and we would like to reduce this data from 2D to 1D by projecting the data points onto a straight line. But how do we find the best straight line to project our data onto? In fact, how do we define the best straight line? We define the best line as the line such that the sum of the squares of the distances of the data points to their projected counterparts is minimized. It is important to note, that these projected distances are orthogonal to the straight line, not vertical as in linear regression. Also, we refer to the distances from the data points to their projected counterparts as *projection errors*. Now that we have defined what the best straight line should be, how do we find it? This is where PCA comes in. For this particular example, PCA will find a straight line on which to project the data such that the sum of squares of the projection errors is minimized. So we can use PCA to find the best straight line to project our data onto.
In general, for $N$-Dimensional data, PCA will find the lower dimensional surface on which to project the data so as to minimize the projection error. The lower dimensional surface is going to be determined by a set of vectors $v^{(1)}, v^{(2)}, ...v^{(k)}$, where $k$ is the dimension of the lower dimensional surface, with $k<N$. So for our example above, where we were reducing 2D data to 1D data, so $k=1$, and hence the lower dimensional surface, a straight line in this case, will be determined by only one vector $v^{(1)}$. This makes sense because you only need one vector to describe a straight line. Similarly in the case of reducing 3D data to 2D data, $k=2$, and hence we will have two vectors to determine a plane (a 2D surface) on which to project our data.
Therefore, what the PCA algorithm is really doing is finding the vectors that determine the lower dimensional surface that minimizes the projection error. As you learned previously, these vectors correspond to a subset of the eigenvectors of the data matrix $X$. We call this subset of eigenvectors the *Principal Components* of $X$. We also define the first principal component to be the eigenvector corresponding to the largest eigenvalue of $X$; the second principal component as the eigenvector corresponding to the second largest eigenvalue of $X$, and so on. If $v^{(1)}, v^{(2)}, ...v^{(N)}$ is the set of eigenvectors of $X$ then the principal components of $X$ will be determined by the subset $v^{(1)}, v^{(2)}, ...v^{(k)}$, for some chosen value of $k$, where $k<N$. Remember that $k$ determines the dimension of the lower dimensional surface we're projecting our data onto.
You can program the PCA algorithm by hand, but luckily many packages, such as Scikit-Learn, already contain built-in functions that perform the PCA algorithm for you. In the next section we will take a look at how we can implement the PCA algorithm using Scikit-Learn.
# PCA With Scikit-Learn
We can perform principal component analysis on our data using Scikit-Learn's [`PCA()` class](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html). Scikit-Learn's `PCA()` class uses a technique called **Singular Value Decomposition (SVD)** to compute the eigenvectors and eigenvalues of a given set of data. Given a matrix $X$ of shape $(M, N)$, the SVD algorithm consists of factorizing $X$ into 3 matrices $U, S$, and $V$ such that:
\begin{equation}
X = U S V
\end{equation}
The shape of the $U$ and $V$ matrices depends on the implementation of the SVD algorithm. When using Scikit-Learn's `PCA()` class, the $U$ and $V$ matrices have dimensions $(M, P)$ and $(P,N)$, respectively, where $P = \min(M, N)$. The $V$ matrix contains the eigenvectors of $X$ as rows and the $S$ matrix is a diagonal $(P,P)$ matrix that contains the eigenvalues of $X$ arranged in decreasing order, *i.e.* the largest eigenvalue will be the element $S_{11}$, the second largest eigenvalue will be the element $S_{22}$, and so on. The eigenvectors in $V$ are arranged such that the first row of $V$ holds the eigenvector corresponding to the eigenvalue in $S_{11}$, the second row of $V$ holds the eigenvector corresponding to the eigenvalue in $S_{22}$, and so on.
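The factorization above can be checked directly with NumPy's `svd` (independent of the `PCA()` class); note that `np.linalg.svd` returns the singular values as a 1-D array, so we rebuild the diagonal matrix $S$ before multiplying:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(6, 3)  # M=6 rows, N=3 columns, so P = min(M, N) = 3

# full_matrices=False gives U:(M, P), S:(P,), V:(P, N)
U, S, V = np.linalg.svd(X, full_matrices=False)

print(U.shape, S.shape, V.shape)           # (6, 3) (3,) (3, 3)
print(np.allclose(X, U @ np.diag(S) @ V))  # True: X = U S V
print(S)  # singular values arranged in decreasing order
```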
Once the eigenvectors and eigenvalues have been calculated using SVD, the next step in dimensionality reduction using PCA is to choose the size of the dimension we are going to project our data onto. The size of this dimension is determined by $k$, which tells us the number of principal components we want to use. We can tell the `PCA()` class the number of principal components to return, by setting the parameter `n_components=k`, for some chosen value of $k$, like in the code below:
```
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
print('\nPCA Parameters:', pca, '\n')
```
As we can see, `pca` contains the parameters we are going to use for our PCA algorithm. Since the random correlated data we created above is 2D, $X$ will have a maximum of 2 eigenvectors. We have chosen $k=2$ so that the `PCA()` class will return both principal components (eigenvectors), because we want to visualize both of them in the next section. If we wanted to perform dimensionality reduction directly instead, we would have chosen $k=1$ to reduce our 2D data to 1D, as we showed at the beginning.
After we have set the parameters of our PCA algorithm, we now have to pass the data to the `PCA()` class. This is done via the `.fit()` method as shown in the code below:
```
pca.fit(X_norm);
```
Once the `PCA()` class fits the data through the `.fit()` method it returns an array containing the principal components in the attribute `.components_`, and the corresponding eigenvalues in a 1D array in the attribute `.singular_values_`. Other attributes of the `PCA()` class include `.explained_variance_ratio_` which gives the percentage of variance explained by each of the principal components. In the code below we access the above attributes and display their contents:
```
print('\nArray Containing all Principal Components:\n', pca.components_)
print('\nFirst Principal Component:', pca.components_[0])
print('Second Principal Component:', pca.components_[1])
print('\nEigenvalues:', pca.singular_values_)
print('\nPercentage of Variance Explained by Each Principal Component:', pca.explained_variance_ratio_)
```
We can see that the first principal component has a corresponding eigenvalue of around 42, while the second principal component has an eigenvalue of around 14. We can also see from the `.explained_variance_ratio_` attribute that the first principal component explains approximately 90% of the variance in the data, while the second principal component only explains around 10%. In general, the principal components with the largest eigenvalues explain the majority of the variance. In dimensionality reduction, it is customary to project your data onto the principal components that explain the majority of the variance. In this case, for example, we would like to project our data onto the first principal component, since it explains 90% of the variance in the data. For more information on Scikit-Learn's `PCA()` class please see the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html).
# Visualizing The Principal Components
Now that we have our principal components, let's visualize them. In the code below we use the `utils.plot_data_with_pca_comp()` function from the `utils` module to calculate the principal components of our random correlated data. The function performs all the steps we have seen above and then plots the resulting principal components along with the data. In order to prevent you from scrolling up and down the notebook to create new random correlated data, I have copied the same code as before so you can change the parameters of the random data here as well.
```
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [10.0, 6.0]
# Set data range
data_min = 10
data_max = 80
# Set the amount of correlation
corr = 0.8
# Plot the data and the principal components
utils.plot_data_with_pca_comp(corr, data_min, data_max)
```
# Choosing The Number of Components
As we saw, the dimension of the lower dimensional surface, $k$, is a free parameter of the PCA algorithm. When working with low dimensional data, choosing the value of $k$ can be easy; for example, when working with 2D data, you can choose $k=1$ to reduce your 2D data to 1D. However, when working with high dimensional data, a suitable value for $k$ is not that clear. For example, suppose you had 1,000-dimensional data: what would be the best choice for $k$? Should we choose $k=500$ to reduce our data from 1,000D to 500D, or could we do better and reduce our data to 100D by choosing $k=100$?
Usually, the number of principal components, $k$, is chosen depending on how much of the variance of the original data you want to retain. Typically, you choose $k$ such that anywhere from 80% to 99% of the variance of the original data is retained, although you can choose a lower percentage if that is what your particular application requires. You can check the percentage of the variance of your data that is explained for a given value of $k$ using the `.explained_variance_ratio_` attribute, as we saw before. So, in practice, what you do is have the `PCA()` class return all the eigenvectors and then add up the elements in the array returned by the `.explained_variance_ratio_` attribute until the desired retained variance is reached. For example, if we wanted to retain 98% of the variance in our data, we would choose $k$ such that the following condition is true:
\begin{equation}
\sum_{i=1}^k P_{i} \geq 0.98
\end{equation}
where $P$ is the array returned by the `.explained_variance_ratio_` attribute. So, if you are choosing the value of $k$ manually, you can use the above formula to figure out what percentage of variance is retained for that particular value of $k$. For highly correlated data, it is possible to reduce the dimensions of the data significantly, even while retaining 99% of the variance.
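This rule can be sketched in code. Here we build a hypothetical 10-dimensional dataset driven by 3 latent factors (so most of the variance lives in a few directions — the variable names and factor counts are just illustrative choices), fit `PCA()` with all components, and pick the smallest $k$ that retains 98% of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

np.random.seed(0)
# Hypothetical data: 500 samples in 10 dimensions, driven by 3 latent factors
latent = np.random.randn(500, 3)
X_high = latent @ np.random.randn(3, 10) + 0.05 * np.random.randn(500, 10)

pca = PCA().fit(X_high)  # n_components not set, so all components are kept
cum_var = np.cumsum(pca.explained_variance_ratio_)

# Smallest k whose cumulative explained variance reaches 98%
k = int(np.searchsorted(cum_var, 0.98) + 1)
print('k = %d retains %.1f%% of the variance' % (k, 100 * cum_var[k - 1]))
```

Because the data is highly correlated, the chosen $k$ ends up far below the original 10 dimensions.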
# Projected Data
Now that we have seen what all the principal components look like, we will use the `PCA()` class with `n_components = 1` so that we can perform dimensionality reduction. Once we find the vectors (principal components) that span our lower dimensional space, the next part of the dimensionality reduction algorithm is to find the projected values of our data onto that space. We can use the `.transform()` method from the `PCA()` class to apply dimensionality reduction and project our data points onto the lower dimensional space. In our simple example, $k=1$, so we will only have one principal component to project our data onto.
In the code below, we apply PCA with `n_components = 1` and use the `.transform()` method to project our data points onto a straight line. Finally we plot our projected data.
```
import numpy as np
from sklearn.decomposition import PCA
pca = PCA(n_components = 1)
pca.fit(X_norm);
transformed_data = pca.transform(X_norm)
yvals = np.zeros(len(transformed_data))  # one y=0 per projected point
# Plot the data
plt.scatter(transformed_data, yvals, color = 'white', alpha = 0.5, linewidth = 0)
ax = plt.gca()
ax.set_facecolor('lightslategray')
plt.grid()
plt.show()
```
| github_jupyter |
# Notebook to Check a JupyterHub Software Environment
<img src= "https://github.com/waterhackweek/waterhackweek.github.io/blob/master/assets/images/waterhackweek2020logo-words.JPG?raw=true"
style="float:left;width:175px;padding:10px">
Github Source: [WHW2020_machinelearning tutorial on Github](https://github.com/waterhackweek/whw2020_machine_learning)<br />
Authors: [Andreas Müller](https://github.com/amueller), [Christina Bandaragoda](https://github.com/ChristinaB)<br />
<br />
<br />
## List of open source software requirements with specific versions for WHW2020 Landslide Machine Learning Tutorial
```
requirements = {'numpy': "1.6.1", 'scipy': "1.0", 'matplotlib': "2.0",
'IPython': "3.0", 'sklearn': "0.22.1", 'pandas': "0.18"}
```
## Software Imports and Functions
```
from distutils.version import LooseVersion as Version
import sys
OK = '\x1b[42m[ OK ]\x1b[0m'
FAIL = "\x1b[41m[FAIL]\x1b[0m"
try:
import importlib
except ImportError:
print(FAIL, "Python version 3.5 is required,"
" but %s is installed." % sys.version)
def import_version(pkg, min_ver, fail_msg=""):
mod = None
try:
mod = importlib.import_module(pkg)
ver = mod.__version__
if Version(ver) < min_ver:
print(FAIL, "%s version %s or higher required, but %s installed."
% (pkg, min_ver, ver))
else:
print(OK, '%s version %s' % (pkg, ver))
except ImportError:
print(FAIL, '%s not installed. %s' % (pkg, fail_msg))
return mod
```
# First Check with Python version
```
print('Using python in', sys.prefix)
print(sys.version)
pyversion = Version(sys.version)
if pyversion < "3.5":
print(FAIL, "Python version 3.5 is required,"
" but %s is installed." % sys.version)
print()
# now the dependencies
for lib, required_version in list(requirements.items()):
import_version(lib, required_version)
```
## Install missing software
**Note: In this example, sklearn and matplotlib are missing in the CUAHSI JupyterHub COVID19 Kernel, and so installed below**
```
import sys
!{sys.executable} -m pip install scikit-learn
!{sys.executable} -m pip install matplotlib
```
## After installing missing libraries, run the version check for full list to ensure installation is OK.
```
for lib, required_version in list(requirements.items()):
import_version(lib, required_version)
```
# References
**Title: Waterhackweek Notebook to Check a JupyterHub Software Environment**
Source: [WHW2020_machinelearning tutorial on Github](https://github.com/waterhackweek/whw2020_machine_learning)<br />
Authors: Andreas Müller, Christina Bandaragoda<br />
[Waterhackweek OrcID: 0000-0001-7733-7419](https://orcid.org/0000-0001-7733-7419) <br />
NSF Award to [UW 1829585](https://nsf.gov/awardsearch/showAward?AWD_ID=1829585&HistoricalAwards=false) and [CUAHSI 1829744](https://nsf.gov/awardsearch/showAward?AWD_ID=1829744&HistoricalAwards=false) <br />
[Download Machine Learning Tutorial at Waterhackweek: Live unedited tutorial recorded 9/2/2020 [MP4]](https://www.hydroshare.org/resource/c59689b403b3484182b016fbcd0267ac/data/contents/wednesdayLectures2020/2020.9._Andreas.mp4)<br />
### Check out our [Intelligent Earth Zotero Library](https://www.zotero.org/groups/2526780/intelligent_earth/library) and Citation Wrangling Notebook [Open-Interoperability-References](https://github.com/waterhackweek/whw2020_machine_learning/blob/master/notebooks/Open-Interoperability-References.ipynb)
# SLU06 - Python flow control
Welcome to the world of `Python Flow Control`!
Developing a thorough grasp of the concepts and techniques in this topic will give you a solid programming foundation. We have included many exercises here, some of them challenging. Take your time and practice. We are happy to assist, and if you don't have time to complete everything within this week, you can always return to it later.
## Start by importing these packages
```
# Just for evaluating the results.
import math
import json
import hashlib
```
## Part 1 - Conditionals & Boolean Algebra
Welcome to Planet X!
Every 7 years the playmakers get together to create Game X. Candidates from around the planet prepare to entertain the world. The candidates and their characteristics are shown below:
| Candidates | Age | Height (cm) | Weight (kg) | Strength |
| - |-: |-: |-: |-: |
|Squindle|213|1810|954|6|
|Flort|73|1225|769|8|
|Hambula|82|999|642|4|
|Kistapho|942|1829|842|6|
|Vilawi|636|819|531|5|
|Anhaular|231|1681|3762|9|
|Qintari|60|1732|522|4|
|Wendu|7|3|5|2|
|Baxla|65|875|464|4|
|Uluetto|251|1362|3647|2|
|Reomult|740|975|539|9|
### 1.1) Create Candidates Dictionary
Create a nested dictionary of the candidates, `candidate_dict`, using each candidate's name as the top-level key and capturing all the characteristics above. The second level keys should be `"age"`, `"height"`, `"weight"` and `"strength"`.
For instance, accessing `candidate_dict["Squindle"]["age"]` should return `213`.
Please double check that the entries are correct as this dictionary will be used for the remaining exercises in this notebook.
Hint: below is an example of a nested dictionary. It is a dictionary of dictionaries.
nested_dict = { 'dict1': {'key_A': 'value_A'},
'dict2': {'key_B': 'value_B'}}
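The hint above can be tried out directly. Here is a small runnable sketch with invented pet data (nothing from the candidate table, so the exercise stays unspoiled):

```python
# A toy nested dictionary: top-level keys are names, second-level keys
# are characteristics. These example values are made up for illustration.
pet_dict = {
    "Rex": {"age": 4, "weight": 30},
    "Mia": {"age": 2, "weight": 5},
}

# Reading a nested value: the first key selects the inner dict,
# the second key selects the field.
print(pet_dict["Rex"]["age"])        # 4

# Updating or adding a second-level entry works the same way.
pet_dict["Mia"]["weight"] = 6
pet_dict["Rex"]["colour"] = "brown"  # new key on the inner dict
print(pet_dict["Rex"])
```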
```
# Create a dictionary with the candidates characteristics above
# Assign the dictionary to variable candidate_dict.
# candidate_dict = ...
# YOUR CODE HERE
raise NotImplementedError()
Squindle_age_hash = 'd48ff4b2f68a10fd7c86f185a6ccede0dc0f2c48538d697cb33b6ada3f1e85db'
Flort_height_hash = '6ecf763ff6e7cef7b47e6611e1bf76fe2608a2e32a97b2d88b083ae1d8d02c82'
Wendu_weight_hash = 'ef2d127de37b942baad06145e54b0c619a1f22327b2ebbcfbec78f5564afe39d'
Reomult_strength_hash = '19581e27de7ced00ff1ce50b2047e7a567c76b1cbaebabe5ef03f7c3017bb5b7'
assert isinstance(candidate_dict, dict), "Review the data type of candidate_dict."
assert len(candidate_dict) == 11, "The size of the dictionary is incorrect"
assert len(candidate_dict["Squindle"]) == 4, "The dictionary is missing some candidate characteristics"
assert len(candidate_dict["Reomult"]) == 4, "The dictionary is missing some candidate characteristics"
assert set(candidate_dict.keys()) == set(['Squindle', 'Flort', 'Hambula', 'Kistapho', 'Vilawi', 'Anhaular', 'Qintari', 'Wendu', 'Baxla', 'Uluetto', 'Reomult']), "The dictionary has incorrect candidate names"
assert set(candidate_dict["Squindle"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Flort"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Hambula"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Kistapho"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Vilawi"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Anhaular"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Qintari"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Wendu"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Baxla"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Uluetto"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert set(candidate_dict["Reomult"].keys()) == set(['age', 'height', 'strength', 'weight']), "The dictionary has mismatched candidate characteristics"
assert isinstance(candidate_dict["Squindle"]["age"], int), "Review the data type of Squindle's age."
assert isinstance(candidate_dict["Flort"]["height"], int), "Review the data type of Flort's height."
assert isinstance(candidate_dict["Wendu"]["weight"], int), "Review the data type of Wendu's weight."
assert isinstance(candidate_dict["Reomult"]["strength"], int), "Review the data type of Reomult's strength."
assert Squindle_age_hash == hashlib.sha256(json.dumps(candidate_dict["Squindle"]["age"]).encode()).hexdigest(), "The value of Squindle's age is incorrect."
assert Flort_height_hash == hashlib.sha256(json.dumps(candidate_dict["Flort"]["height"]).encode()).hexdigest(), "The value of Flort's height is incorrect."
assert Wendu_weight_hash == hashlib.sha256(json.dumps(candidate_dict["Wendu"]["weight"]).encode()).hexdigest(), "The value of Wendu's weight is incorrect."
assert Reomult_strength_hash == hashlib.sha256(json.dumps(candidate_dict["Reomult"]["strength"]).encode()).hexdigest(), "The value of Reomult's strength is incorrect."
print("Your solution appears correct!")
```
### 1.2) Order of precedence
To compete in Game X, one must be able to program in Python!
In the pre-screening round, candidates are required to arrange a set of stones (A to G) in the correct sequence in order to unlock a door that holds their entrance ticket to the Game.
Using the following boolean expression, candidates are asked to place the stones in the sequence of evaluation in Python. If in doubt, refer to the `table of precedence` to identify the order in which the operations are executed. Each operator is identified with a letter from `A` to `G`.
The solution should be a **list** where the first element is the letter of the first operation to be performed, the second element is the second operation to be performed, and so on until the last operation to be performed.
The result should resemble something like `["A", "B", "D", "G", ...]`
The expression is:
```
#You don't need to execute this cell!
variable = 3 != 1+1 and (2**3 > 1 or False)
# A B C D E F G
```
The letters that identify the operations are:
- A: `=`
- B: `!=`
- C: `+`
- D: `and`
- E: `**`
- F: `>`
- G: `or`
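Without giving away the stone order, you can probe Python's precedence rules yourself. A quick trick, sketched below with a *different* expression than the one in the exercise: add parentheses that should not change the result, and check that the value stays the same.

```python
# Arithmetic beats comparison, comparison beats `and`/`or`, and
# parentheses always go first.
a = 2 + 3 * 2 ** 2        # ** first, then *, then +  -> 2 + 12 = 14
b = (2 + (3 * (2 ** 2)))  # the same grouping made explicit
print(a, b)               # 14 14

# `and` binds tighter than `or`:
c = True or False and False    # and first -> True or False -> True
d = (True or (False and False))
print(c, d)                    # True True
```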
```
# Create a list named operators_list that contains the identifiers of the operators sorted by the execution order.
# Make sure that you type the strings exactly.
# operators_list = ...
# YOUR CODE HERE
raise NotImplementedError()
operators_hash_0 = 'a9f51566bd6705f7ea6ad54bb9deb449f795582d6529a0e22207b8981233ec58'
operators_hash_1 = 'f67ab10ad4e4c53121b6a5fe4da9c10ddee905b978d3788d2723d7bfacbe28a9'
operators_hash_2 = '333e0a1e27815d0ceee55c473fe3dc93d56c63e3bee2b3b4aee8eed6d70191a3'
operators_hash_3 = '6b23c0d5f35d1b11f9b683f0b0a617355deb11277d91ae091d399c655b87940d'
operators_hash_4 = 'df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c'
operators_hash_5 = '3f39d5c348e5b79d06e842c114e6cc571583bbf44e4b0ebfda1a01ec05745d43'
operators_hash_6 = '559aead08264d5795d3909718cdd05abd49572e84fe55590eef31a88a08fdffd'
assert isinstance(operators_list, list), "Review the data type of operators_list."
assert len(operators_list) == 7, "The number of strings is incorrect."
assert operators_hash_0 == hashlib.sha256(bytes(operators_list[0], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_1 == hashlib.sha256(bytes(operators_list[1], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_2 == hashlib.sha256(bytes(operators_list[2], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_3 == hashlib.sha256(bytes(operators_list[3], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_4 == hashlib.sha256(bytes(operators_list[4], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_5 == hashlib.sha256(bytes(operators_list[5], encoding='utf8')).hexdigest(), "At least one value is incorrect."
assert operators_hash_6 == hashlib.sha256(bytes(operators_list[6], encoding='utf8')).hexdigest(), "At least one value is incorrect."
print("Your solution is correct!")
```
### 1.3) if statements
The candidates' colors are assigned using the following Python code.
<img src="./media/if_code.png" />
Please use your logic, rather than executing the code, and decide which colors are assigned to the following players:
* Kistapho
* Vilawi
* Baxla
* Qintari
```
# Assign values to the following variables to show what the outcome would be if the code above was executed.
# kistapho_color = ...
# vilawi_color = ...
# baxla_color = ...
# qintari_color = ...
# YOUR CODE HERE
raise NotImplementedError()
kistapho_color_hash = "16477688c0e00699c6cfa4497a3612d7e83c532062b64b250fed8908128ed548"
vilawi_color_hash = "8e0a1b0ada42172886fd1297e25abf99f14396a9400acbd5f20da20289cff02f"
baxla_color_hash = "c685a2c9bab235ccdd2ab0ea92281a521c8aaf37895493d080070ea00fc7f5d7"
qintari_color_hash = "b1f51a511f1da0cd348b8f8598db32e61cb963e5fc69e2b41485bf99590ed75a"
assert isinstance(kistapho_color, str), "Variable kistapho_color should be a string."
assert isinstance(vilawi_color, str), "Variable vilawi_color should be a string."
assert isinstance(baxla_color, str), "Variable baxla_color should be a string."
assert isinstance(qintari_color, str), "Variable qintari_color should be a string."
assert kistapho_color_hash == hashlib.sha256(bytes(kistapho_color, encoding='utf8')).hexdigest(), "The value of kistapho_color is incorrect."
assert vilawi_color_hash == hashlib.sha256(bytes(vilawi_color, encoding='utf8')).hexdigest(), "The value of vilawi_color is incorrect."
assert baxla_color_hash == hashlib.sha256(bytes(baxla_color, encoding='utf8')).hexdigest(), "The value of baxla_color is incorrect."
assert qintari_color_hash == hashlib.sha256(bytes(qintari_color, encoding='utf8')).hexdigest(), "The value of qintari_color is incorrect."
print("Your solution is correct!")
```
## Part 2 - While & For Loops
In order to qualify for Game X, the candidates must meet the following requirements:
* Height >= 800cm
* 5 < BMI < 20
### 2.1) Calculate BMI
Read candidates' characteristics from `candidate_dict`, and calculate their `BMI` using the following formula.
Update the dictionary with their BMI using the key `bmi`. **Pay attention to the units** and convert if necessary.
$$BMI = \frac{Weight}{Height^{2}} $$
where
Weight is measured in kg
Height is measured in meters
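Since the table gives heights in centimeters, a unit conversion is needed before applying the formula. A minimal sketch with made-up values (not taken from the candidate table):

```python
# BMI = weight / height**2, with weight in kg and height in METERS.
# The candidate table gives heights in cm, so divide by 100 first.
weight_kg = 70   # invented example value
height_cm = 175  # invented example value

height_m = height_cm / 100        # 1.75 m
bmi = weight_kg / height_m ** 2   # 70 / 3.0625
print(round(bmi, 2))              # 22.86
```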
```
# Calculate BMI for each candidate and add the results to candidate_dict.
# YOUR CODE HERE
raise NotImplementedError()
assert len(candidate_dict["Squindle"]) == 5, "The dictionary is missing some candidate characteristics"
assert set(candidate_dict["Squindle"].keys()) == set(['age', 'height', 'strength', 'weight', 'bmi']), "The dictionary has mismatched candidate characteristics"
assert math.isclose(candidate_dict["Squindle"]["bmi"], 2.91, abs_tol=0.01), "The BMI for Squindle is incorrect."
assert math.isclose(candidate_dict["Flort"]["bmi"], 5.12, abs_tol=0.01), "The BMI for Flort is incorrect."
assert math.isclose(candidate_dict["Hambula"]["bmi"], 6.43, abs_tol=0.01), "The BMI for Hambula is incorrect."
assert math.isclose(candidate_dict["Kistapho"]["bmi"], 2.52, abs_tol=0.01), "The BMI for Kistapho is incorrect."
assert math.isclose(candidate_dict["Vilawi"]["bmi"], 7.92, abs_tol=0.01), "The BMI for Vilawi is incorrect."
assert math.isclose(candidate_dict["Anhaular"]["bmi"], 13.31, abs_tol=0.01), "The BMI for Anhaular is incorrect."
assert math.isclose(candidate_dict["Qintari"]["bmi"], 1.74, abs_tol=0.01), "The BMI for Qintari is incorrect."
assert math.isclose(candidate_dict["Wendu"]["bmi"], 5555.56, abs_tol=0.01), "The BMI for Wendu is incorrect."
assert math.isclose(candidate_dict["Baxla"]["bmi"], 6.06, abs_tol=0.01), "The BMI for Baxla is incorrect."
assert math.isclose(candidate_dict["Uluetto"]["bmi"], 19.66, abs_tol=0.01), "The BMI for Uluetto is incorrect."
assert math.isclose(candidate_dict["Reomult"]["bmi"], 5.67, abs_tol=0.01), "The BMI for Reomult is incorrect."
print("Your solution is correct!")
```
### 2.2) Power Potion to Increase Height
Baby `Wendu` really wants to compete in the game. She is very advanced for her age, but her height doesn't meet the minimum requirement for Game X (i.e. `height >= 800cm`). Luckily there is a power potion that she can inhale that will double her height every minute.
Use a loop to calculate the `minimum time` she needs to inhale the power potion to reach the height requirement. Update the `candidates dictionary` with her new `height` and `BMI`.
Note - If you want to re-execute the cell after it completes successfully the first time around, you may need to reset the dictionary by re-running the dictionary creation cell in exercise 1.1
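"Doubles every minute" is exponential growth, which is exactly what a `while` loop with a counter captures. A generic sketch with invented numbers (a plant growing toward a 100 cm target, not Wendu's actual values):

```python
# Generic doubling loop: how many steps until `size` reaches `target`?
size = 3      # starting size (made up)
target = 100  # required size (made up)
steps = 0
while size < target:
    size *= 2    # doubles once per step
    steps += 1
print(steps, size)  # 6 192  (3 -> 6 -> 12 -> 24 -> 48 -> 96 -> 192)
```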
```
# Calculate the minimum time for Wendu to reach the required height for Game X,
# and store it in the variable time_taken
# update candidate dictionary for Wendu
# time_taken = ...
# candidate_dict["Wendu"]["height"] = ...
# candidate_dict["Wendu"]["bmi"] = ...
# YOUR CODE HERE
raise NotImplementedError()
time_hash = '19581e27de7ced00ff1ce50b2047e7a567c76b1cbaebabe5ef03f7c3017bb5b7'
height_hash = "b51e45a12fbae3d0ee2bf77f1a4f80cbf642e2b4d1c237d2c0f7053a54f6b388"
assert isinstance(candidate_dict["Wendu"]["height"], int), "Review the data type of Wendu's height."
assert isinstance(candidate_dict["Wendu"]["bmi"], float), "Review the data type of Wendu's BMI."
assert isinstance(time_taken, int), "Review the data type of the time taken."
assert time_hash == hashlib.sha256(json.dumps(time_taken).encode()).hexdigest(), "The value of time taken is incorrect."
assert height_hash == hashlib.sha256(json.dumps(candidate_dict["Wendu"]["height"]).encode()).hexdigest(), "The value of Wendu's height is incorrect."
assert math.isclose(candidate_dict["Wendu"]["bmi"], 0.021, abs_tol=0.001), "The BMI for Wendu is incorrect."
print("Your solution is correct!")
```
### 2.3) Balls of Hopstow to Increase Weight
Now baby Wendu needs to gain some weight, as her BMI falls well below the requirement (`5 < BMI < 20`).
In Planet X, eating `Balls of Hopstow` increases a person’s weight by 10 kg. How many balls of Hopstow does Wendu need to eat, in order to enter the game? Update the `candidates dictionary` with her new `weight` and her new `BMI`.
Hint: Try using a while loop
```
# This cell contains the original characteristics of Wendu.
# Do not run it after executing your solution,
# unless you want to reset to the original values.
wendu_weight_original = candidate_dict["Wendu"]["weight"]
wendu_bmi_original = candidate_dict["Wendu"]["bmi"]
# Calculate the number of balls of Hopstow Wendu needs to consume, and store it in the variable num_balls.
# keep track of Wendu's BMI changes by storing each value in bmi_list, starting with her original BMI
# update candidate_dict with Wendu's new changes
# candidate_dict["Wendu"]["weight"] = wendu_weight_original
# candidate_dict["Wendu"]["bmi"] = wendu_bmi_original
# bmi_list = [...]
# num_balls = ...
# YOUR CODE HERE
raise NotImplementedError()
num_balls_hash = '85daaf6f7055cd5736287faed9603d712920092c4f8fd0097ec3b650bf27530e'
bmi_length_hash = "3038bfb575bee6a0e61945eff8784835bb2c720634e42734678c083994b7f018"
weight_hash = "efec9aaf21433bf806e7681de337cac7dbecfbf17b22ad3bcdfe5e46e564f32f"
assert isinstance(num_balls, int), "Review the data type of the time taken."
assert isinstance(candidate_dict["Wendu"]["weight"], int), "Review the data type of Wendu's weight."
assert isinstance(candidate_dict["Wendu"]["bmi"], float), "Review the data type of Wendu's BMI."
assert num_balls_hash == hashlib.sha256(json.dumps(num_balls).encode()).hexdigest(), "The value of num_balls is incorrect."
assert bmi_length_hash == hashlib.sha256(json.dumps(len(bmi_list)).encode()).hexdigest(), "The number of items in bmi_list is incorrect."
assert weight_hash == hashlib.sha256(json.dumps(candidate_dict["Wendu"]["weight"]).encode()).hexdigest(), "The weight for Wendu is incorrect."
assert math.isclose(candidate_dict["Wendu"]["bmi"], 5.02, abs_tol=0.01), "The BMI for Wendu is incorrect."
print("Your solution is correct!")
```
### 2.4) Calculate the Prize Pool
The prize pool for Game X consists of `100 coins`, where the value of each coin `increases by 3` incrementally, i.e. the first coin is worth 1 point, the second is worth 4 points, the third is worth 7 points, etc.
What is the total value of the prize pool?
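The coin values form an arithmetic progression (a fixed step between consecutive terms), which can be summed with a loop or with the closed-form series formula. A sketch with smaller, made-up numbers (5 coins, step 2), so the actual answer stays unspoiled:

```python
# Sum an arithmetic series with a loop: 5 coins worth 1, 3, 5, 7, 9.
total = 0
value = 1
for _ in range(5):
    total += value
    value += 2
print(total)  # 1+3+5+7+9 = 25

# Cross-check with the closed form: n/2 * (2a + (n-1)d)
n, a, d = 5, 1, 2
print(n * (2 * a + (n - 1) * d) // 2)  # 25
```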
```
# store the value in the variable prize_pool
# prize_pool = ...
# YOUR CODE HERE
raise NotImplementedError()
pool_hash = '4bd6fb30bbbaba340fb0267845a29e13ac60300746152fb9ce3fc7434586e207'
assert isinstance(prize_pool, int), "Review the data type of the prize_pool."
assert pool_hash == hashlib.sha256(json.dumps(prize_pool).encode()).hexdigest(), "The value of prize_pool is incorrect."
print("Your solution is correct!")
```
## Part 3 - Using Conditionals and Loops together
### 3.1) Identify the Players that Qualify
Using the `candidates dictionary` above, which of the candidates now satisfy the following criteria and thus qualify?
* Height >= 800cm
* 5 < BMI < 20
Use a loop and store their names in the `player_list`, and then sort them in `alphabetical order`.
```
# Create a list named player_list to store the names of players that qualify for Game X
# player_list = [...]
# YOUR CODE HERE
raise NotImplementedError()
player1_hash = "a15517dc509fb31c108fa7e7453ae7725b1e70956ac6479a45dd9cc556e92d44"
player2_hash = "f2321c68d5b7f5aaf3017399211553131019ca3700b0576241b8ad9a7a4e0fcb"
player3_hash = "02a0cd7425a15c4c5dd325bc95d1be6c9db50575107503281914e0e6adbbcf79"
player4_hash = "278fd67d9c9894d3f3d541cb49740ec712c182de1690cd309749d70f1a7da69b"
player5_hash = "3f01098b72c556509334cbd8fe6a10d20022bdca86e23e9d8ce44ee5c86b8ad8"
player6_hash = "f9d5d62c984e539fcda4a91f02eb091064a8a8e682ca8d90aa89ed62b2d53a85"
player7_hash = "8eab190b0881c802bb2118b40b5a88b159970dea8f77ee7de8c03056d47811e5"
player8_hash = "318b32300a722c4e1088019997d7b3c3fbef82efeebdb4124901d29192ff7419"
player_len_hash = "2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3"
assert isinstance(player_list, list), "Variable player_list should be a list."
assert player_len_hash == hashlib.sha256(json.dumps(len(player_list)).encode()).hexdigest(), "The number of items in player_list is incorrect."
assert player1_hash == hashlib.sha256(bytes(player_list[0], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player2_hash == hashlib.sha256(bytes(player_list[1], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player3_hash == hashlib.sha256(bytes(player_list[2], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player4_hash == hashlib.sha256(bytes(player_list[3], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player5_hash == hashlib.sha256(bytes(player_list[4], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player6_hash == hashlib.sha256(bytes(player_list[5], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player7_hash == hashlib.sha256(bytes(player_list[6], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
assert player8_hash == hashlib.sha256(bytes(player_list[7], encoding='utf8')).hexdigest(), "At least one value in player_list is incorrect."
print("Your solution is correct!")
```
### 3.2) Climb that Wall
Now that we have a `list of the players`, the game begins! The first challenge is a climbing competition. The players will be given `5 minutes` to ascend the `Wall to Nowhere`.
Unfortunately their climbing rate is proportional to their height. Players shorter than `1200 cm` can climb **$\frac{1}{2}$ of their height** per minute, while the other players climb at a rate of **$\frac{1}{3}$ of their height** per minute.
As a show of respect for the elders, players older than `500 years` start at `15 meters` above ground.
This can be solved using a simple mathematical equation, however, you can also try to implement it as a `for loop` challenge.
Store the result of each player in the dictionary `climb_dict_5min`, with the player names being the top-level key, i.e. `climb_dict_5min["Wendu"]` shows the height achieved, **in centimeters**, by Wendu in the 5 minute period. Please `round down` your answers to the nearest integer values. Hint - check out the `math.floor()` function.
```
# Create a dictionary named climb_dict_5min that uses the player names as the keys
# to store the height they reached after 5 minutes
# climb_dict_5min = ...
# YOUR CODE HERE
raise NotImplementedError()
player1_hash = "9e088cddb90e91e1ec3e4cae2aee41bd65d74434c60749d67e12fa74d5de9642"
player2_hash = "8d0c7eec258a5cfd81e86404ef98ee05d1a1aef3bf2f5b6e82815cc951497a49"
player3_hash = "2a6a41cdfcbe78c1f94c27f244b17071896f60dc16d5cb3a75708d9cac85c3ff"
player4_hash = "7e3db2c53a88a81408d72790cd323f8a8a27fafe588c2fab87e4a4d3f437228c"
player5_hash = "d15eff82084cddbc1b35df821f1a75d41977d9a7a0a0499f4accf528a8fb88f7"
player6_hash = "2d2c33c52e3df492704c04881c254aa93f37fc0b89dfbafaa9687340e0696ea9"
player7_hash = "8e3330aeb5e96211f56a9521e80ab8b0a841921b73bc64131abee2701dff81eb"
player8_hash = "9c67d3b75b1e8364898454889940f9cb70b4022532d0aaaf1f1d979504b149f4"
dict_len_hash = "2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3"
assert isinstance(climb_dict_5min, dict), "Variable climb_dict_5min should be a dictionary."
assert dict_len_hash == hashlib.sha256(json.dumps(len(climb_dict_5min)).encode()).hexdigest(), "The number of items in climb_dict_5min is incorrect."
assert player1_hash == hashlib.sha256(json.dumps(climb_dict_5min["Flort"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player2_hash == hashlib.sha256(json.dumps(climb_dict_5min["Hambula"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player3_hash == hashlib.sha256(json.dumps(climb_dict_5min["Vilawi"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player4_hash == hashlib.sha256(json.dumps(climb_dict_5min["Anhaular"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player5_hash == hashlib.sha256(json.dumps(climb_dict_5min["Wendu"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player6_hash == hashlib.sha256(json.dumps(climb_dict_5min["Baxla"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player7_hash == hashlib.sha256(json.dumps(climb_dict_5min["Uluetto"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
assert player8_hash == hashlib.sha256(json.dumps(climb_dict_5min["Reomult"]).encode()).hexdigest(), "At least one value in climb_dict_5min is incorrect."
print("Your solution is correct!")
```
### 3.3) Identify Top Two Players
At the end of 5 minutes, what is the sequence of players? Store this answer in the list `climb_result_5min`, with the first element in the list representing the player coming first.
Finally, which two players finished on top? Store their names separately as `first_player` and `second_player`.
Hint - you may want to use the function `sorted()` (https://stackoverflow.com/questions/20944483/python-3-sort-a-dict-by-its-values)
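The pattern from that hint, sketched with made-up scores (not the actual climbing results): pass the dictionary's `get` method as the `key` so `sorted()` orders the keys by their values.

```python
# Sort a dictionary's keys by their values, highest first.
scores = {"ann": 12, "bob": 30, "cid": 7}  # invented example values

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)     # ['bob', 'ann', 'cid']
print(ranking[0])  # the winner: 'bob'
```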
```
# Create a list named climb_result_5min to store the results of the climbing competition, with the first player
# being the first item in the list
# Then store the names of the first two players in separate variables
# climb_result_5min = [...]
# first_player = ...
# second_player = ...
# YOUR CODE HERE
raise NotImplementedError()
player1_hash = "3f01098b72c556509334cbd8fe6a10d20022bdca86e23e9d8ce44ee5c86b8ad8"
player2_hash = "8eab190b0881c802bb2118b40b5a88b159970dea8f77ee7de8c03056d47811e5"
player3_hash = "a15517dc509fb31c108fa7e7453ae7725b1e70956ac6479a45dd9cc556e92d44"
player4_hash = "318b32300a722c4e1088019997d7b3c3fbef82efeebdb4124901d29192ff7419"
player5_hash = "278fd67d9c9894d3f3d541cb49740ec712c182de1690cd309749d70f1a7da69b"
player6_hash = "f9d5d62c984e539fcda4a91f02eb091064a8a8e682ca8d90aa89ed62b2d53a85"
player7_hash = "f2321c68d5b7f5aaf3017399211553131019ca3700b0576241b8ad9a7a4e0fcb"
player8_hash = "02a0cd7425a15c4c5dd325bc95d1be6c9db50575107503281914e0e6adbbcf79"
num_result_hash = "2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3"
assert isinstance(climb_result_5min, list), "Review the data type of climb_result_5min."
assert num_result_hash == hashlib.sha256(json.dumps(len(climb_result_5min)).encode()).hexdigest(), "The number of items in climb_result_5min is incorrect."
assert player1_hash == hashlib.sha256(bytes(climb_result_5min[0], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player2_hash == hashlib.sha256(bytes(climb_result_5min[1], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player3_hash == hashlib.sha256(bytes(climb_result_5min[2], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player4_hash == hashlib.sha256(bytes(climb_result_5min[3], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player5_hash == hashlib.sha256(bytes(climb_result_5min[4], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player6_hash == hashlib.sha256(bytes(climb_result_5min[5], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player7_hash == hashlib.sha256(bytes(climb_result_5min[6], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player8_hash == hashlib.sha256(bytes(climb_result_5min[7], encoding='utf8')).hexdigest(), "At least one value in climb_result_5min is incorrect."
assert player1_hash == hashlib.sha256(bytes(first_player, encoding='utf8')).hexdigest(), "The name of first_player is incorrect."
assert player2_hash == hashlib.sha256(bytes(second_player, encoding='utf8')).hexdigest(), "The name of second_player is incorrect."
print("Your solution is correct!")
```
### 3.4) Fill that Jar (Relay) - Optional (ungraded)
Two teams have been formed, with the following composition:
team_1 : ['Reomult', 'Anhaular', 'Wendu', 'Uluetto']
team_2 : ['Vilawi', 'Flort', 'Hambula', 'Baxla']
They enter a relay-style challenge, each team needs to fill a tank with `1000 litres of water`.
Following the player sequence given above, the players have 1 minute each to carry jars of water from a starting point to their tank. It so happens that the amount each player manages to carry is proportional to their `strength * weight`, i.e.
$$ \text{water filled} = \text{player strength} \times \text{player weight} \times \frac{1}{50} $$
After each player has had a turn, the first player starts again. This process continues until the entire tank has been filled.
Use loops and conditionals to determine if `team_1` or `team_2` wins, who is the `last player in the winning team` and `how many fills` did it take.
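The "first player starts again" rule is a round-robin cycle, which the modulo operator handles neatly. A generic sketch with an invented three-player team and a tiny 20-litre tank (none of the actual game numbers):

```python
# Round-robin turn taking: cycle through a team with `%` until the
# tank is full. All numbers here are made up for illustration.
team = ["a", "b", "c"]
carry = {"a": 3, "b": 5, "c": 4}  # litres carried per turn

level = 0
turns = 0
while level < 20:
    player = team[turns % len(team)]  # wraps back to the first player
    level += carry[player]
    turns += 1
print(turns, player, level)  # 5 b 20 -> player 'b' made the final fill
```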
```
# Determine which is the winning team, the last player in the winning team for the relay,
# and how many fills did it take
# To verify that your program is working correctly, it might be helpful to store the player names in each round
# as well as using lists to capture the change of water levels for each team
team_1 = ['Reomult', 'Anhaular', 'Wendu', 'Uluetto']
team_2 = ['Vilawi', 'Flort', 'Hambula', 'Baxla']
# winning_team = ... team_1 or team_2
# last_player = ...
# number_of_fills = ...
# YOUR CODE HERE
raise NotImplementedError()
winning_team_hash = "7c832b8fdf7414a64a178f34053eca97053f3f6336d5d4efdfe1bd5f7c8c5f8b"
last_player_hash = "3f01098b72c556509334cbd8fe6a10d20022bdca86e23e9d8ce44ee5c86b8ad8"
num_fills_hash = "ef2d127de37b942baad06145e54b0c619a1f22327b2ebbcfbec78f5564afe39d"
assert isinstance(winning_team, str), "Review the data type of winning_team."
assert isinstance(last_player, str), "Review the data type of last_player."
assert isinstance(number_of_fills, int), "Review the data type of number_of_fills."
assert winning_team_hash == hashlib.sha256(bytes(winning_team, encoding='utf8')).hexdigest(), "winning_team is incorrect."
assert last_player_hash == hashlib.sha256(bytes(last_player, encoding='utf8')).hexdigest(), "last_player is incorrect."
assert num_fills_hash == hashlib.sha256(json.dumps(number_of_fills).encode()).hexdigest(), "The number_of_fills is incorrect."
print("Your solution is correct!")
```
## Part 4 - List Comprehension
### 4.1) Identify Digits in Strings
The prize gallery in Game X is a labyrinth with `1000 rooms` numbered from `0 to 999`. The playmaker informs the team that the prizes are stored in a `room with room number divisible by 3`.
Use List Comprehension to save a list of rooms potentially containing prizes, in the list `possible_prize_rooms`. The list should contain the rooms in ascending order.
Hint - use `str()` to convert a value from an integer to a string.
Hint 2 - if you are new to list comprehension, it may help to write out how you would implement the solution using a traditional for loop first. It could be easier to convert it to the list comprehension after you nailed down the logic.
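Following Hint 2, here is a traditional loop and its list-comprehension equivalent side by side, using a different, made-up criterion (numbers whose string form contains the digit `7`) so the prize-room rule stays yours to work out:

```python
# Filter 0..20 for numbers containing the digit '7', two ways.
with_loop = []
for n in range(21):
    if "7" in str(n):        # str() turns the int into a string
        with_loop.append(n)

# Same logic, collapsed into a single list comprehension.
with_comp = [n for n in range(21) if "7" in str(n)]
print(with_loop, with_comp)  # [7, 17] [7, 17]
```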
```
# Use list comprehension to identify rooms potentially containing prizes
# possible_prize_rooms = [...]
# YOUR CODE HERE
raise NotImplementedError()
num_results_hash = "058d5d43bf485bf78dda1ed4eaf8b78e3106f3c6364c625ead2cc3aeb1908237"
result1_hash = "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9"
result2_hash = "624b60c58c9d8bfb6ff1886c2fd605d2adeb6ea4da576068201b6c6958ce93f4"
result3_hash = "284b7e6d788f363f910f7beb1910473e23ce9d6c871f1ce0f31f22a982d48ad4"
result4_hash = "5d85be4cc5af40a7cf2c4f0818d92689c185fdea6566745ef26305d80413f483"
assert isinstance(possible_prize_rooms, list), "Variable possible_prize_rooms should be a list."
assert num_results_hash == hashlib.sha256(json.dumps(len(possible_prize_rooms)).encode()).hexdigest(), "The number of items in possible_prize_rooms is incorrect."
assert result1_hash == hashlib.sha256(json.dumps(possible_prize_rooms[0]).encode()).hexdigest(), "At least one value in possible_prize_rooms is incorrect."
assert result2_hash == hashlib.sha256(json.dumps(possible_prize_rooms[10]).encode()).hexdigest(), "At least one value in possible_prize_rooms is incorrect."
assert result3_hash == hashlib.sha256(json.dumps(possible_prize_rooms[200]).encode()).hexdigest(), "At least one value in possible_prize_rooms is incorrect."
assert result4_hash == hashlib.sha256(json.dumps(possible_prize_rooms[270]).encode()).hexdigest(), "At least one value in possible_prize_rooms is incorrect."
print("Your solution is correct!")
```
### 4.2) Challenge Question on List Comprehension - optional (ungraded)
Some rooms in the prize gallery contain nasty bugs and explosives that the team will want to avoid. The playmaker kindly informs the team that these rooms have numbers divisible by at least one digit from 4 to 9. Use a nested list comprehension to find the buggy rooms and store them in `possible_buggy_rooms`.
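If nested list comprehensions are new to you, here is a minimal example unrelated to the room numbers: the `for` clauses read left to right, exactly like the equivalent nested loops.

```python
# Flatten a list of lists with a nested comprehension.
pairs = [[1, 2], [3, 4], [5, 6]]
flat = [value for sublist in pairs for value in sublist]
print(flat)  # [1, 2, 3, 4, 5, 6]

# The same logic written as explicit nested loops:
flat_loop = []
for sublist in pairs:
    for value in sublist:
        flat_loop.append(value)
```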
```
# Use list comprehension to identify potentially buggy rooms
# possible_buggy_rooms = [...]
# YOUR CODE HERE
raise NotImplementedError()
num_results_hash = "4299da7466df09516d290f7a99c8b7a2fa94766eb94a61c24e1ce8f6ca80af44"
result1_hash = "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9"
result2_hash = "4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a"
result3_hash = "4771bef2c04a34b548b77ea7581cf821152d9dea9c2c85151a07856fe3639314"
result4_hash = "83cf8b609de60036a8277bd0e96135751bbc07eb234256d4b65b893360651bf2"
assert isinstance(possible_buggy_rooms, list), "Variable possible_buggy_rooms should be a list."
assert num_results_hash == hashlib.sha256(json.dumps(len(possible_buggy_rooms)).encode()).hexdigest(), "The number of items in possible_buggy_rooms is incorrect."
assert result1_hash == hashlib.sha256(json.dumps(possible_buggy_rooms[0]).encode()).hexdigest(), "At least one value in possible_buggy_rooms is incorrect."
assert result2_hash == hashlib.sha256(json.dumps(possible_buggy_rooms[1]).encode()).hexdigest(), "At least one value in possible_buggy_rooms is incorrect."
assert result3_hash == hashlib.sha256(json.dumps(possible_buggy_rooms[300]).encode()).hexdigest(), "At least one value in possible_buggy_rooms is incorrect."
assert result4_hash == hashlib.sha256(json.dumps(possible_buggy_rooms[580]).encode()).hexdigest(), "At least one value in possible_buggy_rooms is incorrect."
print("Your solution is correct!")
```
### 4.3) Symmetric matrix
As a final challenge, the playmaker asks the players to create the following list of lists (matrix) using List Comprehension.
$
\begin{bmatrix}
\begin{bmatrix}0 & 1 & 2 & 3 & 4\end{bmatrix},\\
\begin{bmatrix}1 & 2 & 3 & 4 & 5\end{bmatrix},\\
\begin{bmatrix}2 & 3 & 4 & 5 & 6\end{bmatrix},\\
\begin{bmatrix}3 & 4 & 5 & 6 & 7\end{bmatrix},\\
\begin{bmatrix}4 & 5 & 6 & 7 & 8\end{bmatrix}
\end{bmatrix}
$
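As a generic illustration of building a list of lists (deliberately not the matrix above), an inner comprehension builds each row while the outer one collects the rows:

```python
# A 3x3 multiplication table as a list of lists.
mul_table = [[row * col for col in range(1, 4)] for row in range(1, 4)]
print(mul_table)  # [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
```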
```
# Create the list of lists above using list comprehension.
# The result should be a list assigned to matrix_list.
# matrix_list = ...
# YOUR CODE HERE
raise NotImplementedError()
matrix_hash = 'ef2d127de37b942baad06145e54b0c619a1f22327b2ebbcfbec78f5564afe39d'
assert isinstance(matrix_list, list), "Review the data type of matrix_list."
assert matrix_hash == hashlib.sha256(json.dumps(len(matrix_list)).encode()).hexdigest(), "The length of matrix_list is incorrect."
assert matrix_hash == hashlib.sha256(json.dumps(len(matrix_list[0])).encode()).hexdigest(), "The length of the first element of matrix_list is incorrect."
assert matrix_hash == hashlib.sha256(json.dumps(matrix_list[2][3]).encode()).hexdigest(), "Some values are incorrect."
assert matrix_hash == hashlib.sha256(json.dumps(matrix_list[3][2]).encode()).hexdigest(), "Some values are incorrect."
print("Your solution is correct!")
```
### Extra fact:
There is a [story](http://mathandmultimedia.com/2010/09/15/sum-first-n-positive-integers/) about how [Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss#Anecdotes) discovered a formula to calculate the sum of the first `n` positive integers. If there is a formula or a simple calculation that you can perform to avoid extensive code, why not use it instead?
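For example, Gauss's closed form $n(n+1)/2$ replaces an $O(n)$ loop with constant-time arithmetic:

```python
n = 100
# Closed-form sum of the first n positive integers vs. the explicit sum.
closed_form = n * (n + 1) // 2
print(closed_form, closed_form == sum(range(1, n + 1)))  # 5050 True
```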
# Submit your work!
To submit your work, [follow the steps here, in the step "Grading the Exercise Notebook"!](https://github.com/LDSSA/ds-prep-course-2022#22---working-on-the-learning-units)
---
```
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.layers import Reshape, Concatenate, Flatten, Lambda
from keras.models import Model
from keras.losses import binary_crossentropy, kullback_leibler_divergence
from keras.optimizers import Adam
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator
from keras import backend as K
import matplotlib.pyplot as plt
import json
import glob
from sklearn.model_selection import train_test_split
import numpy as np
from io import BytesIO
import PIL
from IPython.display import clear_output, Image, display, HTML
def load_icons(train_size=0.85):
icon_index = json.load(open('icons/index.json'))
x = []
img_rows, img_cols = 32, 32
for icon in icon_index:
if icon['name'].endswith('_filled'):
continue
img_path = 'icons/png32/%s.png' % icon['name']
img = load_img(img_path, grayscale=True, target_size=(img_rows, img_cols))
img = img_to_array(img)
x.append(img)
x = np.asarray(x) / 255
x_train, x_val = train_test_split(x, train_size=train_size)
return x_train, x_val
x_train, x_test = load_icons()
x_train.shape, x_test.shape
x_train.shape
def create_autoencoder():
input_img = Input(shape=(32, 32, 1))
channels = 4
x = input_img
for i in range(5):
left = Conv2D(channels, (3, 3), activation='relu', padding='same')(x)
right = Conv2D(channels, (2, 2), activation='relu', padding='same')(x)
conc = Concatenate()([left, right])
x = MaxPooling2D((2, 2), padding='same')(conc)
channels *= 2
x = Dense(channels)(x)
for i in range(5):
x = Conv2D(channels, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
channels //= 2
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
return autoencoder
autoencoder = create_autoencoder()
autoencoder.summary()
from keras.callbacks import TensorBoard
autoencoder.fit(x_train, x_train,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test, x_test),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
cols = 25
idx = np.random.randint(x_test.shape[0], size=cols)
sample = x_test[idx]
decoded_imgs = autoencoder.predict(sample)
decoded_imgs.shape
def decode_img(tile):
tile = tile.reshape(tile.shape[:-1])
tile = np.clip(tile * 255, 0, 255).astype('uint8')  # PIL expects uint8 pixel values
return PIL.Image.fromarray(tile)
overview = PIL.Image.new('RGB', (cols * 36 + 4, 64 + 12), (128, 128, 128))
for idx in range(cols):
overview.paste(decode_img(sample[idx]), (idx * 36 + 4, 4))
overview.paste(decode_img(decoded_imgs[idx]), (idx * 36 + 4, 40))
f = BytesIO()
overview.save(f, 'png')
display(Image(data=f.getvalue()))
def augment(icons):
aug_icons = []
for icon in icons:
for flip in range(4):
for rotation in range(4):
aug_icons.append(icon)
icon = np.rot90(icon)
icon = np.fliplr(icon)
return np.asarray(aug_icons)
x_train_aug = augment(x_train)
x_test_aug = augment(x_test)
x_train_aug.shape
from keras.callbacks import TensorBoard
autoencoder = create_autoencoder()
autoencoder.fit(x_train_aug, x_train_aug,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test_aug, x_test_aug),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
cols = 25
idx = np.random.randint(x_test.shape[0], size=cols)
sample = x_test[idx]
decoded_imgs = autoencoder.predict(sample)
decoded_imgs.shape
def decode_img(tile, factor=1.0):
tile = tile.reshape(tile.shape[:-1])
tile = np.clip(tile * 255, 0, 255).astype('uint8')  # PIL expects uint8 pixel values
return PIL.Image.fromarray(tile)
overview = PIL.Image.new('RGB', (cols * 32, 64 + 20), (128, 128, 128))
for idx in range(cols):
overview.paste(decode_img(sample[idx]), (idx * 32, 5))
overview.paste(decode_img(decoded_imgs[idx]), (idx * 32, 42))
f = BytesIO()
overview.save(f, 'png')
display(Image(data=f.getvalue()))
batch_size = 250
latent_space_depth = 128
def sample_z(args):
z_mean, z_log_var = args
eps = K.random_normal(shape=(batch_size, latent_space_depth), mean=0., stddev=1.)
return z_mean + K.exp(z_log_var / 2) * eps
def VariationalAutoEncoder(num_pixels):
input_img = Input(shape=(32, 32, 1))
channels = 4
x = input_img
for i in range(5):
left = Conv2D(channels, (3, 3), activation='relu', padding='same')(x)
right = Conv2D(channels, (2, 2), activation='relu', padding='same')(x)
conc = Concatenate()([left, right])
x = MaxPooling2D((2, 2), padding='same')(conc)
channels *= 2
x = Dense(channels)(x)
encoder_hidden = Flatten()(x)
z_mean = Dense(latent_space_depth, activation='linear')(encoder_hidden)
z_log_var = Dense(latent_space_depth, activation='linear')(encoder_hidden)
def KL_loss(y_true, y_pred):
return 0.0001 * K.sum(K.exp(z_log_var) + K.square(z_mean) - 1 - z_log_var, axis=1)
def reconstruction_loss(y_true, y_pred):
y_true = K.batch_flatten(y_true)
y_pred = K.batch_flatten(y_pred)
return binary_crossentropy(y_true, y_pred)
def total_loss(y_true, y_pred):
return reconstruction_loss(y_true, y_pred) + KL_loss(y_true, y_pred)
z = Lambda(sample_z, output_shape=(latent_space_depth, ))([z_mean, z_log_var])
decoder_in = Input(shape=(latent_space_depth,))
d_x = Reshape((1, 1, latent_space_depth))(decoder_in)
e_x = Reshape((1, 1, latent_space_depth))(z)
for i in range(5):
conv = Conv2D(channels, (3, 3), activation='relu', padding='same')
upsampling = UpSampling2D((2, 2))
d_x = conv(d_x)
d_x = upsampling(d_x)
e_x = conv(e_x)
e_x = upsampling(e_x)
channels //= 2
final_conv = Conv2D(1, (3, 3), activation='sigmoid', padding='same')
auto_decoded = final_conv(e_x)
decoder_out = final_conv(d_x)
decoder = Model(decoder_in, decoder_out)
auto_encoder = Model(input_img, auto_decoded)
auto_encoder.compile(optimizer=Adam(lr=0.001),
loss=total_loss,
metrics=[KL_loss, reconstruction_loss])
return auto_encoder, decoder
var_auto_encoder, decoder = VariationalAutoEncoder(32)
var_auto_encoder.summary()
decoder.summary()
def truncate_to_batch(x):
l = x.shape[0]
return x[:l - l % batch_size, :, :, :]
x_train_trunc = truncate_to_batch(x_train)
x_test_trunc = truncate_to_batch(x_test)
x_train_trunc.shape, x_test_trunc.shape
var_auto_encoder.fit(x_train_trunc, x_train_trunc, verbose=1,
batch_size=batch_size, epochs=100,
validation_data=(x_test_trunc, x_test_trunc))
random_number = np.asarray([[np.random.normal()
for _ in range(latent_space_depth)]])
img_width, img_height = 32, 32
def decode_img(a):
a = np.clip(a * 256, 0, 255).astype('uint8')
return PIL.Image.fromarray(a)
decode_img(decoder.predict(random_number).reshape(img_width, img_height))
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(140, 128, 128))
for x in range(num_cells):
for y in range(num_cells):
vec = np.asarray([[np.random.normal() * 1.4
for _ in range(latent_space_depth)]])
decoded = decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
def truncate_to_batch(x):
l = x.shape[0]
return x[:l - l % batch_size, :, :, :]
x_train_trunc = truncate_to_batch(x_train_aug)
x_test_trunc = truncate_to_batch(x_test_aug)
x_train_trunc.shape, x_test_trunc.shape
var_auto_encoder.fit(x_train_trunc, x_train_trunc, verbose=1,
batch_size=batch_size, epochs=100,
validation_data=(x_test_trunc, x_test_trunc))
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(140, 128, 128))
for x in range(num_cells):
for y in range(num_cells):
vec = np.asarray([[np.random.normal() * 1.2
for _ in range(latent_space_depth)]])
decoded = decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(140, 128, 128))
for x in range(num_cells):
for y in range(num_cells):
vec = np.asarray([[ - (i % 2) * (x - 4.5) / 3 + ((i + 1) % 2) * (y - 4.5) / 3
for i in range(latent_space_depth)]])
decoded = decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
vec = np.asarray([[np.random.normal()
for _ in range(latent_space_depth)]])
vec.shape
vec
np.asarray([[np.random.normal()
for _ in range(latent_space_depth)]])
```
---
# Lite-HRNet Inference Notebook
#### Code has been hidden for brevity. Use the button below to show/hide code
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Show/Hide Code"></form>''')
%matplotlib inline
import argparse
import os
import torch
import mmcv
import json
import numpy as np
import cv2
import ipywidgets as widgets
from PIL import Image, ImageFont, ImageDraw
from matplotlib import pyplot as plt
from mmcv import Config
from mmcv.cnn import fuse_conv_bn
from mmcv.runner import load_checkpoint
from models import build_posenet
from IPython.display import clear_output
from ipywidgets import HBox, Label, Layout
from IPython.display import Javascript, display
from ipywidgets import widgets
def run_cells_below(b):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1, IPython.notebook.get_selected_index()+2)'))
VALID_IMG_TYPES = ['jpg','jpeg', 'png']
txt_input_img = widgets.Text(
value= "../data/ESTEC_Images/image_45.png",
placeholder="Input image path",
layout= Layout(width='60%')
)
HBox([Label('Input image path:'),txt_input_img])
#../data/Envisat+IC1/val2/2000-2999/frame2005.jpg
txt_prefix = widgets.Text(
value= "image_",
placeholder="Bounding Box data file",
layout= Layout(width='60%')
)
HBox([Label('Filename prefix:'),txt_prefix])
txt_bbox = widgets.Text(
value= "../data/ESTEC_Images/bbox.txt",
placeholder="Bounding Box data file",
layout= Layout(width='60%')
)
HBox([Label('Bounding box data file:'),txt_bbox])
#../data/Envisat+IC1/val2.json
img_path = os.path.abspath(txt_input_img.value)
bbox_path = os.path.abspath(txt_bbox.value)
inputs= {}
img_name = img_path.split("/")[-1]
img_id= int((img_name.split(txt_prefix.value)[-1]).split('.')[0])
button = widgets.Button(description="Load Input Image and Data")
output1 = widgets.Output()
output2 = widgets.Output()
display(button, output1, output2)
def load_image_gt(b):
## LOAD IMAGE
with output1:
print("Loading... This might take a few seconds depending on the image size.")
with output2:
input_img = Image.open(img_path).convert('RGB')
inputs['image'] = input_img
## LOAD DATA
data_ext = bbox_path.split('.')[-1]
if data_ext.lower() == 'txt':
bbox_f = open(bbox_path,'r')
raw_data = bbox_f.readlines()
this_idx = img_id-14 ## TODO: This is very bad
xyxy = [int(float(v)) for v in (raw_data[this_idx]).split(' ')]
#relax margins for pure bbox in real images
ext_scale = 0.10 # percent of bbox h/w
w = abs(xyxy[2]-xyxy[0])
h = abs(xyxy[3]-xyxy[1])
inputs['bbox'] = [int(xyxy[0]+ (w*ext_scale)), int(xyxy[1]+ (h*ext_scale)), int(xyxy[2]- (w*ext_scale)), int(xyxy[3]-(h*ext_scale))]
box_thickness = 10
elif data_ext.lower() =='json':
with open(bbox_path, 'r') as jf:
coco_data = json.load(jf)
coco_anns_idx = {d["id"]:j for j,d in enumerate(coco_data['images'])}
this_ann = coco_data['annotations'][coco_anns_idx[img_id]]
if this_ann['image_id'] == img_id:
xyxy = this_ann['bbox']
inputs['bbox']= [int(xyxy[0]), int(xyxy[1]), int(xyxy[0]+xyxy[2]), int(xyxy[1]+xyxy[3])]
box_thickness = 2
else:
raise Exception("Image ID did not match between annotation and query. There might be a problem with the index map in \"coco_anns_idx\"")
else:
raise Exception("Error: Data File Format Invalid")
## Visualize GT image/bbox
vis_img = np.array(input_img)
xyxy = inputs['bbox']
cv2.rectangle(vis_img, (xyxy[0], xyxy[1]), (xyxy[2], xyxy[3]), (255,255,255), box_thickness)
plt.figure(figsize=(10,10))
plt.imshow(vis_img)
plt.show()
# RESET CAPTION
with output1:
clear_output()
print(f"Input Image: {img_name}; ID: {img_id}; Bounding box: {xyxy}")
button.on_click(load_image_gt)
```
### Histogram Equalization with CLAHE
```
orig_img = cv2.imread(img_path)
np_img = cv2.cvtColor(orig_img, cv2.COLOR_BGR2GRAY)  # cv2.imread returns BGR, not RGB
clahe = cv2.createCLAHE(clipLimit=5,tileGridSize=(10,10))
equalized_clahe= clahe.apply(np_img)
equalized_clahe = cv2.cvtColor(equalized_clahe,cv2.COLOR_GRAY2RGB)
fig_hist = plt.figure(figsize=(16,16))
#Original Image
ax221 = plt.subplot(221)
ax221.title.set_text('Original Image')
plt.imshow(cv2.cvtColor(orig_img, cv2.COLOR_BGR2RGB))  # convert BGR to RGB for matplotlib
#CLAHE boosted Image
ax222 = plt.subplot(222)
ax222.title.set_text('post-CLAHE Image')
plt.imshow(equalized_clahe)
#Original Histogram
ax223 =plt.subplot(223)
ax223.title.set_text("Original Histogram")
plt.hist(np_img.flatten(),256,[0,256], color = 'r')
plt.xlim([0,256])
plt.ylim([-1e6, 1.5e7])
#Histogram after CLAHE
ax224 =plt.subplot(224)
ax224.title.set_text("post-CLAHE Histogram")
plt.hist(equalized_clahe.flatten(),256,[0,256], color = 'r')
plt.xlim([0,256])
plt.ylim([-1e6, 1.5e7])
plt.show()
inputs['image'] = equalized_clahe
#inputs['image'] = orig_img
```
-------------------
# Inference
## Initialize Network Model
```
txt_config = widgets.Text(
value= "augmentations_litehrnet_18_coco_256x256_Envisat+IC",
placeholder="Config",
layout= Layout(width='90%')
)
HBox([Label('Config Name:'),txt_config])
config_name = txt_config.value
config_path = f"configs/top_down/lite_hrnet/Envisat/{config_name}.py"
ckpt_path = f"work_dirs/{config_name}/best.pth"
```
### Overwrite mmpose library methods to enable inference on sat images
```
from mmpose.apis.inference import LoadImage,_box2cs
from mmpose.datasets.pipelines import Compose
import mmpose.apis.inference as inference_module
from mmcv.parallel import collate, scatter
def new_inference_single_pose_model(model,
img_or_path,
bbox,
dataset,
return_heatmap=False):
"""Inference a single bbox.
num_keypoints: K
Args:
model (nn.Module): The loaded pose model.
img_or_path (str | np.ndarray): Image filename or loaded image.
bbox (list | np.ndarray): Bounding boxes (with scores),
shaped (4, ) or (5, ). (left, top, width, height, [score])
dataset (str): Dataset name.
outputs (list[str] | tuple[str]): Names of layers whose output is
to be returned, default: None
Returns:
ndarray[Kx3]: Predicted pose x, y, score.
heatmap[N, K, H, W]: Model output heatmap.
"""
cfg = model.cfg
device = next(model.parameters()).device
# build the data pipeline
channel_order = cfg.test_pipeline[0].get('channel_order', 'rgb')
test_pipeline = [LoadImage(channel_order=channel_order)
] + cfg.test_pipeline[1:]
test_pipeline = Compose(test_pipeline)
assert len(bbox) in [4, 5]
center, scale = _box2cs(cfg, bbox)
flip_pairs = None
if dataset == 'TopDownEnvisatCocoDataset':
flip_pairs = []
else:
raise NotImplementedError()
# prepare data
data = {
'img_or_path':
img_or_path,
'center':
center,
'scale':
scale,
'bbox_score':
bbox[4] if len(bbox) == 5 else 1,
'dataset':
dataset,
'joints_3d':
np.zeros((cfg.data_cfg.num_joints, 3), dtype=np.float32),
'joints_3d_visible':
np.zeros((cfg.data_cfg.num_joints, 3), dtype=np.float32),
'rotation':
0,
'ann_info': {
'image_size': cfg.data_cfg['image_size'],
'num_joints': cfg.data_cfg['num_joints'],
'flip_pairs': flip_pairs
}
}
data = test_pipeline(data)
data = collate([data], samples_per_gpu=1)
if next(model.parameters()).is_cuda:
# scatter to specified GPU
data = scatter(data, [device])[0]
else:
# just get the actual data from DataContainer
data['img_metas'] = data['img_metas'].data[0]
# forward the model
with torch.no_grad():
result = model(
img=data['img'],
img_metas=data['img_metas'],
return_loss=False,
return_heatmap=return_heatmap)
return result['preds'][0], result['output_heatmap']
```
### Infer
```
#Overwrite mmpose module function for COCO-sat inference
inference_module._inference_single_pose_model = new_inference_single_pose_model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
inputs['name'] = img_name
inputs['id'] = img_id
img_data_list_dict = [inputs]
cfg = Config.fromfile(config_path)
model = build_posenet(cfg.model)
load_checkpoint(model, ckpt_path, map_location='cpu')
model = inference_module.init_pose_model(config_path, ckpt_path, device=device)
#gray_img = cv2.cvtColor(np.array(Image.open(img_path)), cv2.COLOR_RGB2GRAY)
#input_img = np.stack((gray_img,)*3, axis=-1)
results, heatmaps= inference_module.inference_top_down_pose_model(model, inputs['image'], img_data_list_dict, return_heatmap=True, format='xyxy', dataset='TopDownEnvisatCocoDataset')
```
### Single image inference Result
```
hms =heatmaps[0]['heatmap']
result = results[0]
keypoints = ([np.array([v[0],v[1]]) for v in result['keypoints']])
#print(f"Heatmaps array: {np.shape(hms)}")
#print(f"Result: \n Type : {type(result)}\n \n Keypoints: {keypoints}")
import matplotlib.pyplot as plt
plt.figure(figsize=(12,12))
plt.scatter(*zip(*keypoints))
plt.imshow(result['image'])
plt.show()
```
### Heatmaps
```
n_hms = np.shape(hms)[1]
f, axarr = plt.subplots(3, 4, figsize=(15,15))
this_col=0
for idx in range(n_hms):
this_hm = hms[0,idx,:,:]
row = idx % 4
this_ax = axarr[this_col, row]
this_ax.set_title(f'{idx}')
hm_display = this_ax.imshow(this_hm, cmap='jet', vmin=0, vmax=1)
if row == 3:
this_col += 1
cb=f.colorbar(hm_display, ax=axarr)
import random
random.randint(1,10)
import torchvision.transforms as T
pil_img = Image.open(img_path)
transform = T.Compose([T.ToTensor(),T.RandomErasing(scale=[0.01,0.01]),T.ToPILImage()])
imgout = transform(pil_img)
plt.imshow(imgout)
type(model)
for idx, m in enumerate(model.backbone.stage1.modules()):
print(idx,'-->', type(m))
len(list(model.backbone.stage1.modules()))  # modules() is a generator, so materialize it before len()
```
---
### Make figure 1
```
import os
import sys
sys.path.append("../") # go to parent dir
import glob
import time
import pathlib
import logging
import numpy as np
from scipy.sparse import linalg as spla
from dedalus.tools.config import config
from simple_sphere import SimpleSphere, TensorField, TensorSystem
import equations
import matplotlib.pyplot as plt
%matplotlib inline
import cartopy.crs as ccrs
from dedalus.extras import plot_tools
from mpl_toolkits import mplot3d
%config InlineBackend.figure_format = 'png'
plt.rc('text', usetex=True)
#add path to data folder
input_folder = "/Users/Rohit/Documents/research/active_matter_spheres/scripts/data/sphere3"
output_folder = "/Users/Rohit/Documents/research/active_matter_spheres/scripts/garbage"
dpi=300
ind = 500
with np.load(os.path.join(input_folder, 'output_%i.npz' %(ind))) as file:
phi = file['phi']
theta = file['theta']
L_max = len(theta)-1
S_max = 4
om = file['om']
time = file['t'][0]
pos1 = [0.1, 0.1, 0.25, 0.8]
pos2 = [0.4, 0.1, 0.25, 0.8]
pos3 = [0.7, 0.1, 0.25, 0.8]
fig = plt.figure(figsize=(3,1), dpi=dpi, tight_layout=True)
plt.rc('font', size=5)
###################
proj = ccrs.Orthographic(central_longitude=0, central_latitude=0)
ax1 = plt.axes(pos1, projection=proj)
lon = phi * 180 / np.pi
lat = (np.pi/2 - theta) * 180 / np.pi
xmesh, ymesh = plot_tools.quad_mesh(lon, lat)
ax1.pcolormesh(xmesh, ymesh, om.T, cmap='jet', transform=ccrs.PlateCarree())
fig.set_facecolor((0, 0, 0))
###################
proj = ccrs.Orthographic(central_longitude=0, central_latitude=0)
ax2 = plt.axes(pos2, projection=proj)
lon = phi * 180 / np.pi
lat = (np.pi/2 - theta) * 180 / np.pi
xmesh, ymesh = plot_tools.quad_mesh(lon, lat)
ax2.pcolormesh(xmesh, ymesh, om.T, cmap='RdBu_r', transform=ccrs.PlateCarree())
###################
proj = ccrs.Orthographic(central_longitude=0, central_latitude=0)
ax3 = plt.axes(pos3, projection=proj)
lon = phi * 180 / np.pi
lat = (np.pi/2 - theta) * 180 / np.pi
xmesh, ymesh = plot_tools.quad_mesh(lon, lat)
ax3.pcolormesh(xmesh, ymesh, om.T, cmap='RdBu_r', transform=ccrs.PlateCarree())
from mayavi import mlab
mlab.init_notebook()
import numpy as np
from scipy.special import sph_harm
# Create a sphere
r = 0.3
pi = np.pi
cos = np.cos
sin = np.sin
phi, theta = np.mgrid[0:pi:101j, 0:2 * pi:101j]
x = r * sin(phi) * cos(theta)
y = r * sin(phi) * sin(theta)
z = r * cos(phi)
s = sph_harm(0, 10, theta, phi).real
mlab.figure(1, bgcolor=(0.5, 0.5, 0.5), fgcolor=(1, 1, 1), size=(400, 300))
mlab.clf()
m = mlab.mesh(x, y, z, scalars=s, colormap='jet')
m = mlab.mesh(x, y+0.8, z, scalars=s, colormap='jet')
m = mlab.mesh(x, y+1.6, z, scalars=s, colormap='jet')
mlab.view(0, 90, distance=3)
m
#mlab.show()
# Represent spherical harmonics on the surface of the sphere
#for n in range(1, 1):
# for m in range(n):
# s = sph_harm(m, n, theta, phi).real
# mlab.mesh(x - m, y - n, z, scalars=s, colormap='jet')
# s[s < 0] *= 0.97
# s /= s.max()
# mlab.mesh(s * x - m, s * y - n, s * z + 1.3,
# scalars=s, colormap='Spectral')
#mlab.view(90, 70, 6.2, (-1.3, -2.9, 0.25))
#mlab.show()
```
---
```
# this mounts your Google Drive to the Colab VM.
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# enter the foldername in your Drive where you have saved the unzipped
# assignment folder, e.g. 'cs231n/assignments/assignment3/'
FOLDERNAME = 'cs231n/assignment2'
assert FOLDERNAME is not None, "[!] Enter the foldername."
# now that we've mounted your Drive, this ensures that
# the Python interpreter of the Colab VM can load
# python files from within it.
import sys
sys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))
# this downloads the CIFAR-10 dataset to your Drive
# if it doesn't already exist.
%cd drive/My\ Drive/$FOLDERNAME/cs231n/datasets/
!bash get_datasets.sh
%cd /content
```
# Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a `forward` and a `backward` function. The `forward` function will receive inputs, weights, and other parameters and will return both an output and a `cache` object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
""" Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the `cache` object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive dout (derivative of loss with respect to outputs) and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
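To make the template concrete, here is a hypothetical elementwise-multiply layer written in the same style (`mul_forward`/`mul_backward` are illustrative names, not part of the assignment code):

```python
import numpy as np

def mul_forward(x, w):
    """Forward pass: elementwise product, caching inputs for backprop."""
    out = x * w
    cache = (x, w)
    return out, cache

def mul_backward(dout, cache):
    """Backward pass: chain rule through out = x * w."""
    x, w = cache
    dx = dout * w  # d(x*w)/dx = w
    dw = dout * x  # d(x*w)/dw = x
    return dx, dw

x = np.random.randn(3, 4)
w = np.random.randn(3, 4)
out, cache = mul_forward(x, w)
dx, dw = mul_backward(np.ones_like(out), cache)
# With an upstream gradient of ones, dx equals w and dw equals x.
print(np.allclose(dx, w), np.allclose(dw, x))  # True True
```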
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch/Layer Normalization as a tool to more efficiently optimize deep networks.
```
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
```
# Affine layer: forward
Open the file `cs231n/layers.py` and implement the `affine_forward` function.
Once you are done you can test your implementation by running the following:
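If you get stuck, one common shape for the implementation is a single reshape followed by a matrix multiply (a sketch for self-checking, not necessarily the reference solution):

```python
import numpy as np

def affine_forward_sketch(x, w, b):
    # Flatten each of the N inputs to a row of length D = d_1 * ... * d_k,
    # then apply the affine transform: out = x_flat @ w + b.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)
    return out, cache

x = np.linspace(-0.1, 0.5, num=240).reshape(2, 4, 5, 6)
w = np.linspace(-0.2, 0.3, num=360).reshape(120, 3)
b = np.linspace(-0.3, 0.1, num=3)
out, _ = affine_forward_sketch(x, w, b)
print(out.shape)  # (2, 3)
```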
```
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around e-9 or less.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
```
# Affine layer: backward
Now implement the `affine_backward` function and test your implementation using numeric gradient checking.
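A sketch of the usual derivation (reshape back for `dx`, transpose-multiplies for `dw`, column sums for `db`), useful for sanity-checking your own version:

```python
import numpy as np

def affine_backward_sketch(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)  # back through the matrix multiply
    dw = x.reshape(N, -1).T.dot(dout)    # gradient w.r.t. the weights
    db = dout.sum(axis=0)                # bias receives the column sums
    return dx, dw, db

x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx, dw, db = affine_backward_sketch(dout, (x, w, b))
print(dx.shape, dw.shape, db.shape)  # (10, 2, 3) (6, 5) (5,)
```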
```
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around e-10 or less
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# ReLU activation: forward
Implement the forward pass for the ReLU activation function in the `relu_forward` function and test your implementation using the following:
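The forward pass is essentially a one-liner with `np.maximum` (a sketch under the same conventions as the other layers):

```python
import numpy as np

def relu_forward_sketch(x):
    out = np.maximum(0, x)  # elementwise max with zero
    cache = x               # the raw input is all the backward pass needs
    return out, cache

x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward_sketch(x)
print(out[0])  # the first row is all negative inputs, so all zeros
```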
```
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be on the order of e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
```
# ReLU activation: backward
Now implement the backward pass for the ReLU activation function in the `relu_backward` function and test your implementation using numeric gradient checking:
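The backward pass just masks the upstream gradient wherever the cached input was non-positive (again a sketch, not the graded code):

```python
import numpy as np

def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)  # pass the gradient only where the input was positive
    return dx

x = np.array([[-1.0, 2.0], [3.0, -4.0]])
dx = relu_backward_sketch(np.ones_like(x), x)
print(dx)  # [[0. 1.] [1. 0.]]
```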
```
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be on the order of e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
```
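Both ReLU passes fit in a couple of lines. Again, this is a hedged reference sketch rather than the graded code in `cs231n/layers.py`:

```python
import numpy as np

def relu_forward_sketch(x):
    # Element-wise max(0, x); the input itself is all the backward pass needs
    out = np.maximum(0, x)
    return out, x

def relu_backward_sketch(dout, cache):
    # The upstream gradient flows through only where the input was positive
    x = cache
    return dout * (x > 0)
```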
## Inline Question 1:
We've only asked you to implement ReLU, but there are a number of different activation functions that one could use in neural networks, each with its pros and cons. In particular, an issue commonly seen with activation functions is getting zero (or close to zero) gradient flow during backpropagation. Which of the following activation functions have this problem? If you consider these functions in the one dimensional case, what types of input would lead to this behaviour?
1. Sigmoid
2. ReLU
3. Leaky ReLU
## Answer:
1. The sigmoid function saturates quickly, so vanishing gradients occur for inputs of large magnitude, whether very positive or very negative.
2. ReLU has zero gradient for any negative input, so negative inputs cause vanishing gradients.
3. Leaky ReLU largely avoids the issue because it has a nonzero slope on both sides of zero, so some gradient always flows.
# "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file `cs231n/layer_utils.py`.
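Composition is mechanical: the forward pass chains the two layers and keeps both caches, and the backward pass unwinds them in reverse order. A hedged, self-contained sketch of the idea (with the affine and ReLU math inlined; the `_sketch` names are ours):

```python
import numpy as np

def affine_relu_forward_sketch(x, w, b):
    # Affine transform, then ReLU; cache what each backward step will need
    x_flat = x.reshape(x.shape[0], -1)
    a = x_flat.dot(w) + b        # affine output (pre-activation)
    out = np.maximum(0, a)       # ReLU
    return out, (x, w, a)

def affine_relu_backward_sketch(dout, cache):
    # Unwind in reverse order: first through ReLU, then through the affine layer
    x, w, a = cache
    da = dout * (a > 0)
    x_flat = x.reshape(x.shape[0], -1)
    dx = da.dot(w.T).reshape(x.shape)
    dw = x_flat.T.dot(da)
    db = da.sum(axis=0)
    return dx, dw, db
```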
For now take a look at the `affine_relu_forward` and `affine_relu_backward` functions, and run the following to numerically gradient check the backward pass:
```
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
# Relative error should be around e-10 or less
print('Testing affine_relu_forward and affine_relu_backward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in `cs231n/layers.py`.
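As a reminder of what the provided code computes, a numerically stable softmax loss (shift the scores by each row's maximum before exponentiating) can be sketched as follows. This is a hedged sketch and not necessarily identical to the file's implementation:

```python
import numpy as np

def softmax_loss_sketch(x, y):
    # x: (N, C) class scores; y: (N,) integer labels
    shifted = x - x.max(axis=1, keepdims=True)   # stability: subtract the row max
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    N = x.shape[0]
    loss = -log_probs[np.arange(N), y].mean()
    dx = np.exp(log_probs)                       # softmax probabilities
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx
```

With near-zero scores every class is about equally likely, so the loss should be close to log(C) — the same sanity check the cell below applies (about 2.3 for 10 classes).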
You can make sure that the implementations are correct by running the following:
```
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be around the order of e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be close to 2.3 and dx error should be around e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
```
# Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two-layer network using these modular implementations.
Open the file `cs231n/classifiers/fc_net.py` and complete the implementation of the `TwoLayerNet` class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
```
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
# Errors should be around e-7 or less
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
```
# Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file `cs231n/solver.py` and read through it to familiarize yourself with the API. After doing so, use a `Solver` instance to train a `TwoLayerNet` that achieves at least `50%` accuracy on the validation set.
```
model = TwoLayerNet()
solver = Solver(model, data, update_rule='sgd',
optim_config={
'learning_rate': 5e-4,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=200)
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
solver.train()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
```
# Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the `FullyConnectedNet` class in the file `cs231n/classifiers/fc_net.py`.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch/layer normalization; we will add those features soon.
## Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-7 or less.
```
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
# Most of the errors should be on the order of e-7 or smaller.
# NOTE: It is fine however to see an error for W2 on the order of e-5
# for the check when reg = 0.0
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
```
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. In the following cell, tweak the **learning rate** and **weight initialization scale** to overfit and achieve 100% training accuracy within 20 epochs.
```
# TODO: Use a three-layer Net to overfit 50 training examples by
# tweaking just the learning rate and initialization scale.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-1 # Experiment with this!
learning_rate = 1e-3 # Experiment with this!
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
```
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again, you will have to adjust the learning rate and weight initialization scale, but you should be able to achieve 100% training accuracy within 20 epochs.
```
# TODO: Use a five-layer Net to overfit 50 training examples by
# tweaking just the learning rate and initialization scale.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3 # Experiment with this!
weight_scale = 1e-1 # Experiment with this!
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
```
## Inline Question 2:
Did you notice anything about the comparative difficulty of training the three-layer net vs. training the five-layer net? In particular, based on your experience, which network seemed more sensitive to the initialization scale? Why do you think that is the case?
## Answer:
The five-layer net was noticeably more sensitive to the initialization scale. A deeper network can represent much more complex functions, so its loss surface contains many more local optima; in addition, each extra layer rescales the activations and gradients by another weight matrix, so a poor weight scale compounds with depth. As a result, convergence varies far more with the initialization than it does for the three-layer net.
# Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
# SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent. See the Momentum Update section at http://cs231n.github.io/neural-networks-3/#sgd for more information.
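The classic formulation keeps a velocity vector that is a decaying running sum of past gradients. A hedged sketch, mimicking the `config`-dict state that the assignment's update rules use (the default `momentum=0.9` is an assumption on our part):

```python
import numpy as np

def sgd_momentum_sketch(w, dw, config=None):
    # v <- momentum * v - learning_rate * dw;  w <- w + v
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    config['velocity'] = v
    return w + v, config
```

Because the velocity averages over recent gradients, updates along consistently downhill directions accelerate while oscillations across steep ravines are damped.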
Open the file `cs231n/optim.py` and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function `sgd_momentum` and run the following to check your implementation. You should see errors less than e-8.
```
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
# Should see relative errors around e-8 or less
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
```
Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
```
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 5e-3,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label="loss_%s" % update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label="train_acc_%s" % update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label="val_acc_%s" % update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file `cs231n/optim.py`, implement the RMSProp update rule in the `rmsprop` function and implement the Adam update rule in the `adam` function, and check your implementations using the tests below.
**NOTE:** Please implement the _complete_ Adam update rule (with the bias correction mechanism), not the first simplified version mentioned in the course notes.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
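Hedged sketches of the two rules in their standard forms, with state kept in a `config` dict to mirror the assignment's API (the `_sketch` names and the hyperparameter defaults are our assumptions and may differ from the file's):

```python
import numpy as np

def rmsprop_sketch(w, dw, config=None):
    # Per-parameter step size from a decaying average of squared gradients
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))
    config['cache'] = (config['decay_rate'] * config['cache']
                       + (1 - config['decay_rate']) * dw**2)
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

def adam_sketch(w, dw, config=None):
    # Full Adam: decaying first/second moment averages plus bias correction
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)
    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw**2
    m_hat = config['m'] / (1 - config['beta1'] ** config['t'])  # bias correction
    v_hat = config['v'] / (1 - config['beta2'] ** config['t'])
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + config['epsilon'])
    return next_w, config
```

The bias correction matters early in training: with `m` and `v` initialized to zero, the raw averages are biased toward zero for small `t`, and dividing by `1 - beta**t` undoes that.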
```
# Test RMSProp implementation
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
# You should see relative errors around e-7 or less
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
# You should see relative errors around e-7 or less
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
```
Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
```
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 3:
AdaGrad, like Adam, is a per-parameter optimization method that uses the following update rule:
```
cache += dw**2
w += - learning_rate * dw / (np.sqrt(cache) + eps)
```
John notices that when training a network with AdaGrad, the updates become very small and his network learns slowly. Using your knowledge of the AdaGrad update rule, why do you think the updates become very small? Would Adam have the same issue?
## Answer:
The updates become very small because `cache` is monotonically non-decreasing and sits in the denominator of the update rule, so the effective step size shrinks with every AdaGrad step.
Adam is much less prone to this issue because its second-moment estimate `v` is an exponentially decaying average rather than a running sum, so the denominator tracks recent gradient magnitudes instead of growing without bound.
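The shrinking effect is easy to see numerically: even with a constant gradient, the AdaGrad step decays like 1/sqrt(t) because `cache` only ever grows. A small illustrative sketch:

```python
import numpy as np

# AdaGrad with a constant gradient of 1.0: cache grows by 1 each iteration,
# so the effective step size decays like learning_rate / sqrt(t)
w, cache, lr, eps = 0.0, 0.0, 0.1, 1e-8
steps = []
for _ in range(5):
    dw = 1.0
    cache += dw ** 2
    step = lr * dw / (np.sqrt(cache) + eps)
    w -= step
    steps.append(step)
print(['%.4f' % s for s in steps])  # strictly decreasing step sizes
```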
# Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the `best_model` variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the `BatchNormalization.ipynb` and `Dropout.ipynb` notebooks before completing this part, since those techniques can help you train powerful models.
```
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch/layer normalization and dropout useful. Store your best model in #
# the best_model variable. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
#('X_train: ', (49000, 3, 32, 32))
#('y_train: ', (49000,))
#('X_val: ', (1000, 3, 32, 32))
#('y_val: ', (1000,))
model = FullyConnectedNet([100, 50, 20, 20, 50, 100])
solver = Solver(model, data,
print_every=50, num_epochs=20, batch_size=1000,
update_rule='adam',
optim_config={
'learning_rate': 2e-3,
},
lr_decay=0.95,
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.plot(solver.loss_history, 'o')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.plot(solver.train_acc_history, '-o')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.plot(solver.val_acc_history, '-o')
plt.gcf().set_size_inches(15, 15)
plt.show()
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE #
################################################################################
```
# Test your model!
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
```
best_model = model
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
```
```
import os
def get_filePath_list(file_dir):
'''
Get the list of all files within file_dir, including those under subdirs
: param file_dir
: return the file list
'''
filePath_list = []
for walk in os.walk(file_dir):
part_filePath_list = [os.path.join(walk[0], file) for file in walk[2]]
filePath_list.extend(part_filePath_list)
return filePath_list
def get_files_list(file_dir, postfix='ALL'):
'''
Get the list of all files in file_dir whose postfix is the postfix, including those under subdirs
: param file_dir
: param postfix
: return file list
'''
postfix = postfix.split('.')[-1]
file_list = []
filePath_list = get_filePath_list(file_dir)
if postfix == 'ALL':
file_list = filePath_list
else:
for file in filePath_list:
basename=os.path.basename(file)
postfix_basename = basename.split('.')[-1]
if postfix_basename == postfix:
file_list.append(file)
file_list.sort()
return file_list
import jieba
def segment_files(file_list, segment_out_dir, stopwords=[]):
'''
Segment out all the words from the source documents
: param file_list
: param segment_out_dir
: param stopwords
'''
for i, file in enumerate(file_list):
segment_out_name = os.path.join(segment_out_dir, 'segment_{}.txt'.format(i))
with open(file, 'rb') as fin:
document = fin.read()
document_cut = jieba.cut(document)
sentence_segment=[]
for word in document_cut:
if word not in stopwords:
sentence_segment.append(word)
result = ' '.join(sentence_segment)
result = result.encode('utf-8')
with open(segment_out_name, 'wb') as fout:
fout.write(result)
#source and segment dir
source_folder = './three_kingdoms/source'
segment_folder = './three_kingdoms/segment'
file_list = get_files_list(source_folder, postfix='*.txt')
segment_files(file_list, segment_folder)
from gensim.models import word2vec
import multiprocessing
sentences = word2vec.PathLineSentences(segment_folder)
for i, sentence in enumerate(sentences):
if i < 100:
print(sentence)
print('Train word2vec model with {} CPUs'.format(multiprocessing.cpu_count()))
model = word2vec.Word2Vec(sentences, size=128, window=5, min_count = 5, workers=multiprocessing.cpu_count())
os.makedirs('models', exist_ok=True)  # create the output directory if it does not exist
# save model
model.save('./models/word2Vec.model')
print(model.wv.most_similar('曹操'))
print(model.wv.most_similar(positive=['曹操', '刘备'], negative=['张飞']))
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import mixture, decomposition, manifold, cluster, metrics
from collections import defaultdict
dfs = []
for i in range(2006,2020):
dfs.append(np.load('tables_{}.pkl'.format(i),allow_pickle=True))
pv = defaultdict(list)
poses = defaultdict(list)
vorps = defaultdict(list)
initial_list = set()
for idx,df in enumerate(dfs):
for team in df.values():
for row in team['advanced'].itertuples(): #advanced
if row[3] > 400:#row[2]*row[4] > 400:
v = row[6:-10] #advanced
#v = row[5:-3] #per_poss
pv[row[0]].append(np.array([_ if _!='' else 0 for _ in v])) #-6 # -11
poses[row[0]].append(team['roster'].loc[row[0]]['Pos'])
vorps[row[0]].append(row[-1])
pva = {k: np.array(v).mean(0) for k,v in pv.items() }
t = np.stack(pva.values())
#t = np.stack(sum(pv.values(),[]))
#t = np.nan_to_num(t)
#t = t[:,[0,4,5,9]]
t = (t-t.mean(0))/t.std(0)
print(t.shape)
import umap
pca_fit = decomposition.PCA()
res = pca_fit.fit_transform(t)
res = umap.UMAP().fit_transform(t)
#res = manifold.TSNE(perplexity=15).fit_transform(t)
print(pca_fit.components_[:2])
team['advanced'].columns[5:-10]
plt.scatter(res[:,0],res[:,1],s=4)
for n_clusters in range(2,5):
clusterer = cluster.KMeans(n_clusters=n_clusters,n_init=10 )
cluster_labels = clusterer.fit_predict(t)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = metrics.silhouette_score(t, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
clusterer = cluster.KMeans(n_clusters=3,n_init=10 )
cluster_labels = clusterer.fit_predict(t)
pose_short = np.array([max(set(lst), key=lst.count) for _,lst in poses.items()])
plt.figure(dpi=400)
for i in np.unique(cluster_labels):
plt.scatter(res[np.where(cluster_labels==i),0],res[np.where(cluster_labels==i),1],s=4)
for i,name in enumerate(pva.keys()):
if np.array(vorps[name]).mean() > 2:
print(name,res[i])
plt.text(res[i,0],res[i,1],name.split()[1],ha='center',va='center',size=8)
#plt.xticks([],[])
#plt.yticks([],[])
plt.title('NBA Positions based on per-possession statistics (via UMAP)')
plt.tight_layout()
plt.savefig('pos3.png',facecolor='w',edgecolor='w')
plt.figure(dpi=400)
for i in ['PG','SG','SF','PF','C']:#np.unique(pose_short):
plt.scatter(res[np.where(pose_short==i),0],res[np.where(pose_short==i),1],s=4,label=i)
#for i,name in enumerate(pva.keys()):
# plt.text(res[i,0],res[i,1],name.split()[1],ha='center',va='center',size=3)
plt.legend(scatterpoints=4,markerscale=3)
plt.xticks([],[])
plt.yticks([],[])
plt.title('NBA Positions based on per-possession statistics (via UMAP)')
plt.tight_layout()
plt.savefig('pos2.png',facecolor='w',edgecolor='w')
clusterer.cluster_centers_
team['advanced'].iloc[:,5:-10].iloc[:,[0,4,5,9]]
len(pose_short),t.shape
```
This notebook demonstrates handwritten-digit recognition using one-vs-all logistic regression.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as spio
from scipy import optimize
from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r"c:\windows\fonts\simsun.ttc", size=14)  # avoids garbled CJK characters in plots on Windows
%matplotlib inline
# Display 100 digits
def display_data(imgData):
    '''
    Display 100 digits. Drawing them one by one would be very slow, so we
    arrange the images into a single matrix and display that instead:
    - initialize a 2-D array
    - reshape each row of the data into an image patch and place it in the array
    - display the array
    '''
    sum = 0
    pad = 1
    display_array = -np.ones((pad + 10 * (20 + pad), pad + 10 * (20 + pad)))
    for i in range(10):
        for j in range(10):
            # order="F" reshapes column-major, matching the MATLAB-style data layout
            display_array[pad + i * (20 + pad):pad + i * (20 + pad) + 20,
                          pad + j * (20 + pad):pad + j * (20 + pad) + 20] = \
                imgData[sum, :].reshape(20, 20, order="F")
            sum += 1
    plt.imshow(display_array, cmap='gray')  # show as a grayscale image
    plt.axis('off')
    plt.show()
# Compute theta for each class; return them all stacked in all_theta
def oneVsAll(X, y, num_labels, Lambda):
    m, n = X.shape
    all_theta = np.zeros((n + 1, num_labels))  # one column of theta per class
    X = np.hstack((np.ones((m, 1)), X))        # prepend a column of ones (bias)
    class_y = np.zeros((m, num_labels))        # map y (digits 0-9) to one-hot 0/1 columns
    initial_theta = np.zeros((n + 1, 1))       # initial theta for a single class
    # Build the one-hot label matrix
    for i in range(num_labels):
        class_y[:, i] = np.int32(y == i).reshape(1, -1)  # reshape(1, -1) is required for the assignment
    '''For each class, compute the corresponding theta'''
    for i in range(num_labels):
        # optimize.fmin_cg would also work here
        result = optimize.fmin_bfgs(costFunction, initial_theta, fprime=gradient,
                                    args=(X, class_y[:, i], Lambda))  # gradient-based optimization
        all_theta[:, i] = result.reshape(1, -1)  # store into all_theta
    all_theta = np.transpose(all_theta)
    return all_theta
# Sigmoid function
def sigmoid(Z):
    A = 1.0 / (1.0 + np.exp(-Z))
    return A
# Regularized cost function
def costFunction(initial_theta, X, y, inital_lambda):
    m = len(y)
    h = sigmoid(np.dot(X, initial_theta))  # hypothesis h(z)
    # Regularization starts at j=1 (theta_0 is not penalized), so work on a copy
    theta1 = initial_theta.copy()
    theta1[0] = 0
    temp = np.dot(np.transpose(theta1), theta1)
    J = (-np.dot(np.transpose(y), np.log(h))
         - np.dot(np.transpose(1 - y), np.log(1 - h))
         + temp * inital_lambda / 2) / m  # regularized cost
    return J
# Regularized gradient
def gradient(initial_theta, X, y, inital_lambda):
    m = len(y)
    h = sigmoid(np.dot(X, initial_theta))  # hypothesis h(z)
    theta1 = initial_theta.copy()
    theta1[0] = 0  # theta_0 is not regularized
    grad = np.dot(np.transpose(X), h - y) / m + inital_lambda / m * theta1  # regularized gradient
    return grad
# Prediction
def predict_oneVsAll(all_theta, X):
    m = X.shape[0]
    X = np.hstack((np.ones((m, 1)), X))  # prepend a column of ones (bias)
    h = sigmoid(np.dot(X, np.transpose(all_theta)))  # class probabilities
    '''
    Return, for each row of h, the column index of the maximum value:
    - np.max(h, axis=1) gives each row's maximum (the highest class probability)
    - np.where finds the column holding that maximum (the column index is the predicted digit)
    '''
    p = np.array(np.where(h[0, :] == np.max(h, axis=1)[0]))
    for i in np.arange(1, m):
        t = np.array(np.where(h[i, :] == np.max(h, axis=1)[i]))
        p = np.vstack((p, t))
    return p
data = spio.loadmat("../data/2-logistic_regression/data_digits.mat")
X = data['X']  # each row is one 20x20-pixel digit
y = data['y']
m, n = X.shape
num_labels = 10  # digits 0-9
## Randomly display 100 rows of the data
rand_indices = np.random.randint(0, m, 100)  # 100 random indices in [0, m)
display_data(X[rand_indices, :])  # display 100 digits
Lambda = 0.1  # regularization coefficient
all_theta = oneVsAll(X, y, num_labels, Lambda)  # fit theta for every class
p = predict_oneVsAll(all_theta, X)  # predict
print("Prediction accuracy: %f%%" % np.mean((p == y.reshape(-1, 1)) * 100))
```
# CrashDS
#### Module 4 : Bias Variance
Dataset from ISLR by *James et al.* : `Advertising.csv`
Source: https://www.statlearning.com/resources-first-edition
---
### Essential Libraries
Let us begin by importing the essential Python Libraries.
You may install any library using `conda install <library>`.
Most of the libraries come by default with the Anaconda platform.
> NumPy : Library for Numeric Computations in Python
> Pandas : Library for Data Acquisition and Preparation
> Matplotlib : Low-level library for Data Visualization
> Seaborn : Higher-level library for Data Visualization
```
# Basic Libraries
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt # we only need pyplot
sb.set() # set the default Seaborn style for graphics
```
We will also need the essential Python libraries for (basic) Machine Learning.
Scikit-Learn (`sklearn`) will be our de-facto Machine Learning library in Python.
> `LinearRegression` model from `sklearn.linear_model` : Our main model for Regression
> `train_test_split` method from `sklearn.model_selection` : Random Train-Test splits
> `mean_squared_error` metric from `sklearn.metrics` : Primary performance metric for us
```
# Import essential models and functions from sklearn
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
```
---
## Case Study : Advertising Budget vs Sales
### Import the Dataset
The dataset is in CSV format; hence we use the `read_csv` function from Pandas.
Immediately after importing, take a quick look at the data using the `head` function.
```
# Load the CSV file and check the format
advData = pd.read_csv('Advertising.csv')
advData.head()
```
Check the vital statistics of the dataset using the `type` function and the `shape` attribute.
Check the variables (and their types) in the dataset using the `info()` method.
```
print("Data type : ", type(advData))
print("Data dims : ", advData.shape)
advData.info()
```
### Format the Dataset
Drop the `Unnamed: 0` column as it contributes nothing to the problem.
Rename the other columns for homogeneity in nomenclature and style.
Check the format and vital statistics of the modified dataframe.
```
# Drop the first column (axis = 1) by its name
advData = advData.drop('Unnamed: 0', axis = 1)
# Rename the other columns as per your choice
advData = advData.rename(columns={"TV": "TV", "radio": "RD", "newspaper" : "NP", "sales" : "Sales"})
# Check the modified dataset
advData.info()
```
### Explore Mutual Relationship of Variables
So far, we never considered the *mutual dependence* of the variables. Let us study that for a moment.
```
sb.pairplot(data = advData, height = 2)
# Heatmap of the Correlation Matrix
cormat = advData.corr()
f, axes = plt.subplots(1, 1, figsize=(16, 12))
sb.heatmap(cormat, vmin = -1, vmax = 1, linewidths = 1,
annot = True, fmt = ".2f", annot_kws = {"size": 18}, cmap = "RdBu")
plt.show()
```
### Create a Function for Linear Regression
Let us write a generic function to perform Linear Regression, as before.
Our Predictor variable(s) will be $X$ and the Response variable will be $Y$.
> Regression Model : $y = aX + b$
> Train data : (`X_train`, `y_train`)
> Test data : (`X_test`, `y_test`)
```
def performLinearRegression(X_train, y_train, X_test, y_test):
'''
Function to perform Linear Regression with X_Train, y_train,
and test out the performance of the model on X_Test, y_test.
'''
linreg = LinearRegression() # create the linear regression object
linreg.fit(X_train, y_train) # train the linear regression model
# Predict Response corresponding to Predictors
y_train_pred = linreg.predict(X_train)
y_test_pred = linreg.predict(X_test)
# Plot the Predictions vs the True values
f, axes = plt.subplots(1, 2, figsize=(16, 8))
axes[0].scatter(y_train, y_train_pred, color = "blue")
axes[0].plot(y_train, y_train, 'w-', linewidth = 1)
axes[0].set_xlabel("True values of the Response Variable (Train)")
axes[0].set_ylabel("Predicted values of the Response Variable (Train)")
axes[1].scatter(y_test, y_test_pred, color = "green")
axes[1].plot(y_test, y_test, 'w-', linewidth = 1)
axes[1].set_xlabel("True values of the Response Variable (Test)")
axes[1].set_ylabel("Predicted values of the Response Variable (Test)")
plt.show()
# Check the Goodness of Fit (on Train Data)
print("Goodness of Fit of Model \tTrain Dataset")
print("Explained Variance (R^2) \t", linreg.score(X_train, y_train))
print("Mean Squared Error (MSE) \t", mean_squared_error(y_train, y_train_pred))
print()
# Check the Goodness of Fit (on Test Data)
print("Goodness of Fit of Model \tTest Dataset")
print("Mean Squared Error (MSE) \t", mean_squared_error(y_test, y_test_pred))
print()
```
### Fit all possible Linear Models
There are 3 Predictors in this dataset, allowing us to create 7 different Regression models.
To compare various models, it's always a good strategy to use the same Train-Test split.
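The seven predictor subsets used below need not be typed out by hand; a small sketch (variable names are ours) that enumerates every non-empty subset of the three predictors:

```python
from itertools import combinations

predictors = ["TV", "RD", "NP"]

# Every non-empty subset of the 3 predictors gives one model: 2^3 - 1 = 7 in total
predSubsets = [list(c)
               for r in range(1, len(predictors) + 1)
               for c in combinations(predictors, r)]

print(len(predSubsets))  # 7
print(predSubsets)
```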
```
# Extract Response and Predictors
y = pd.DataFrame(advData["Sales"])
X = pd.DataFrame(advData[["TV", "RD", "NP"]])
# Split the Dataset into Train and Test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
# Fit all Linear Models with Train-Test
print("-------------------------------------------------------------------")
print("Regression : Sales vs TV")
performLinearRegression(X_train[['TV']], y_train, X_test[['TV']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs Radio")
performLinearRegression(X_train[['RD']], y_train, X_test[['RD']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs Newspaper")
performLinearRegression(X_train[['NP']], y_train, X_test[['NP']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs TV, Radio")
performLinearRegression(X_train[['TV', 'RD']], y_train, X_test[['TV', 'RD']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs Radio, Newspaper")
performLinearRegression(X_train[['RD', 'NP']], y_train, X_test[['RD', 'NP']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs TV, Newspaper")
performLinearRegression(X_train[['TV', 'NP']], y_train, X_test[['TV', 'NP']], y_test)
print()
print("-------------------------------------------------------------------")
print("Regression : Sales vs TV, Radio, Newspaper")
performLinearRegression(X_train[['TV', 'RD', 'NP']], y_train, X_test[['TV', 'RD', 'NP']], y_test)
print()
print("-------------------------------------------------------------------")
```
---
### Benchmark Linear Regression Models
Create a generic function that returns the performance figures instead of printing/plotting.
```
def genericLinearRegression(X_train, y_train, X_test, y_test):
'''
Function to perform Linear Regression with X_Train, y_train,
and test out the performance of the model on X_Test, y_test.
'''
# Create and Train the Linear Regression Model
linreg = LinearRegression()
linreg.fit(X_train, y_train)
# Predict Response corresponding to Predictors
y_train_pred = linreg.predict(X_train)
y_test_pred = linreg.predict(X_test)
# Return the Mean-Squared-Errors for Train-Test
return mean_squared_error(y_train, y_train_pred), mean_squared_error(y_test, y_test_pred)
```
Extract the Response variable and create all possible Subset of Predictors.
```
# Extract Response and Predictors
y = pd.DataFrame(advData["Sales"])
X = pd.DataFrame(advData[["TV", "RD", "NP"]])
# Predictors for the Linear Models
predSubsets = [['TV'], ['RD'], ['NP'],
['TV', 'RD'], ['RD', 'NP'], ['TV', 'NP'],
['TV', 'RD', 'NP']]
```
Run the generic Linear Regression function on each Subset of Predictors multiple times.
```
# Storage for Performance Figures
performanceDict = dict()
# Random Trials
numTrial = 100
for trial in range(numTrial):
# Create a blank list for the Trial Row
performanceList = list()
# Split the Dataset into Train and Test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
# Fit all Linear Models with Train-Test
for index, preds in enumerate(predSubsets):
mseTrain, mseTest = genericLinearRegression(X_train[preds], y_train, X_test[preds], y_test)
performanceList.extend([mseTrain, mseTest])
# Append the Trial Row to the Dictionary
performanceDict[trial] = performanceList
# Convert the Dictionary to a DataFrame
performanceData = pd.DataFrame.from_dict(data = performanceDict, orient = 'index',
columns = ['Ttrain', 'Ttest', 'Rtrain', 'Rtest', 'Ntrain', 'Ntest',
'TRtrain', 'TRtest', 'RNtrain', 'RNtest', 'TNtrain', 'TNtest',
'TRNtrain', 'TRNtest'])
```
Check the Performance Figures of the 7 different Linear Regression Models.
```
performanceData.describe()
performanceData.boxplot(figsize=(16, 8), fontsize = 18, rot = 45)
```
### Interpretation
* How do you interpret the boxplots in the figure above?
* Which model do you think is the best out of all the above?
* Why do you think the model is best? Justify carefully.
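To ground these questions, it can help to summarize the trials numerically as well as visually. A sketch using a small synthetic stand-in for the `performanceData` frame built above (the column names follow the notebook's convention, but the numbers are invented for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the performanceData frame built above:
# rows are random trials, columns are <model>train / <model>test MSEs.
rng = np.random.default_rng(42)
performanceData = pd.DataFrame({
    "Ttrain": rng.normal(10.5, 1.0, 100), "Ttest": rng.normal(11.0, 2.0, 100),
    "TRNtrain": rng.normal(2.7, 0.3, 100), "TRNtest": rng.normal(2.9, 0.6, 100),
})

# Mean MSE per column summarizes the boxplots numerically
print(performanceData.mean().round(2))

# The model with the lowest mean *test* MSE generalizes best
test_cols = [c for c in performanceData.columns if c.endswith("test")]
best = performanceData[test_cols].mean().idxmin()
print("Lowest mean test MSE:", best)
```

The same two lines run on the real `performanceData` frame would identify the best-generalizing model among all seven.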
| github_jupyter |
## Train GPT on addition
Train a GPT model on a dedicated addition dataset to see if a Transformer can learn to add.
```
# set up logging
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# make deterministic
from mingpt.utils import set_seed
set_seed(42)
import numpy as np
import torch
import string
import os
from tqdm import tqdm
import torch.nn as nn
from torch.nn import functional as F
from mingpt.md import MemData
from mingpt.marker_dataset import MarkerDataset
from mingpt.math_dataset import MathDataset
%load_ext autoreload
%autoreload 2
# create a dataset
easy = 'run/numbers__place_value.txt'
medium = 'run/numbers__is_prime.txt'
hard = 'run/numbers__list_prime_factors.txt'
!rm -rf run
!cp -r data run
memory_slots = 7
MD = MemData(memory_slots)
MD.initiate_mem_slot_data(hard)
# create a dataset
easy_test = 'run/test_numbers__place_value.txt'
medium_test = 'run/test_numbers__is_prime.txt'
hard_test = 'run/test_numbers__list_prime_factors.txt'
easy_train = 'run/train_buffer_numbers__place_value.txt'
medium_train = 'run/train_buffer_numbers__is_prime.txt'
hard_train = 'run/train_buffer_numbers__list_prime_factors.txt'
train_dataset = MathDataset(fname=hard_train, MD=MD)
test_dataset = MathDataset(fname=hard_test, MD=MD)
MD.block_size
train_dataset[1000]
from mingpt.model import GPT, GPTConfig, GPT1Config
# initialize a baby GPT model
mconf = GPTConfig(MD.vocab_size, MD.block_size,
n_layer=2, n_head=4, n_embd=128)
model = GPT(mconf)
from mingpt.trainer import Trainer, TrainerConfig
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=1, batch_size=512, learning_rate=6e-4,
lr_decay=True, warmup_tokens=1024, final_tokens=50*len(train_dataset)*(14+1),
num_workers=0)
trainer = Trainer(model, train_dataset, test_dataset, tconf)
trainer.train()
trainer.save_checkpoint()
# now let's give the trained model an addition exam
from torch.utils.data.dataloader import DataLoader
from mingpt.examiner import Examiner
examiner = Examiner(MD)
examiner.exam(hard_train, train_dataset, trainer)
# training set: how well did we memorize?
examples = give_exam(train_dataset, batch_size=1, max_batches=-1)
print("Q: %s\nX: %s\nO: %s\n" % (examples[0][0], examples[0][1], examples[0][2]))
for item in examples:
    print("Question:", item[0])
    print("X:", item[1])
    print("Out:", item[2])
# test set: how well did we generalize?
give_exam(test_dataset, batch_size=1024, max_batches=1)
# well that's amusing... our model learned everything except 55 + 45
import itertools as it
f = ['-1', '-1', '2', '1', '1']
# takewhile is lazy, so materialize the prefix (items before the first '2') with list()
list(it.takewhile(lambda x: x != '2', f))
f
```
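A note on the `itertools.takewhile` scratch cell above: `takewhile` is lazy, so its result must be materialized (for instance with `list`) before anything is displayed, and it never mutates its input. A self-contained sketch:

```python
import itertools as it

f = ['-1', '-1', '2', '1', '1']

# takewhile yields items until the predicate first fails (here, at '2')
prefix = list(it.takewhile(lambda x: x != '2', f))
print(prefix)  # ['-1', '-1']

# the source list is left unchanged
print(f)  # ['-1', '-1', '2', '1', '1']
```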
| github_jupyter |
## Title :
Pooling Mechanics
## Description :
The aim of this exercise is to understand the tensorflow.keras implementation of:
1. Max Pooling
2. Average Pooling
<img src="../fig/fig1.png" style="width: 500px;">
## Instructions:
- First, implement `Max Pooling` by building a model with a single MaxPooling2D layer. Print the output of this layer by using `model.predict()` to show the output.
- Next, implement `Average Pooling` by building a model with a single AvgPooling2D layer. Print the output of this layer by using `model.predict()` to show the output.
## Hints:
<a href="https://keras.io/api/layers/pooling_layers/max_pooling2d/" target="_blank">tf.keras.layers.MaxPooling2D()</a>
Max pooling operation for 2D spatial data.
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D" target="_blank">tf.keras.layers.AveragePooling2D()</a>
Average pooling operation for spatial data.
<a href="https://numpy.org/doc/stable/reference/generated/numpy.squeeze.html" target="_blank">np.squeeze()</a>
Remove single-dimensional entries from the shape of an array.
<a href="https://numpy.org/doc/stable/reference/generated/numpy.expand_dims.html" target="_blank">np.expand_dims()</a>
Add single-dimensional entries to the shape of an array.
Example: np.expand_dims(img, axis=(0,3))
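To build intuition for what these layers compute, here is a small plain-NumPy sketch of 2D pooling with `'valid'` padding (the `pool2d` function and the example values are ours, not part of the exercise or of Keras):

```python
import numpy as np

def pool2d(img, pool_size=2, strides=2, mode="max"):
    """Plain-NumPy 2D pooling with 'valid' padding (illustrative only)."""
    h, w = img.shape
    out_h = (h - pool_size) // strides + 1
    out_w = (w - pool_size) // strides + 1
    out = np.zeros((out_h, out_w))
    reduce_fn = np.max if mode == "max" else np.mean
    for i in range(out_h):
        for j in range(out_w):
            window = img[i*strides:i*strides+pool_size,
                         j*strides:j*strides+pool_size]
            out[i, j] = reduce_fn(window)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(x, 2, 2, "max"))   # max of each 2x2 block: 5, 7, 13, 15
print(pool2d(x, 2, 2, "mean"))  # mean of each 2x2 block: 2.5, 4.5, 10.5, 12.5
```

The Keras layers below perform the same windowed reduction, just on 4-d batch tensors instead of a bare 2-d array.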
```
# Import necessary libraries
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import MaxPool2D,AveragePooling2D,Input
from helper import plot_pool
# Load the 7x7 mnist image
img = np.load('3.npy')
plt.imshow(img,cmap = 'bone', alpha=0.5);
plt.axis('off');
plt.title('MNIST image of 3',fontsize=20);
```
### ⏸ Consider an input of size $(7,7)$ pixels. What will be the dimensions of the output if you use `pool_size=2`, `strides = 1` & `padding='valid'`?
#### A. $(5,5)$
#### B. $(6,6)$
#### C. $(4,4)$
#### D. $(7,7)$
```
### edTest(test_chow1) ###
# Submit an answer choice as a string below (eg. if you choose option C, put 'C')
answer1 = '___'
```
## Max Pooling
```
# Specify the variables for pooling
pool_size = ___
strides = ___
# Padding parameter can be 'valid', 'same', etc.
padding = '___'
# Build the model to perform maxpooling operation
model_1 = Sequential(name = 'MaxPool')
model_1.add(Input(shape = np.expand_dims(img,axis=2).shape))
model_1.add(MaxPool2D(pool_size = pool_size,strides=strides, padding=padding))
# Take a look at the summary to see the output shape
model_1.summary()
# Output the image using the model above
# Remember to use np.expand_dims to change input image dimensions
# to 4-d tensor because model_1.predict will not work on 2-d tensor
pooled_img = model_1.predict(___)
# Use the helper code to visualize the pooling operation
# np.squeeze() is used to bring the image to 2-dimension
# to use matplotlib to plot it
pooled_img = pooled_img.squeeze()
# plot_pool is a function that will return 3 plots to help visualize
# the pooling operation
plot_pool(img,pooled_img)
```
### ⏸ What if your stride is larger than your pool size?
#### A. Operation is invalid
#### B. Operation is valid but you will have an output larger than the input
#### C. Operation is valid but you will miss out on some pixels
#### D. Operation is valid but you will have an output as the same size as the input
```
### edTest(test_chow2) ###
# Submit an answer choice as a string below
# (eg. if you choose option C, put 'C')
answer2 = '___'
```
## Average Pooling
```
# Specify the variables for pooling
pool_size = ___
strides = ___
# Padding parameter can be 'valid', 'same', etc.
padding = '___'
# Build the model to perform average pooling operation
model_2 = Sequential(name = 'AveragePool')
model_2.add(Input(shape = np.expand_dims(img,axis=2).shape))
model_2.add(AveragePooling2D(pool_size = pool_size,strides=strides, padding=padding))
model_2.summary()
# Output the image using the model above
# Remember to use np.expand_dims to change input image dimensions
# to 4-d tensor because model_2.predict will not work on 2-d tensor
pooled_img = model_2.predict(___)
# Use the helper code to visualize the pooling operation
pooled_img = pooled_img.squeeze()
plot_pool(img,pooled_img)
```
### ⏸ Which of the following two pooling operations activates the input image more? Answer based on your results above.
#### A. Average pooling
#### B. Max pooling
```
### edTest(test_chow3) ###
# Submit an answer choice as a string below
# (eg. if you choose option A, put 'a')
answer3 = '___'
```
| github_jupyter |
# Analyzing a Database with SQL Queries
***Research goals:***
* Analyze the database in order to formulate a value proposition for a new product
* Study the information about books, publishers, authors, and user reviews of books.
***Data description***
**Table `books`**
Contains data about books:
- `book_id` — book identifier;
- `author_id` — author identifier;
- `title` — book title;
- `num_pages` — number of pages;
- `publication_date` — publication date of the book;
- `publisher_id` — publisher identifier.
**Table `authors`**
Contains data about authors:
- `author_id` — author identifier;
- `author` — author's name.
**Table `publishers`**
Contains data about publishers:
- `publisher_id` — publisher identifier;
- `publisher` — publisher name.
**Table `ratings`**
Contains data about user ratings of books:
- `rating_id` — rating identifier;
- `book_id` — book identifier;
- `username` — name of the user who left the rating;
- `rating` — book rating.
**Table `reviews`**
Contains data about user reviews of books:
- `review_id` — review identifier;
- `book_id` — book identifier;
- `username` — name of the user who wrote the review;
- `text` — review text.
<h1>Contents<span class="tocSkip"></span></h1>
1. Importing Libraries
2. Exploring General Information
3. Tasks
    1. Count how many books were released after January 1, 2000
    2. For each book, count the number of reviews and the average rating
    3. Find the publisher that released the most books longer than 50 pages (this excludes brochures from the analysis)
    4. Find the author with the highest average book rating (counting only books with 50 or more ratings)
    5. Compute the average number of reviews from users who gave more than 50 ratings
## Importing Libraries
```
import pandas as pd
from sqlalchemy import create_engine
# set the connection parameters
db_config = {'user': 'praktikum_student', # user name
             'pwd': 'Sdf4$2;d-d30pp', # password
             'host': 'rc1b-wcoijxj3yxfsf3fs.mdb.yandexcloud.net', 'port': 6432, # connection port
             'db': 'data-analyst-final-project-db'} # database name
connection_string = 'postgresql://{}:{}@{}:{}/{}'.format(db_config['user'], db_config['pwd'],
db_config['host'], db_config['port'],
db_config['db'])
# save the connector
engine = create_engine(connection_string, connect_args={'sslmode':'require'})
```
## Exploring General Information
```
books = '''SELECT * FROM books LIMIT 5'''
pd.io.sql.read_sql(books, con = engine)
authors = '''SELECT * FROM authors LIMIT 5'''
pd.io.sql.read_sql(authors, con = engine)
ratings = '''SELECT * FROM ratings LIMIT 5'''
pd.io.sql.read_sql(ratings, con = engine)
reviews = '''SELECT * FROM reviews LIMIT 5'''
pd.io.sql.read_sql(reviews, con = engine)
publishers = '''SELECT * FROM publishers LIMIT 5'''
pd.io.sql.read_sql(publishers, con = engine)
```
So we have 5 tables: `books`, `authors`, `ratings`, `reviews`, and `publishers`. Together they hold the information about books, publishers, authors, and user reviews of books. The data description is given in the project brief.
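The same `read_sql` round trip generalizes to quick sanity checks such as per-table row counts. A self-contained sketch against an in-memory SQLite stand-in (the toy `books` table below is ours, not the project database):

```python
import pandas as pd
from sqlalchemy import create_engine

# Toy stand-in for the project database: an in-memory SQLite engine
engine = create_engine("sqlite:///:memory:")
pd.DataFrame({"book_id": [1, 2, 3],
              "title": ["A", "B", "C"]}).to_sql("books", engine, index=False)

# Row count per table, via the same pandas/SQL round trip used above
for table in ["books"]:
    cnt = pd.read_sql(f"SELECT COUNT(*) AS cnt FROM {table}", con=engine).iloc[0, 0]
    print(table, cnt)  # books 3
```

Run against the real engine with the five table names listed above, this would show how much data each table holds before querying it.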
## Tasks
### Count how many books were released after January 1, 2000
```
task_1 = '''
SELECT COUNT(publication_date)
FROM books
WHERE publication_date >= '2000-01-01'
'''
books_released = pd.io.sql.read_sql(task_1, con = engine)
print('Number of books released after January 1, 2000: {:.0f}'.format(books_released.iloc[0][0]))
```
821 books were released after January 1, 2000.
### For each book, count the number of reviews and the average rating
```
task_2 = '''
SELECT books.title AS book_title,
AVG (ratings.rating) AS avg_rating,
COUNT (DISTINCT reviews.review_id) AS cnt_review
FROM books
INNER JOIN ratings ON ratings.book_id = books.book_id
INNER JOIN reviews ON reviews.book_id = ratings.book_id
GROUP BY books.book_id
ORDER BY cnt_review DESC, avg_rating DESC
'''
avg_rating = pd.io.sql.read_sql(task_2, con = engine)
avg_rating.head(10)
# task_2_1 = '''
# SELECT title, avg_rating, cnt_review
# FROM books
# LEFT JOIN (SELECT ratings.book_id,
# AVG (ratings.rating) AS avg_rating
# FROM ratings
# GROUP BY ratings.book_id) AS sub_1 ON books.book_id = sub_1.book_id
# LEFT JOIN (SELECT reviews.book_id,
# COUNT (reviews.text) AS cnt_review
# FROM reviews
# GROUP BY reviews.book_id) AS sub_2 ON books.book_id = sub_2.book_id
# ORDER BY cnt_review DESC LIMIT 10
# '''
# avg_rating_2 = pd.io.sql.read_sql(task_2_1, con = engine)
# avg_rating_2.head(10)
```
The most-reviewed book is Twilight, with 7 reviews. At the same time, its average rating of 3.66 is lower than that of the other heavily reviewed books.
### Find the publisher that released the most books longer than 50 pages (this excludes brochures from the analysis)
```
task_3 = '''
SELECT publisher AS publisher_name,
COUNT (books.book_id) AS cnt_books
FROM books
INNER JOIN publishers ON books.publisher_id = publishers.publisher_id
WHERE books.num_pages > 50
GROUP BY publisher
ORDER BY cnt_books DESC
'''
publisher_most_books = pd.io.sql.read_sql(task_3, con = engine)
publisher_most_books
print('The publisher', publisher_most_books.iloc[0, 0], 'released the most books longer than 50 pages:',
      publisher_most_books.iloc[0, 1], 'books.')
```
The leading publisher by number of books longer than 50 pages is Penguin Books, with 42 books released.
### Find the author with the highest average book rating (counting only books with 50 or more ratings)
```
task_4 = '''
SELECT authors.author AS author_name,
AVG(r.average_rating) as avg_rating
FROM books
INNER JOIN (SELECT book_id, COUNT(rating) AS cnt,
AVG(rating) AS average_rating
FROM ratings
GROUP BY book_id) AS r
ON r.book_id = books.book_id
INNER JOIN authors ON authors.author_id = books.author_id
WHERE r.cnt >= 50
GROUP BY author_name
ORDER BY avg_rating DESC
LIMIT 1
'''
auth_books_avg_rating = pd.io.sql.read_sql(task_4, con = engine)
print('The author with the highest average book rating is', auth_books_avg_rating.iloc[0,0],
      'with an average rating of', auth_books_avg_rating.iloc[0,1])
```
J.K. Rowling/Mary GrandPré is the author with the highest average book rating: 4.28.
### Compute the average number of reviews from users who gave more than 50 ratings
```
task_5 = '''
SELECT
AVG(sub.cnt_text)
FROM
(SELECT
reviews.username,
COUNT(DISTINCT reviews.text) AS cnt_text,
COUNT(DISTINCT ratings.book_id) AS cnt_book
FROM reviews
INNER JOIN ratings ON ratings.username = reviews.username
GROUP BY
reviews.username
HAVING
COUNT(DISTINCT ratings.book_id)>50) AS sub
'''
avg_cnt_reviews = pd.io.sql.read_sql(task_5, con = engine)
avg_cnt_reviews.head(10)
print('The average number of reviews from users who gave more than 50 ratings is:',
      avg_cnt_reviews.iloc[0][0])
```
On average, users who gave more than 50 ratings wrote 24 reviews.
| github_jupyter |
<div align=right><i>Team name: Abnormal distribution<br>
Team members: Rohit Khokle, Madhava Peddisetti, Pranatha Rao</i></div>
# Analyzing sectors that affect the GDP and predicting the GDP for various countries
<img src="images/gdp.jpg" width=600 />
Forecasting macroeconomic variables such as economic growth measured in terms of GDP (Gross Domestic Product) is key to developing a view on a country’s economic outlook. Understanding the current and future state of the economy enables timely responses and policy measures to maintain economic and financial stability and boost resilience to episodes of crises and recessions.
## Importance of Predicting the GDP
1. Government officials and business managers use economic forecasts to determine fiscal and monetary policies and plan future operating activities, respectively.
2. Economic forecasting plays a key role in helping policymakers set spending and tax parameters.
## Objective
### 1. Analyze the impact of various sectors on GDP per capita over the years for various countries
### 2. Forecasting the GDP of the country using Linear Regression, Random Forest and Artificial Neural Network
## Datasets
Our choice of countries covers advanced economies (United States, United Kingdom, Germany, France and Italy), together with emerging economies (China and India).
We used ***wbdata*** API to access the world bank data for our project.
```
import datetime
import wbdata
import pandas
import matplotlib.pyplot as plt
#Selecting country from world bank data
countries = ["DEU","USA","CHN","IND","ITA","GBR","FRA"]
#Gathering data of various sector for the country
indicators = {'NY.GDP.PCAP.CD':'GDP per capita (current US$)'}
#Indicating the time frame(years)
data_date = (datetime.datetime(2018, 1, 1), datetime.datetime(2018, 1, 1))
#Getting a dataframe with all the data
df = wbdata.get_dataframe(indicators, country=countries,data_date=data_date, convert_date=False)
#df is "pivoted"; pandas' unstack function helps reshape it into something plottable
dfu = df.unstack(level=0)
# a simple matplotlib plot with legend, labels and a title
dfu.plot(kind='bar', title="GDP per capita (current US$)", figsize=(10,8), legend=False, fontsize=12)
#plt.legend(loc='best');
plt.title("GDP per capita (current US$)");
plt.xlabel('Date'); plt.ylabel('GDP per capita (current US$)');
def load_from_wbdata(countries, indicators, year_from, year_to):
"""Create data frame for given list of countries, indicators and dates using World Bank API
:param countries: list of codes
:param indicators: dict {ind_code : ind_name}
:param year_from: starting year
:param year_to: ending year
:returns df_data: multi index data frame
"""
data_date = (datetime.datetime(year_from, 1, 1), datetime.datetime(year_to, 1, 1))
df_data = wbdata.get_dataframe(indicators, country=countries, data_date=data_date, convert_date=False)
return df_data
#set up the countries I want
countries = ["DEU","USA","CHN","IND","ITA","GBR","FRA"]
#Gathering data of various sector for the country
indicators1 = {
'BX.KLT.DINV.WD.GD.ZS': 'FDI net inflows',
'BM.KLT.DINV.WD.GD.ZS':'FDI net outflows',
'SH.XPD.GHED.GD.ZS': 'Domestic health expenditure',
'GC.XPN.TOTL.GD.ZS': 'Expense on Goods & Services',
'NE.CON.TOTL.ZS': 'Final consumption expenditure',
'NV.AGR.TOTL.ZS' : 'Agriculture,forestry,fishing',
'GB.XPD.RSDV.GD.ZS' : 'Research and development expenditure'
}
# Get the sector indicators (% of GDP) defined above, for 2017
gdp_ppp = load_from_wbdata(countries, indicators1, 2017, 2017)
import matplotlib.pyplot as plt
gdp_ppp.plot(kind='bar', title ="% of GDP by components by sectors",figsize=(15,10),legend=True, fontsize=12)
```
### Analyzing the impact of various sectors on GDP per capita of USA
<img src="images/analysis.jpg" width=600 />
#### Factors that affect the GDP
***Foreign direct investment inflow*** are the net inflows of investment to acquire a lasting management interest (10 percent or more of voting stock) in an enterprise operating in an economy other than that of the investor.
***Foreign direct investment outflow*** refers to direct investment equity flows in an economy. It is the sum of equity capital, reinvestment of earnings, and other capital.
***Domestic general government health expenditure (% of GDP)*** refers to public expenditure on health from domestic sources as a share of the economy as measured by GDP,World Health Organization Global Health Expenditure database (http://apps.who.int/nha/database).
***Expense on goods and services*** is cash payments for operating activities of the government in providing goods and services. It includes compensation of employees (such as wages and salaries), interest and subsidies, grants, social benefits, and other expenses such as rent and dividends.
***Final consumption expenditure*** (formerly total consumption) is the sum of household final consumption expenditure (private consumption) and general government final consumption expenditure (general government consumption).
***Agriculture*** corresponds to ISIC divisions 1-5 and includes forestry, hunting, and fishing, as well as cultivation of crops and livestock production. Value added is the net output of a sector after adding up all outputs and subtracting intermediate inputs
***Gross domestic expenditures on research and development (R&D),*** expressed as a percent of GDP. They include both capital and current expenditures in the four main sectors: Business enterprise, Government, Higher education and Private non-profit. R&D covers basic research, applied research, and experimental development.
```
import wbdata
import pandas as pd
import pycountry
import datetime
import matplotlib.pyplot as plt
#Selecting country from world bank data
countries = ["USA"]
#Gathering data of various sector for the country
indicators1 = {
'BX.KLT.DINV.WD.GD.ZS': 'FDI net inflows',
'BM.KLT.DINV.WD.GD.ZS':'FDI net outflows',
'SH.XPD.GHED.GD.ZS': 'Domestic health expenditure',
'GC.XPN.TOTL.GD.ZS': 'Expense on Goods & Services',
'NE.CON.TOTL.ZS': 'Final consumption expenditure',
'NV.AGR.TOTL.ZS' : 'Agriculture,forestry,fishing',
'GB.XPD.RSDV.GD.ZS' : 'Research and development expenditure'
}
#Indicating the time frame(years)
data_date = (datetime.datetime(1970, 1, 1), datetime.datetime(2015, 1, 1))
#Getting a dataframe with all the data
df1 = wbdata.get_dataframe(indicators1, country=countries, data_date=data_date ,convert_date=False)
df1
df1.reset_index(inplace = True)
```
#### Performing Exploratory Data Analysis on the data to eliminate the NaN values
```
df1['Research and development expenditure']= df1['Research and development expenditure'].interpolate(method = 'polynomial', order = 2)
df1['Research and development expenditure'] = df1['Research and development expenditure'].interpolate(method = 'pad')
df1['Domestic health expenditure'] = df1['Domestic health expenditure'].interpolate(method = 'pad')
df1['Agriculture,forestry,fishing'] = df1['Agriculture,forestry,fishing'].interpolate(method = 'pad')
df1['Expense on Goods & Services'] = df1['Expense on Goods & Services'].interpolate(method = 'pad')
df1['Final consumption expenditure'] = df1['Final consumption expenditure'].interpolate(method = 'pad')
df1['FDI net inflows'] = df1['FDI net inflows'].interpolate(method = 'pad')
df1
```
#### Collect GDP after two years data from world bank and merge it with the above dataframe
```
#Grab the GDP-per-capita indicator two years ahead for the countries above and load into a data frame
data_date = (datetime.datetime(1972, 1, 1), datetime.datetime(2017, 1, 1))
indicators2 = {'NY.GDP.PCAP.CD': 'GDP per Capita after 2 years'}
df2 = wbdata.get_dataframe(indicators2, country=countries, data_date=data_date, convert_date=False)
df2
df2.reset_index(inplace = True)
df2 = df2.drop(['date'],axis = 1)
df2
df_joined = pd.merge(df1,df2,left_index=True, right_index=True)
df_joined
```
### Normalizing the columns to bring them all to a common scale
We use `MinMaxScaler` from scikit-learn for normalization
```
from sklearn import preprocessing
# Extract the feature and target columns as a float array
x = df_joined[['FDI net inflows','FDI net outflows','Domestic health expenditure','Expense on Goods & Services','Final consumption expenditure','Agriculture,forestry,fishing','Research and development expenditure','GDP per Capita after 2 years']].values.astype(float)
# Create a minimum and maximum processor object
min_max_scaler = preprocessing.MinMaxScaler()
# Create an object to transform the data to fit minmax processor
x_scaled = min_max_scaler.fit_transform(x)
# Run the normalizer on the dataframe
df_joined[['FDI net inflows','FDI net outflows','Domestic health expenditure','Expense on Goods & Services','Final consumption expenditure','Agriculture,forestry,fishing','Research and development expenditure','GDP per Capita after 2 years']] = pd.DataFrame(x_scaled)
```
### Finding the correlation between the data
```
df_joined.corr()
```
### Heat map of the correlation
```
import seaborn as sns
#plotting the heat map of the correlation
plt.figure(figsize=(20,7))
sns.heatmap(df_joined.corr(), annot=True)
df_joined
```
### Using OLS for finding the p value and t statistics
```
import statsmodels.api as sm
import statsmodels.formula.api as smf
model = sm.OLS(df_joined['GDP per Capita after 2 years'], df_joined[['FDI net inflows','FDI net outflows','Domestic health expenditure','Expense on Goods & Services', 'Final consumption expenditure','Agriculture,forestry,fishing','Research and development expenditure']]).fit()
# Print out the statistic
model.summary()
```
#### For any modelling task, the hypothesis is that there is some correlation between the features and the target.
#### The null hypothesis: there is no correlation between the features and the target.
Considering a significance level of 0.05:
1. **FDI net inflows** has a p-value of 0.000, which is below the threshold; this provides strong evidence against the null hypothesis, so it is a significant feature.
2. **FDI net outflows** has a p-value of 0.963, which is above the threshold; this provides little evidence against the null hypothesis, so it is not a significant feature.
3. **Domestic health expenditure** has a p-value of 0.000, which is below the threshold; this provides strong evidence against the null hypothesis, so it is a significant feature.
4. **Expense on Goods & Services** has a p-value of 0.807, which is above the threshold; this provides little evidence against the null hypothesis, so it is not a significant feature.
5. **Final consumption expenditure** has a p-value of 0.000, which is below the threshold; this provides strong evidence against the null hypothesis, so it is a significant feature.
6. **Agriculture,forestry,fishing** has a p-value of 0.877, which is above the threshold; this provides little evidence against the null hypothesis, so it is not a significant feature.
7. **Research and development expenditure** has a p-value of 0.020, which is below the threshold; this provides strong evidence against the null hypothesis, so it is a significant feature.
***We started with 7 variables***
1. FDI net inflows
2. FDI net outflows
3. Domestic health expenditure
4. Expense on Goods and Services
5. Final consumption expenditure
6. Agriculture, forestry, fishing
7. Research and development expenditure
***Out of these, we developed the model on 4 variables:***
***FDI net inflows***
***Domestic health expenditure***
***Final consumption expenditure***
***Research and development expenditure***
From the model's t-statistics we can see that ***FDI net inflows*** is the most important parameter determining GDP per capita: as net FDI inflow increases, the GDP per capita of the country increases.
The second most important variable determining GDP per capita is ***Domestic health expenditure***: if domestic health expenditure increases, the GDP per capita of the country increases.
The third most important factor contributing to GDP per capita is ***Final consumption expenditure***. Consumer spending is the biggest component of the economy, accounting for more than two-thirds of the U.S. economy.
Lastly, ***Research and development expenditure*** plays a major role in increasing the GDP of the country.
## Predicting the GDP per capita based on the above sectors
<img src= "images/predict.jpg" width = 600 /> </center>
### 1. Modelling the data using Linear regression
### Train and test split
Data is split into 2 parts:
- Training data set = 80%
- Test data set = 20%
```
from sklearn.model_selection import train_test_split
X = df_joined[['FDI net inflows','Domestic health expenditure','Final consumption expenditure','Research and development expenditure']]
y = df_joined['GDP per Capita after 2 years']
X_t, X_test, y_t, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
#X_train, X_val, y_train, y_val = train_test_split(X_t, y_t, test_size=0.15, random_state=1)
```
### Linear Regression
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_t,y_t)
# Make predictions using the testing set
y_pred = regr.predict(X_t)
#training Data
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print('Mean squared error: %.2f'% mean_squared_error(y_t, y_pred))
# The coefficient of determination: 1 is perfect prediction
print('Coefficient of determination: %.2f'% r2_score(y_t, y_pred))
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print('Mean squared error: %.2f'% mean_squared_error(y_test, y_pred))
# The coefficient of determination: 1 is perfect prediction
print('Coefficient of determination: %.2f'% r2_score(y_test, y_pred))
y_pred
# Get R2 measure (indicator of accuracy 1 is perfect 0 is horrible)
#regr.score(X_test, y_test)
train_score=regr.score(X_t, y_t)
test_score=regr.score(X_test, y_test)
print("linear regression train score:", train_score)
print("linear regression test score:", test_score)
## The line / model
import matplotlib.pyplot as plt
import seaborn as sns
sns.regplot(x=y_test, y=y_pred)
plt.xlabel('Test data')
plt.ylabel('Predictions')
```
### 2. Modelling the data using Random forest
<img src= "images/random forest.jpg" width = 600 /> </center>
#### Train, test and validation split
Data is split into 3 parts:
- Training data set = 80.75%
- Validation data set = 14.25%
- Test data set = 5%
```
from sklearn.model_selection import train_test_split
X = df_joined[['FDI net inflows','FDI net outflows','Domestic health expenditure','Final consumption expenditure','Agriculture,forestry,fishing','Research and development expenditure']]
y = df_joined['GDP per Capita after 2 years']
X_t, X_test, y_t, y_test = train_test_split(X, y, test_size=0.05, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_t, y_t, test_size=0.15, random_state=1)
from sklearn.ensemble import RandomForestRegressor
import numpy as np
random_model = RandomForestRegressor(n_estimators =100,
min_samples_split = 10,
min_samples_leaf = 15,
max_features= 'auto',
max_depth = 20,
bootstrap = True)
random_model.fit(X_train, y_train)
print('R2 score for training data', r2_score(y_train, random_model.predict(X_train)))
print('R2 score for test data',r2_score(y_test, random_model.predict(X_test)))
print('Root mean square error score on training set',np.sqrt(mean_squared_error(y_train,random_model.predict(X_train))))
print('Root mean square error score on test set',np.sqrt(mean_squared_error(y_test,random_model.predict(X_test))))
```
### The training and test scores are quite bad without tuning the hyperparameters
### So we use randomized search for tuning the hyperparameters
Which hyperparameters are important?
1. n_estimators = number of trees in the forest
2. max_features = max number of features considered for splitting a node
3. max_depth = max number of levels in each decision tree
4. min_samples_split = min number of data points placed in a node before the node is split
5. min_samples_leaf = min number of data points allowed in a leaf node
6. bootstrap = method for sampling data points (with or without replacement)
We used randomized search over the hyperparameter grid below to find the best possible combination of hyperparameters.
After hyperparameter tuning, the model performed better.
```
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(random_grid)
# Output:
# {'bootstrap': [True, False],
#  'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None],
#  'max_features': ['auto', 'sqrt'],
#  'min_samples_leaf': [1, 2, 4],
#  'min_samples_split': [2, 5, 10],
#  'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]}
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestRegressor()
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 100, cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Fit the random search model
rf_random.fit(X_train, y_train)
rf_random.best_params_
from sklearn.ensemble import RandomForestRegressor
random_model = RandomForestRegressor(n_estimators =1000,
min_samples_split = 2,
min_samples_leaf = 1,
max_features= 'sqrt',
max_depth = 110,
bootstrap = True)
random_model.fit(X_train, y_train)
print('Training score is',r2_score(y_train, random_model.predict(X_train)))
print('Testing score is ',r2_score(y_test, random_model.predict(X_test)))
rmse = np.sqrt(mean_squared_error(y_test,random_model.predict(X_test)))
print('Root mean square error is',rmse)
y_pred = random_model.predict(X_test)
y_pred
```
### 3. Artificial Neural Network
<img src= "images/ann.png" width = 600 /> </center>
#### Train test split
```
#Train test split
train_dataset = df_joined.sample(frac=0.8,random_state=0)
test_dataset = df_joined.drop(train_dataset.index)
```
#### Performing Exploratory Data Analysis
```
train_dataset = train_dataset.drop('Country code',axis =1)
train_dataset = train_dataset.drop('date',axis =1)
test_dataset = test_dataset.drop('Country code',axis =1)
test_dataset = test_dataset.drop('date',axis =1)
#Plotting pairwise relationships in the dataset
sns.pairplot(train_dataset, diag_kind="kde")
#computing a summary of statistics
train_stats = train_dataset.describe()
#Dropping the target column
train_stats.pop("GDP per Capita after 2 years")
#Transpose of the dataset
train_stats = train_stats.transpose()
train_stats
#Dropping the target label in the train and test dataset
train_labels = train_dataset.pop('GDP per Capita after 2 years')
test_labels = test_dataset.pop('GDP per Capita after 2 years')
#Function for normalising the dataset
def norm(a):
return (a - train_stats['mean']) / train_stats['std']
#Normalization of train and test dataset
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
#Building the model with one input, hidden and output layer.
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
#use the RMSprop optimizer for backpropagation, with a learning rate of 0.001
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse']) #update weights
return model
model = build_model()
model.summary()
example_batch = normed_train_data[:9]
example_result = model.predict(example_batch)
example_result
from keras.callbacks import EarlyStopping, ModelCheckpoint
#Earlystopping to avoid overfitting
earlystopper = EarlyStopping(patience=3, verbose=1)
filepath = "model.h5"
#save the model in model.h5
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
save_best_only=True, mode='min')
callbacks_list = [earlystopper, checkpoint]
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=1,
callbacks=callbacks_list)
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
model = build_model()
# The patience parameter is the amount of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
early_history = model.fit(normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[early_stop])
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f}".format(mae))
print("Accuracy is", (1 - loss))
test_predictions = model.predict(normed_test_data).flatten()
a = plt.axes(aspect='equal')
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values')
plt.ylabel('Predictions')
lims = [0, 3]
plt.xlim(lims)
plt.ylim(lims)
#_ = plt.plot(lims, lims)
_ = plt.plot([-100, 100], [-100, 100])
test_predictions
```
### ***Test scores of all three models for the US:***
```
print("linear regression test score:", test_score)
print('Random forest testing score is ',r2_score(y_test, random_model.predict(X_test)))
print("ANN testing score is", (1 - loss))
```
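The three scores printed above can also be collected into a single table for side-by-side comparison. A sketch with hypothetical placeholder numbers (substitute the linear-regression `test_score`, the random-forest R², and the ANN `1 - loss` computed above):

```python
import pandas as pd

# Hypothetical placeholder scores -- replace with the values printed above
scores = pd.DataFrame(
    {'model': ['Linear regression', 'Random forest', 'ANN'],
     'test_score': [0.85, 0.90, 0.88]}
).set_index('model')
print(scores.sort_values('test_score', ascending=False))
```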
## Summary of accuracy for all the countries
<img src= "images/test scores.png" width = 600 /> </center>
# p-value table for all the countries
<br> <center> <b>Canada</b> <br> <img src= "pictures/canada.png" width = 600 /> </center> <br>
<center> <b>China</b> <br> <img src= "pictures/china.png" width = 600 /> </center>
<center> <b>France</b> <br> <img src= "pictures/france.png" width = 600 /> </center> <br>
<center> <b>Germany</b> <br><img src= "pictures/germany.png" width = 600 /> </center> <br>
<center> <b>India</b> <br><img src= "pictures/india.png" width = 600 /> </center> <br>
<center> <b>Italy</b> <br><img src= "pictures/italy.png" width = 600 /> </center> <br>
<center> <b>Japan</b> <br><img src= "pictures/japan.png" width = 600 /> </center> <br>
<center> <b>Russia</b> <br><img src= "pictures/russia.png" width = 600 /> </center> <br>
<center> <b>United Kingdom</b> <br><img src= "pictures/uk.png" width = 600 /> </center> <br>
### <font size = 6><b>Conclusion</b></font>
<font size = 4> Out of the factors that we considered in our analysis, we found "Gross domestic expenditure on research and development (R&D)", "Domestic general government health expenditure", "Final consumption expenditure", and "FDI net inflows" to have a major impact on the GDP outcome of the respective countries.
Using prediction models such as Random Forest and ANN on these factors helps us determine GDP two years ahead. This could help policy makers to make timely economic decisions.</font>
```
from __future__ import print_function, division, absolute_import
```
# Challenges of Streaming Data:
Building an ANTARES-like Pipeline for Data Management and Discovery
========
#### Version 0.1
***
By AA Miller 2017 Apr 10
Edited by Gautham Narayan, 2017 Apr 26
As we just saw in Gautham's lecture - LSST will produce an unprecedented volume of time-domain information for the astronomical sky. $>37$ trillion individual photometric measurements will be recorded. While the vast, vast majority of these measurements will simply confirm the status quo, some will represent rarities that have never been seen before (e.g., LSST may be the first telescope to discover the electromagnetic counterpart to a LIGO gravitational wave event), which the community will need to know about in ~real time.
Storing, filtering, and serving this data is going to be a huge <del>nightmare</del> challenge. ANTARES, as detailed by Gautham, is one proposed solution to this challenge. In this exercise you will build a miniature version of ANTARES, which will require the application of several of the lessons from earlier this week. Many of the difficult, and essential, steps necessary for ANTARES will be skipped here as they are too time consuming or beyond the scope of what we have previously covered. We will point out these challenges as we come across them.
```
import numpy as np
import scipy.stats as spstat
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook
```
## Problem 1) Light Curve Data
We begin by ignoring the streaming aspect of the problem (we will come back to that later) and instead we will work with full light curves. The collection of light curves has been curated by Gautham and like LSST it features objects of different types covering a large range in brightness and observations in multiple filters taken at different cadences.
As the focus of this exercise is the construction of a data management pipeline, we have already created a Python `class` to read in the data and store light curves as objects. The data are stored in flat text files with the following format:
|t |pb |flux |dflux |
|:--------------:|:---:|:----------:|-----------:|
| 56254.160000 | i | 6.530000 | 4.920000 |
| 56254.172000 | z | 4.113000 | 4.018000 |
| 56258.125000 | g | 5.077000 | 10.620000 |
| 56258.141000 | r | 6.963000 | 5.060000 |
| . | . | . | . |
| . | . | . | . |
| . | . | . | . |
and the files are named `FAKE0XX.dat`, where `XX` is a running index from `01` to `99`.
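Since the files follow a predictable naming scheme, they can be collected with a glob pattern. A minimal sketch, assuming the data directory used in the cells below:

```python
import glob
import os

def find_light_curves(data_dir):
    """Return sorted paths matching FAKE001.dat ... FAKE099.dat in data_dir."""
    return sorted(glob.glob(os.path.join(data_dir, 'FAKE0[0-9][0-9].dat')))

# Assumed directory name, as used in the cells below
lc_files = find_light_curves('training_set_for_LSST_DSFP')
print(len(lc_files), 'light curve files found')
```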
**Problem 1a**
Read in the data for the first light curve file and plot the $g'$ light curve for that source.
```
# execute this cell
# XXX note - figure out how data handling will work for this file
lc = pd.read_csv('training_set_for_LSST_DSFP/FAKE001.dat', delim_whitespace=True, comment = '#')
plt.errorbar(np.array(lc['t'].loc[lc['pb'] == 'g']),
             np.array(lc['flux'].loc[lc['pb'] == 'g']),
             np.array(lc['dflux'].loc[lc['pb'] == 'g']), fmt = 'o', color = 'green')
plt.xlabel('MJD')
plt.ylabel('flux')
```
As we have many light curve files (in principle as many as 37 billion...), we will define a light curve class to ease our handling of the data.
**Problem 1b**
Fix the `lc` class definition below.
*Hint* - the only purpose of this problem is to make sure you actually read each line of code below, it is not intended to be difficult.
```
class ANTARESlc():
'''Light curve object for NOAO formatted data'''
def __init__(self, filename):
'''Read in light curve data'''
DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
self.DFlc = DFlc
self.filename = filename
def plot_multicolor_lc(self):
'''Plot the 4 band light curve'''
fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g'],
                        fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r'],
                        fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i'],
                        fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z'],
                        fmt = 'o', color = '#444C5C', label = r"$z'$")
ax.legend(fancybox = True)
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
```
**Problem 1c**
Confirm the corrections made in **1b** by plotting the multiband light curve for the source `FAKE010`.
```
lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')
lc.plot_multicolor_lc()
```
One thing that we brushed over previously is that the brightness measurements have units of flux, rather than the traditional use of magnitudes. The reason for this is that LSST will measure flux variations via image differencing, which will for some sources in some filters result in a measurement of *negative flux*. (You may have already noticed this in **1a**.) Statistically there is nothing wrong with such a measurement, but it is impossible to convert a negative flux into a magnitude. Thus we will use flux measurements throughout this exercise. [Aside - if you are bored during the next break, I'd be happy to rant about why we should have ditched the magnitude system years ago.]
Using flux measurements will allow us to make unbiased measurements of the statistical distributions of the variations of the sources we care about.
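The point about negative fluxes can be made concrete: the magnitude transformation $m = -2.5\log_{10}(f)$ (plus a zero point) is undefined for $f \le 0$, while the flux itself remains a perfectly valid measurement. A minimal numerical sketch:

```python
import numpy as np

# Image-difference fluxes can be negative; magnitudes cannot represent them
fluxes = np.array([6.53, 4.113, -2.3])
with np.errstate(invalid='ignore'):
    mags = -2.5 * np.log10(fluxes)  # zero point omitted for illustration
print(mags)  # the negative flux maps to NaN -- no valid magnitude exists
```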
**Problem 1d**
What is `FAKE010`, the source that is plotted above?
*Hint 1* - if you have no idea that's fine, move on.
*Hint 2* - ask Szymon or Tomas...
**Solution 1d**
`FAKE010` is a transient, as can be seen by the rapid rise followed by a gradual decline in the light curve. In this particular case, we can further guess that `FAKE010` is a Type Ia supernova due to the secondary maxima in the $i'$ and $z'$ light curves. These secondary peaks are not present in any other known type of transient.
**Problem 1e**
To get a better sense of the data, plot the multiband light curves for sources `FAKE060` and `FAKE073`.
```
lc60 = ANTARESlc("training_set_for_LSST_DSFP/FAKE060.dat")
lc60.plot_multicolor_lc()
lc73 = ANTARESlc("training_set_for_LSST_DSFP/FAKE073.dat")
lc73.plot_multicolor_lc()
```
## Problem 2) Data Preparation
While we could create a database table that includes every single photometric measurement made by LSST, this ~37 trillion row db would be enormous without providing a lot of added value beyond the raw flux measurements [while this table is necessary, alternative tables may prove more useful]. Furthermore, extracting individual light curves from such a database will be slow. Instead, we are going to develop summary statistics for every source which will make it easier to select individual sources and develop classifiers to identify objects of interest.
Below we will redefine the `ANTARESlc` class to include additional methods so we can (eventually) store summary statistics in a database table. In the interest of time, we limit the summary statistics to a relatively small list all of which have been shown to be useful for classification (see [Richards et al. 2011](http://iopscience.iop.org/article/10.1088/0004-637X/733/1/10/meta) for further details). The statistics that we include (for now) are:
1. `Std` -- the standard deviation of the flux measurements
2. `Amp` -- the amplitude of flux deviations
3. `MAD` -- the median absolute deviation of the flux measurements
4. `beyond1std` -- the fraction of flux measurements beyond 1 standard deviation
5. the mean $g' - r'$, $r' - i'$, and $i' - z'$ color
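Before wiring these statistics into the class, it helps to see simplified, unnormalized versions on a toy flux array (the class below additionally normalizes by the weighted mean flux, so the values here are purely illustrative):

```python
import numpy as np

# Toy flux array to illustrate the summary statistics listed above
flux = np.array([1.0, 1.2, 0.8, 1.5, 0.9, 1.1])

std = np.std(flux, ddof=1)                            # Std: sample standard deviation
amp = np.max(flux) - np.min(flux)                     # Amp: amplitude of flux deviations
mad = np.median(np.abs(flux - np.median(flux)))       # MAD: median absolute deviation
beyond = np.mean(np.abs(flux - np.mean(flux)) > std)  # beyond1std: fraction beyond 1 std
print(std, amp, mad, beyond)
```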
**Problem 2a**
Complete the mean color method in the `ANTARESlc` class. Feel free to use the other methods as a template for your work.
*Hint*/*food for thought* - if a source is observed in different filters but the observations are not simultaneous (or quasi-simultaneous), what is the meaning of a "mean color"?
*Solution to food for thought* - in this case we simply want you to take the mean flux in each filter and create a statistic that is $-2.5 \log \frac{\langle f_X \rangle}{\langle f_{Y} \rangle}$, where ${\langle f_{Y} \rangle}$ is the mean flux in band $Y$, while $\langle f_X \rangle$ is the mean flux in band $X$, which can be $g', r', i', z'$. Note that our use of image-difference flux measurements, which can be negative, means you'll need to add some form of case exception if $\langle f_X \rangle$ or $\langle f_Y \rangle$ is negative. In these cases set the color to -999.
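A standalone sketch of this statistic, with the negative-flux guard described above (function name and inputs are illustrative):

```python
import numpy as np

def mean_color(mean_flux_x, mean_flux_y):
    """Mean X-Y color, -2.5*log10(<f_X>/<f_Y>); -999 sentinel if either mean flux is non-positive."""
    if mean_flux_x > 0 and mean_flux_y > 0:
        return -2.5 * np.log10(mean_flux_x / mean_flux_y)
    return -999

print(mean_color(10.0, 10.0))  # equal mean fluxes give a color of zero
print(mean_color(-1.2, 5.0))   # a negative mean flux triggers the -999 sentinel
```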
```
class ANTARESlc():
'''Light curve object for NOAO formatted data'''
def __init__(self, filename):
'''Read in light curve data'''
DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
self.DFlc = DFlc
self.filename = filename
def plot_multicolor_lc(self):
'''Plot the 4 band light curve'''
fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g'],
                        fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r'],
                        fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i'],
                        fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z'],
                        fmt = 'o', color = '#444C5C', label = r"$z'$")
ax.legend(fancybox = True)
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
def filter_flux(self):
'''Store individual passband fluxes as object attributes'''
        self.gFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'g']
        self.gFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g']
        self.rFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'r']
        self.rFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r']
        self.iFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'i']
        self.iFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i']
        self.zFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'z']
        self.zFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z']
def weighted_mean_flux(self):
'''Measure (SNR weighted) mean flux in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
weighted_mean = lambda flux, dflux: np.sum(flux*(flux/dflux)**2)/np.sum((flux/dflux)**2)
self.gMean = weighted_mean(self.gFlux, self.gFluxUnc)
self.rMean = weighted_mean(self.rFlux, self.rFluxUnc)
self.iMean = weighted_mean(self.iFlux, self.iFluxUnc)
self.zMean = weighted_mean(self.zFlux, self.zFluxUnc)
def normalized_flux_std(self):
'''Measure standard deviation of flux in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_flux_std = lambda flux, wMeanFlux: np.std(flux/wMeanFlux, ddof = 1)
self.gStd = normalized_flux_std(self.gFlux, self.gMean)
self.rStd = normalized_flux_std(self.rFlux, self.rMean)
self.iStd = normalized_flux_std(self.iFlux, self.iMean)
self.zStd = normalized_flux_std(self.zFlux, self.zMean)
def normalized_amplitude(self):
'''Measure the normalized amplitude of variations in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_amplitude = lambda flux, wMeanFlux: (np.max(flux) - np.min(flux))/wMeanFlux
self.gAmp = normalized_amplitude(self.gFlux, self.gMean)
self.rAmp = normalized_amplitude(self.rFlux, self.rMean)
self.iAmp = normalized_amplitude(self.iFlux, self.iMean)
self.zAmp = normalized_amplitude(self.zFlux, self.zMean)
def normalized_MAD(self):
'''Measure normalized Median Absolute Deviation (MAD) in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_MAD = lambda flux, wMeanFlux: np.median(np.abs((flux - np.median(flux))/wMeanFlux))
self.gMAD = normalized_MAD(self.gFlux, self.gMean)
self.rMAD = normalized_MAD(self.rFlux, self.rMean)
self.iMAD = normalized_MAD(self.iFlux, self.iMean)
self.zMAD = normalized_MAD(self.zFlux, self.zMean)
def normalized_beyond_1std(self):
'''Measure fraction of flux measurements beyond 1 std'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
beyond_1std = lambda flux, wMeanFlux: sum(np.abs(flux - wMeanFlux) > np.std(flux, ddof = 1))/len(flux)
self.gBeyond = beyond_1std(self.gFlux, self.gMean)
self.rBeyond = beyond_1std(self.rFlux, self.rMean)
self.iBeyond = beyond_1std(self.iFlux, self.iMean)
self.zBeyond = beyond_1std(self.zFlux, self.zMean)
def skew(self):
'''Measure the skew of the flux measurements'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
skew = lambda flux: spstat.skew(flux)
self.gSkew = skew(self.gFlux)
self.rSkew = skew(self.rFlux)
self.iSkew = skew(self.iFlux)
self.zSkew = skew(self.zFlux)
def mean_colors(self):
        '''Measure the mean g-r, r-i, and i-z colors'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
self.gMinusR = -2.5*np.log10(self.gMean/self.rMean) if self.gMean> 0 and self.rMean > 0 else -999
self.rMinusI = -2.5*np.log10(self.rMean/self.iMean) if self.rMean> 0 and self.iMean > 0 else -999
self.iMinusZ = -2.5*np.log10(self.iMean/self.zMean) if self.iMean> 0 and self.zMean > 0 else -999
```
**Problem 2b**
Confirm your solution to **2a** by measuring the mean colors of source `FAKE010`. Does your measurement make sense given the plot you made in **1c**?
```
lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')
lc.filter_flux()
lc.weighted_mean_flux()
lc.mean_colors()
print("The g'-r', r'-i', and i'-z' colors are: {:.3f}, {:.3f}, and {:.3f}, respectively.".format(lc.gMinusR, lc.rMinusI, lc.iMinusZ))
```
## Problem 3) Store the sources in a database
Building (and managing) a database from scratch is a challenging task. For (very) small projects one solution to this problem is to use [`SQLite`](http://sqlite.org/), which is a self-contained, publicly available SQL engine. One of the primary advantages of `SQLite` is that no server setup is required, unlike other popular tools such as postgres and MySQL. In fact, `SQLite` is already integrated with python so everything we want to do (create database, add tables, load data, write queries, etc.) can be done within Python.
Without diving too deep into the details, here are situations where `SQLite` has advantages and disadvantages [according to their own documentation](http://sqlite.org/whentouse.html):
*Advantages*
1. Situations where expert human support is not needed
2. For basic data analysis (`SQLite` is easy to install and manage for new projects)
3. Education and training
*Disadvantages*
1. Client/Server applications (`SQLite` does not behave well if multiple systems need to access db at the same time)
2. Very large data sets (`SQLite` stores entire db in a single disk file, other solutions can store data across multiple files/volumes)
3. High concurrency (Only 1 writer allowed at a time for `SQLite`)
From the (limited) lists above, you can see that while `SQLite` is perfect for our application right now, if you were building an actual ANTARES-like system a more sophisticated database solution would be required.
**Problem 3a**
Import sqlite3 into the notebook.
*Hint* - `sqlite3` is included in the Python standard library, so no separate installation should be necessary.
```
import sqlite3
```
Following the `sqlite3` import, we must first connect to the database. If we attempt a connection to a database that does not exist, then a new database is created. Here we will create a new database file, called `miniANTARES.db`.
```
conn = sqlite3.connect("miniANTARES.db")
```
We now have a database connection object, `conn`. To interact with the database (create tables, load data, write queries) we need a cursor object.
```
cur = conn.cursor()
```
Now that we have a cursor object, we can populate the database. As an example we will start by creating a table to hold all the raw photometry (though ultimately we will not use this table for analysis).
*Note* - there are many cursor methods capable of interacting with the database. The most common, [`execute`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute), takes a single `SQL` command as its argument and executes that command. Other useful methods include [`executemany`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany), which is useful for inserting data into the database, and [`executescript`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript), which takes an `SQL` script as its argument and executes the script.
In many cases, as below, it will be useful to use triple quotes in order to improve the legibility of your code.
```
cur.execute("""drop table if exists rawPhot""") # drop the table if is already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
```
Let's unpack everything that happened in these two commands. First - if the table `rawPhot` already exists, we drop it to start over from scratch. (This is useful here, but it should not be adopted as general practice.)
Second - we create the new table `rawPhot`, which has 6 columns: `id` - a running index for every row in the table, `objId` - an ID to identify which source the row belongs to, `t` - the time of observation in MJD, `pb` - the passband of the observation, `flux` the observation flux, and `dflux` the uncertainty on the flux measurement. In addition to naming the columns, we also must declare their type. We have declared `id` as the primary key, which means this value will automatically be assigned and incremented for all data inserted into the database. We have also declared `pb` as a variable character of length 1, which is more useful and restrictive than simply declaring `pb` as `text`, which allows any freeform string.
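The auto-increment behavior of the `integer primary key` column can be checked directly. Here is a small sketch with a trimmed-down, hypothetical version of the table: `id` is never supplied in the inserts, yet SQLite assigns it automatically.

```python
import sqlite3

# Toy version of rawPhot: 'id' is an integer primary key, so SQLite
# assigns and increments it for every row inserted without one.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""create table rawPhot(
                   id integer primary key,
                   objId int,
                   flux float)""")
cur.execute("insert into rawPhot(objId, flux) values (?, ?)", (1, 10.0))
cur.execute("insert into rawPhot(objId, flux) values (?, ?)", (1, 11.0))
cur.execute("select id from rawPhot order by id")
ids = [row[0] for row in cur.fetchall()]
print(ids)  # [1, 2]
conn.close()
```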
Now we need to insert the raw flux measurements into the database. To do so, we will use the `ANTARESlc` class that we created earlier. As an initial example, we will insert the first 3 observations from the source `FAKE001`.
```
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[0])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[1])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[2])))
```
There are two things to highlight above: (1) we do not specify an `id` for the data, as this is generated automatically, and (2) the data insertion happens via a tuple. In this case, we are taking advantage of the fact that Python tuples can be concatenated:

    (objId,) + tuple(lc.DFlc.iloc[0])
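The concatenation itself is plain Python; here is a minimal illustration with made-up photometry values (`t`, `pb`, `flux`, `dflux`) standing in for a real `DFlc` row.

```python
# Prepend a 1-tuple holding the object ID to a (hypothetical) photometry row.
objId = 10
row = (56789.1, 'g', 12.3, 0.4)   # t, pb, flux, dflux -- made-up values
record = (objId,) + row
print(record)  # (10, 56789.1, 'g', 12.3, 0.4)
```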
While the above example demonstrates the insertion of a single row to the database, it is far more efficient to bulk load the data. To do so we will delete, i.e. `DROP`, the rawPhot table and use some `pandas` manipulation to load the contents of an entire file at once via [`executemany`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany).
```
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
# next 3 lines are already in name space; repeated for clarity
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples
cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
```
**Problem 3b**
Load all of the raw photometric observations into the `rawPhot` table in the database.
*Hint* - you can use [`glob`](https://docs.python.org/3/library/glob.html) to select all of the files being loaded.
*Hint 2* - you have already loaded the data from `FAKE001` into the table.
```
import glob
filenames = glob.glob("training_set_for_LSST_DSFP/FAKE*.dat")
for filename in filenames[1:]:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    data = [(objId,) + tuple(x) for x in lc.DFlc.values]  # array of tuples
    cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
```
**Problem 3c**
To ensure the data have been loaded properly, select the $r'$ light curve for source `FAKE010` from the `rawPhot` table and plot the results. Does it match the plot from **1c**?
```
cur.execute("""select t, flux, dflux
from rawPhot
where objId = 10 and pb = 'r'""")
data = cur.fetchall()
data = np.array(data)
fig, ax = plt.subplots()
ax.errorbar(data[:,0], data[:,1], data[:,2], fmt = 'o', color = '#78A5A3')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
```
Now that we have loaded the raw observations, we need to create a new table to store summary statistics for each object. This table will include everything we've added to the `ANTARESlc` class.
```
cur.execute("""drop table if exists lcFeats""") # drop the table if it already exists
cur.execute("""create table lcFeats(
id integer primary key,
objId int,
gStd float,
rStd float,
iStd float,
zStd float,
gAmp float,
rAmp float,
iAmp float,
zAmp float,
gMAD float,
rMAD float,
iMAD float,
zMAD float,
gBeyond float,
rBeyond float,
iBeyond float,
zBeyond float,
gSkew float,
rSkew float,
iSkew float,
zSkew float,
gMinusR float,
rMinusI float,
iMinusZ float,
FOREIGN KEY(objId) REFERENCES rawPhot(objId)
)
""")
```
This procedure should look familiar, with one exception: the addition of the `foreign key` in the `lcFeats` table. The inclusion of the `foreign key` ensures a connected relationship between `rawPhot` and `lcFeats`. In brief, a row cannot be inserted into `lcFeats` unless a corresponding row, i.e. `objId`, exists in `rawPhot`. Additionally, rows in `rawPhot` cannot be deleted if there are dependent rows in `lcFeats`.
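The foreign-key behavior can be demonstrated with two toy tables (not the notebook's actual schema). One caveat worth knowing: the `sqlite3` module records foreign-key constraints but does not enforce them unless the `foreign_keys` pragma is turned on.

```python
import sqlite3

# Foreign keys must be explicitly enabled in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("pragma foreign_keys = ON")
cur = conn.cursor()
cur.execute("create table parent(objId int primary key)")
cur.execute("""create table child(objId int,
                   foreign key(objId) references parent(objId))""")
cur.execute("insert into parent(objId) values (1)")
cur.execute("insert into child(objId) values (1)")   # fine: parent row exists
rejected = False
try:
    cur.execute("insert into child(objId) values (2)")  # no matching parent
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
conn.close()
```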
**Problem 3d**
Calculate features for every source in `rawPhot` and insert those features into the `lcFeats` table.
```
for filename in filenames:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    lc.filter_flux()
    lc.weighted_mean_flux()
    lc.normalized_flux_std()
    lc.normalized_amplitude()
    lc.normalized_MAD()
    lc.normalized_beyond_1std()
    lc.skew()
    lc.mean_colors()
    feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd,
             lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp,
             lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD,
             lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
             lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,
             lc.gMinusR, lc.rMinusI, lc.iMinusZ)
    cur.execute("""insert into lcFeats(objId,
                   gStd, rStd, iStd, zStd,
                   gAmp, rAmp, iAmp, zAmp,
                   gMAD, rMAD, iMAD, zMAD,
                   gBeyond, rBeyond, iBeyond, zBeyond,
                   gSkew, rSkew, iSkew, zSkew,
                   gMinusR, rMinusI, iMinusZ) values {}""".format(feats))
```
**Problem 3e**
Confirm that the data loaded correctly by counting the number of sources with `gAmp` > 2.
How many sources have `gMinusR` = -999?
*Hint* - you should find 9 and 2, respectively.
```
cur.execute("""select count(*) from lcFeats where gAmp > 2""")
nAmp2 = cur.fetchone()[0]
cur.execute("""select count(*) from lcFeats where gMinusR = -999""")
nNoColor = cur.fetchone()[0]
print("There are {:d} sources with gAmp > 2".format(nAmp2))
print("There are {:d} sources with no measured i' - z' color".format(nNoColor))
```
Finally, we close by committing the changes we made to the database.
Note that this step matters: if the connection were closed without a commit, the pending inserts above would be rolled back and lost.
```
conn.commit()
```
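This behavior is easy to verify with a small standalone sketch using a throwaway database file (not `miniANTARES.db`): an insert that is never committed does not survive the connection being closed.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "commit_demo.db")

conn = sqlite3.connect(db_path)
conn.execute("create table t(x int)")
conn.commit()                         # the table itself is committed
conn.execute("insert into t values (1)")
conn.close()                          # closed WITHOUT commit -> insert is lost

conn = sqlite3.connect(db_path)
n_rows = conn.execute("select count(*) from t").fetchone()[0]
print(n_rows)  # 0
conn.close()
```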
**mini Challenge Problem**
If there is less than 45 min to go, please skip this part.
Earlier it was claimed that bulk loading the data is faster than loading it line by line. For this problem, prove this assertion: use `%%timeit` to "profile" the two different options (bulk loading with `executemany` versus loading one photometric measurement at a time via a for loop).
*Hint* - to avoid corrupting your current working database, `miniANTARES.db`, create a new temporary database for the purpose of running this test. Also be careful with the names of your connection and cursor variables.
```
%%timeit
# bulk load solution
tmp_conn = sqlite3.connect("tmp1.db")
tmp_cur = tmp_conn.cursor()
tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
for filename in filenames:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    data = [(objId,) + tuple(x) for x in lc.DFlc.values]  # array of tuples
    tmp_cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
%%timeit
# line-by-line load solution
tmp_conn = sqlite3.connect("tmp2.db")
tmp_cur = tmp_conn.cursor()
tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
for filename in filenames:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    for obs in lc.DFlc.values:
        tmp_cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(obs)))
```
## Problem 4) Build a Classification Model
One of the primary goals for ANTARES is to separate the wheat from the chaff; in other words, given that ~10 million alerts will be issued by LSST on a nightly basis, what is the single (or 10, or 100) most interesting alert?
Here we will build on the skills developed during the DSFP Session 2 to construct a machine-learning model to classify new light curves.
Fortunately - the data that has already been loaded to miniANTARES.db is a suitable training set for the classifier (we simply haven't provided you with labels just yet). Execute the cell below to add a new table to the database which includes the appropriate labels.
```
cur.execute("""drop table if exists lcLabels""") # drop the table if it already exists
cur.execute("""create table lcLabels(
objId int,
label int,
foreign key(objId) references rawPhot(objId)
)""")
labels = np.zeros(100)
labels[20:60] = 1
labels[60:] = 2
data = np.append(np.arange(1,101)[np.newaxis].T, labels[np.newaxis].T, axis = 1)
tup_data = [tuple(x) for x in data]
cur.executemany("""insert into lcLabels(objId, label) values (?,?)""", tup_data)
```
For now - don't worry about what the labels mean (though if you inspect the light curves you may be able to figure this out...)
**Problem 4a**
Query the database to select features and labels for the light curves in your training set. Store the results of these queries in `numpy` arrays, `X` and `y`, respectively, which are suitable for the various `scikit-learn` machine learning algorithms.
*Hint* - recall that databases do not store ordered results.
*Hint 2* - recall that `scikit-learn` expects `y` to be a 1d array. You will likely need to convert a 2d array to 1d.
```
cur.execute("""select label
from lcLabels
order by objId asc""")
y = np.array(cur.fetchall()).ravel()
cur.execute("""select gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ
from lcFeats
order by objId asc""")
X = np.array(cur.fetchall())
```
**Problem 4b**
Train a SVM model ([`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) in `scikit-learn`) using a radial basis function (RBF) kernel with penalty parameter, $C = 1$, and kernel coefficient, $\gamma = 0.1$.
Evaluate the accuracy of the model via $k = 5$ fold cross validation.
*Hint* - you may find the [`cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score) module helpful.
```
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(SVC(C = 1.0, gamma = 0.1, kernel = 'rbf'), X, y, cv = 5)
print("The SVM model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))
```
The SVM model does a decent job of classifying the data. However, we are going to have 10 million alerts every night, so we need something that runs quickly. For most ML models the training step is slow, while predictions are (relatively) fast.
**Problem 4c**
Pick any other [classification model from `scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html), and "profile" the time it takes to train that model vs. the time it takes to train an SVM model.
Is the model that you have selected faster than SVM?
*Hint* - you should import the model outside your timing loop as we only care about the training step in this case.
```
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier()
svm_clf = SVC(C = 1.0, gamma = 0.1, kernel = 'rbf')
%%timeit
# timing solution for RF model
rf_clf.fit(X,y)
%%timeit
# timing solution for SVM model
svm_clf.fit(X,y)
```
**Problem 4d**
Does the model you selected perform better than the SVM model? Perform a $k = 5$ fold cross validation to determine which model provides superior accuracy.
```
cv_scores = cross_val_score(RandomForestClassifier(), X, y, cv = 5)
print("The RF model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))
```
**Problem 4e**
Which model are you going to use in your miniANTARES? Justify your answer.
*Write solution to **4e** here*
In this case we are going to adopt the SVM model, as it is roughly 20 times faster than RF while providing nearly identical accuracy.
## Problem 5) Class Predictions for New Sources
Now that we have developed a basic infrastructure for dealing with streaming data, we may reap the rewards of our efforts. We will use our ANTARES-like software to classify newly observed sources.
**Problem 5a**
Load the light curves for the new observations (found in `test_set_for_LSST_DSFP`) into a table in the database.
*Hint* - ultimately it doesn't matter much one way or another, but you may choose to keep new observations in a table separate from the training data. I'm putting them into a new `testPhot` table. Up to you.
```
cur.execute("""drop table if exists testPhot""") # drop the table if is already exists
cur.execute("""create table testPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
cur.execute("""drop table if exists testFeats""") # drop the table if it already exists
cur.execute("""create table testFeats(
id integer primary key,
objId int,
gStd float,
rStd float,
iStd float,
zStd float,
gAmp float,
rAmp float,
iAmp float,
zAmp float,
gMAD float,
rMAD float,
iMAD float,
zMAD float,
gBeyond float,
rBeyond float,
iBeyond float,
zBeyond float,
gSkew float,
rSkew float,
iSkew float,
zSkew float,
gMinusR float,
rMinusI float,
iMinusZ float,
FOREIGN KEY(objId) REFERENCES testPhot(objId)
)
""")
new_obs_filenames = glob.glob("test_set_for_LSST_DSFP/FAKE*.dat")
for filename in new_obs_filenames:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    data = [(objId,) + tuple(x) for x in lc.DFlc.values]  # array of tuples
    cur.executemany("""insert into testPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
```
**Problem 5b**
Calculate features for the new observations and insert those features into a database table.
*Hint* - again, you may want to create a new table for this, up to you. I'm using the `testFeats` table.
```
for filename in new_obs_filenames:
    lc = ANTARESlc(filename)
    # simple HACK to get rid of data with too few observations
    # (fails because std is nan with just one observation)
    if len(lc.DFlc) <= 14:
        continue
    objId = int(filename.split('FAKE')[1].split(".dat")[0])
    lc.filter_flux()
    lc.weighted_mean_flux()
    lc.normalized_flux_std()
    lc.normalized_amplitude()
    lc.normalized_MAD()
    lc.normalized_beyond_1std()
    lc.skew()
    lc.mean_colors()
    feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd,
             lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp,
             lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD,
             lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
             lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,
             lc.gMinusR, lc.rMinusI, lc.iMinusZ)
    cur.execute("""insert into testFeats(objId,
                   gStd, rStd, iStd, zStd,
                   gAmp, rAmp, iAmp, zAmp,
                   gMAD, rMAD, iMAD, zMAD,
                   gBeyond, rBeyond, iBeyond, zBeyond,
                   gSkew, rSkew, iSkew, zSkew,
                   gMinusR, rMinusI, iMinusZ) values {}""".format(feats))
```
**Problem 5c**
Train the model that you adopted in **4e** on the training set, and produce predictions for the newly observed sources.
What is the class distribution for the newly detected sources?
*Hint* - the training set was constructed to have a nearly uniform class distribution, that may not be the case for the actual observed distribution of sources.
```
svm_clf = SVC(C=1.0, gamma = 0.1, kernel = 'rbf').fit(X, y)
cur.execute("""select gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ
from testFeats
order by objId asc""")
X_new = np.array(cur.fetchall())
y_preds = svm_clf.predict(X_new)
print("""There are {:d}, {:d}, and {:d} sources
in classes 1, 2, 3, respectively""".format(*list(np.bincount(y_preds)))) # be careful using bincount
```
**Problem 5d**
ANOTHER PROBLEM HERE INVESTIGATING THE DATA IN SOME WAY - LIGHT CURVE PLOTS OR SOMETHING, BUT NEED REAL DATA FIRST.
## Problem 6) Anomaly Detection
As we learned earlier - one of the primary goals of ANTARES is to reduce the stream of 10 million alerts on any given night to the single (or 10, or 100) most interesting objects. One possible definition of "interesting" is rarity - in which case it would be useful to add some form of anomaly detection to the pipeline. `scikit-learn` has [several different algorithms](http://scikit-learn.org/stable/auto_examples/covariance/plot_outlier_detection.html#sphx-glr-auto-examples-covariance-plot-outlier-detection-py) that can be used for anomaly detection. Here we will employ [isolation forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html), which has many parallels to the random forests we learned about previously.
In brief, isolation forest builds an ensemble of decision trees where the splitting parameter in each node of the tree is selected randomly. In each tree the number of branches necessary to isolate each source is measured - outlier sources will, on average, require fewer splittings to be isolated than sources in high-density regions of the feature space. Averaging the number of branchings over many trees results in a relative ranking of the anomalousness (*yes, I just made up a word*) of each source.
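The isolation idea can be sketched on synthetic data (a toy example, not part of the notebook's dataset): points clustered near the origin plus one extreme planted outlier, where the outlier should receive the lowest anomaly score.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# 100 points drawn from N(0, 1) in 2-d, plus one planted outlier at index 100.
rng = np.random.RandomState(42)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[25.0, 25.0]]])

clf = IsolationForest(n_estimators=100, random_state=42).fit(X)
scores = clf.decision_function(X)    # lower scores = more anomalous
print(int(np.argmin(scores)))        # index of the planted outlier
```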
**Problem 6a**
Using [`IsolationForest`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) in `sklearn.ensemble` - determine the 10 most isolated sources in the data set.
*Hint* - for `IsolationForest` you will want to use the `decision_function()` method rather than `predict_proba()`, which is what we have previously used with `sklearn.ensemble` models to get relative rankings from the model.
```
from sklearn.ensemble import IsolationForest
isoF_clf = IsolationForest(n_estimators = 100)
isoF_clf.fit(X_new)
anomaly_score = isoF_clf.decision_function(X_new)
print("The 10 most anomalous sources are: {}".format(np.arange(1,5001)[np.argsort(anomaly_score)[:10]]))
```
**Problem 6b**
Plot the light curves of the 2 most anomalous sources.
Can you identify why these sources have been selected as outliers?
```
lc485 = ANTARESlc("test_set_for_LSST_DSFP/FAKE00485.dat")
lc485.plot_multicolor_lc()
lc2030 = ANTARESlc("test_set_for_LSST_DSFP/FAKE02030.dat")
lc2030.plot_multicolor_lc()
```
*Write solution to **6b** here*
For source 485 - this looks like a supernova at intermediate redshift. What might be throwing it off is the outlier point; we never made our features very robust to outliers.
For source 2030 - This is a weird faint source with multiple unsynced rises and falls in different bands.
## Challenge Problem) Simulate a Real ANTARES
The problem that we just completed features a key difference from the true ANTARES system - namely, all the light curves analyzed had a complete set of observations loaded into the database. One of the key challenges for LSST (and by extension ANTARES) is that the data will be *streaming* - new observations will be available every night, but the full light curves for all sources won't be available until the 10 yr survey is complete. In this problem, you will use the same data to simulate an LSST-like classification problem.
Assume that your training set (i.e. the first 100 sources loaded into the database) were observed prior to LSST, thus, these light curves can still be used in their entirety to train your classification models. For the test set of observations, simulate LSST by determining the min and max observation date and take 1-d quantized steps through these light curves. On each day when there are new observations, update the feature calculations for every source that has been newly observed. Classify those sources and identify possible anomalies.
Here are some things you should think about as you build this software:
1. Should you use the entire light curves for training-set objects when classifying sources with only a few data points?
2. How are you going to handle objects on the first epoch when they are detected?
3. What threshold (if any) are you going to set to notify the community about rarities that you have discovered?
*Hint* - Since you will be reading these light curves from the database (and not from text files) the `ANTARESlc` class that we previously developed will not be useful. You will (likely) either need to re-write this class to interact with the database or figure out how to massage the query results to comply with the class definitions.
# TensorFlow Distributed Training & Distributed Inference
For use cases involving large datasets, particularly those where the data is images, it often is necessary to perform distributed training on a cluster of multiple machines. Similarly, when it is time to set up an inference workflow, it also may be necessary to perform highly performant batch inference using a cluster. In this notebook, we'll examine distributed training and distributed inference with TensorFlow in Amazon SageMaker.
The model used for this notebook is a basic Convolutional Neural Network (CNN) based on [the Keras examples](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). We'll train the CNN to classify images using the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), a well-known computer vision dataset. It consists of 60,000 32x32 images belonging to 10 different classes (6,000 images per class). Here is a graphic of the classes in the dataset, as well as 10 random images from each:

## Setup
We'll begin with some necessary imports, and get an Amazon SageMaker session to help perform certain tasks, as well as an IAM role with the necessary permissions.
```
%matplotlib inline
import numpy as np
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/DEMO-tf-horovod-inference'
print('Bucket:\n{}'.format(bucket))
```
Now we'll run a script that fetches the dataset and converts it to the TFRecord format, which provides several conveniences for training models in TensorFlow.
```
!python generate_cifar10_tfrecords.py --data-dir ./data
```
For Amazon SageMaker hosted training on a cluster separate from this notebook instance, training data must be stored in Amazon S3, so we'll upload the data to S3 now.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10-tf')
display(inputs)
```
Finally, we'll perform some setup that will be common to all of the training jobs in this notebook. During training, it will be helpful to review metric graphs about the training job, such as validation accuracy. If we provide a set of metric definitions, Amazon SageMaker will be able to get the metrics directly from the training job logs and send them to CloudWatch Metrics.
```
training_metric_definitions = [
{'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\.]+) - acc: [0-9\\.]+.*'},
{'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\.]+ - acc: ([0-9\\.]+).*'},
{'Name': 'validation:accuracy', 'Regex': '.*step - loss: [0-9\\.]+ - acc: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_acc: ([0-9\\.]+).*'},
{'Name': 'validation:loss', 'Regex': '.*step - loss: [0-9\\.]+ - acc: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_acc: [0-9\\.]+.*'},
{'Name': 'sec/steps', 'Regex': '.* - \d+s (\d+)[mu]s/step - loss: [0-9\\.]+ - acc: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_acc: [0-9\\.]+'}
]
```
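Each metric definition is an ordinary regular expression with one capture group. As a quick check, here is how the validation-accuracy pattern above would pull a value out of a made-up Keras log line (the log line is hypothetical, constructed only to match the expected format):

```python
import re

# A hypothetical Keras-style log line of the kind the regexes above target.
log_line = ("100/100 - 12s 120ms/step - loss: 0.8765 - acc: 0.6912"
            " - val_loss: 0.9123 - val_acc: 0.6541")
pattern = r'.*step - loss: [0-9\.]+ - acc: [0-9\.]+ - val_loss: [0-9\.]+ - val_acc: ([0-9\.]+).*'
val_acc = re.match(pattern, log_line).group(1)
print(val_acc)  # 0.6541
```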
## (Optional) Hosted Training on a single machine
Amazon SageMaker provides a variety of model training options: besides training on a single machine, training can be done on a cluster of multiple machines using either parameter servers or Ring-AllReduce with Horovod. We'll begin by training a model on a single machine. This will be followed by distributed training on multiple machines to allow comparison; however if you prefer you may skip ahead to the **Distributed training** section of this notebook.
Initially we'll set up an Amazon SageMaker TensorFlow Estimator object with the details of the training job, such as type and number of instances, hyperparameters, etc.
```
from sagemaker.tensorflow import TensorFlow
single_machine_instance_type = 'ml.p3.2xlarge'
hyperparameters = {'epochs': 60, 'batch-size' : 256}
estimator_single = TensorFlow(base_job_name='cifar10-tf',
entry_point='train.py',
role=role,
framework_version='1.12.0',
py_version='py3',
hyperparameters=hyperparameters,
train_instance_count=1,
train_instance_type=single_machine_instance_type,
tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'file'}],
metric_definitions=training_metric_definitions)
```
Now we can call the `fit` method of the Estimator object to start training. During training, you can view the metrics we set up above by going to the Amazon SageMaker console, clicking the **Training jobs** link in the left panel, clicking the job name, then scrolling down to the **Monitor** section to view the metric graphs.
```
remote_inputs = {'train' : inputs+'/train', 'validation' : inputs+'/validation', 'eval' : inputs+'/eval'}
estimator_single.fit(remote_inputs, wait=True)
```
Sometimes it makes sense to perform training on a single machine. For large datasets, however, it may be necessary to perform distributed training on a cluster of multiple machines. In fact, it may be not only faster but cheaper to do distributed training on several machines rather than one machine. Fortunately, Amazon SageMaker makes it easy to run distributed training without having to manage cluster setup and tear down.
## Distributed training with Horovod
Horovod is an open source distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. It is an alternative to the more "traditional" parameter server method of performing distributed training. In Amazon SageMaker, Horovod is only available with TensorFlow version 1.12 or newer. Only a few lines of code are necessary to use Horovod for distributed training of a Keras model defined by the tf.keras API. For details, see the `train.py` script included with this notebook; the changes primarily relate to:
- importing Horovod.
- initializing Horovod.
- configuring GPU options and setting a Keras/tf.session with those options.
Once we have a training script, the next step is to set up an Amazon SageMaker TensorFlow Estimator object with the details of the training job. It is very similar to an Estimator for training on a single machine, except we specify a `distributions` parameter describing Horovod attributes such as the number of process per host, which is set here to the number of GPUs per machine. Beyond these few simple parameters and the few lines of code in the training script, there is nothing else you need to do to use distributed training with Horovod; Amazon SageMaker handles the heavy lifting for you and manages the underlying cluster setup.
```
from sagemaker.tensorflow import TensorFlow
hvd_instance_type = 'ml.p3.8xlarge'
hvd_processes_per_host = 4
hvd_instance_count = 2
distributions = {'mpi': {
'enabled': True,
'processes_per_host': hvd_processes_per_host
}
}
hyperparameters = {'epochs': 60, 'batch-size' : 256}
estimator_dist = TensorFlow(base_job_name='dist-cifar10-tf',
entry_point='train.py',
role=role,
framework_version='1.12.0',
py_version='py3',
hyperparameters=hyperparameters,
train_instance_count=hvd_instance_count,
train_instance_type=hvd_instance_type,
tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'dist'}],
metric_definitions=training_metric_definitions,
distributions=distributions)
remote_inputs = {'train' : inputs+'/train', 'validation' : inputs+'/validation', 'eval' : inputs+'/eval'}
estimator_dist.fit(remote_inputs, wait=True)
```
## Model Deployment with Amazon Elastic Inference
Amazon SageMaker provides both real time inference and batch inference. Although we will focus on batch inference below, let's start with a quick overview of setting up an Amazon SageMaker hosted endpoint for real time inference with TensorFlow Serving and image data. The processes for setting up hosted endpoints and Batch Transform jobs have significant differences. Additionally, we will discuss why and how to use Amazon Elastic Inference with the hosted endpoint.
### Deploying the Model
When considering the overall cost of a machine learning workload, inference often is the largest part, up to 90% of the total. If a GPU instance type is used for real time inference, it typically is not fully utilized because, unlike training, real time inference does not involve continuously inputting large batches of data to the model. Elastic Inference provides GPU acceleration suited for inference, allowing you to add inference acceleration to a hosted endpoint for a fraction of the cost of using a full GPU instance.
The `deploy` method of the Estimator object creates an endpoint which serves prediction requests in near real time. To utilize Elastic Inference with the SageMaker TensorFlow Serving container, simply provide an `accelerator_type` parameter, which determines the type of accelerator that is attached to your endpoint. Refer to the **Inference Acceleration** section of the [instance types chart](https://aws.amazon.com/sagemaker/pricing/instance-types) for a listing of the supported types of accelerators.
Here we'll use a general purpose CPU compute instance type along with an Elastic Inference accelerator: together they are much cheaper than the smallest P3 GPU instance type.
```
predictor = estimator_dist.deploy(initial_instance_count=1,
instance_type='ml.m5.xlarge',
accelerator_type='ml.eia1.medium')
```
### Real time inference
Now that we have a Predictor object wrapping a real time Amazon SageMaker hosted endpoint, we'll define the label names and look at a sample of 10 images, one from each class.
```
from IPython.display import Image, display
labels = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
images = []
for entry in os.scandir('sample-img'):
if entry.is_file() and entry.name.endswith("png"):
images.append('sample-img/' + entry.name)
for image in images:
display(Image(image))
```
Next we'll set up the Predictor object created by the `deploy` method call above. The TensorFlow Serving container in Amazon SageMaker uses the REST API, which requires requests in a specific JSON format.
```
import PIL
from sagemaker.predictor import json_serializer, json_deserializer
# TensorFlow Serving's request and response format is JSON
predictor.accept = 'application/json'
predictor.content_type = 'application/json'
predictor.serializer = json_serializer
predictor.deserializer = json_deserializer
def get_prediction(file_path):
image = PIL.Image.open(file_path)
to_numpy_list = np.asarray(image).astype(float)
instance = np.expand_dims(to_numpy_list, axis=0)
data = {'instances': instance.tolist()}
return labels[np.argmax(predictor.predict(data)['predictions'], axis=1)[0]]
predictions = [get_prediction(image) for image in images]
print(predictions)
```
## Batch Transform with Inference Pipelines
If a use case does not require individual predictions in near real-time, an Amazon SageMaker Batch Transform job is likely a better alternative. Although hosted endpoints also can be used for pseudo-batch prediction, the process is more involved than using the alternative Batch Transform, which is designed for large-scale, asynchronous batch inference.
A typical problem in working with batch inference is how to convert data into tensors that can be input to the model. For example, image data in .png or .jpg format cannot be directly input to a model, but rather must be converted first. Batch Transform provides facilities for doing this efficiently.
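Before building the container, it helps to see exactly what payload the transformer must emit. The sketch below is a standalone illustration, not the actual Flask app shipped in `./image-transformer-container/`: it shows the conversion from a .png file to the `instances` JSON body that TensorFlow Serving's REST API accepts.

```python
import json
import numpy as np
from PIL import Image

def png_to_tfserving_json(file_path):
    """Convert a .png image into the JSON body TensorFlow Serving expects.

    Decode the image, cast to float, add a batch dimension, and wrap the
    nested list in an 'instances' key -- the same steps the transformer
    container performs on each incoming payload.
    """
    image = np.asarray(Image.open(file_path)).astype(float)
    batch = np.expand_dims(image, axis=0)  # shape (1, H, W, C)
    return json.dumps({"instances": batch.tolist()})
```

The resulting string can be POSTed directly to the TensorFlow Serving container's `predict` endpoint.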
### Build a container for transforming image input
As mentioned above, the TensorFlow Serving container in Amazon SageMaker uses the REST API to serve prediction requests. This requires the input data to be converted to JSON format. One way to do this is to create a container to do the conversion, then create an overall Amazon SageMaker model that links the conversion container to the TensorFlow Serving container with the model.
In the next step, we'll create a container to transform payloads in .png image format into JSON objects that can be forwarded to the TensorFlow Serving container. To do this, we've created a simple Python Flask app that does the transformation; the code for this container is available in the `./image-transformer-container/` directory. First, we'll build a Docker image for the container:
```
!docker build -t image-transformer ./image-transformer-container/
```
### Push container to ECR
Next, we'll push the Docker image to an ECR repository in your account. In order to push the container to ECR, the execution role attached to this notebook should have permissions to create a repository, set a repository policy, and upload a Docker image.
```
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
region = boto3.session.Session().region_name
ecr_repository = 'image-transformer'
tag = ':latest'
transformer_repository_uri = '{}.dkr.ecr.{}.amazonaws.com/{}'.format(account_id, region, ecr_repository + tag)
# docker login
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
# create ecr repository
!aws ecr create-repository --repository-name $ecr_repository
# attach policy allowing sagemaker to pull this image
!aws ecr set-repository-policy --repository-name $ecr_repository --policy-text "$( cat ./image-transformer-container/ecr_policy.json )"
!docker tag {ecr_repository + tag} $transformer_repository_uri
!docker push $transformer_repository_uri
```
### Create a Model with an Inference Pipeline
Now that we have two separate containers for transforming the data and serving predictions, we'll create an Amazon SageMaker Model with the two containers chained together (image transformer -> TensorFlow Serving). The Model conveniently packages together the functionality required for an Inference Pipeline.
```
from sagemaker.tensorflow.serving import Model
from time import gmtime, strftime
client = boto3.client('sagemaker')
model_name = "image-to-tfserving-{}".format(strftime("%d-%H-%M-%S", gmtime()))
transform_container = {
"Image": transformer_repository_uri
}
estimator = estimator_dist
tf_serving_model = Model(model_data=estimator.model_data,
role=sagemaker.get_execution_role(),
image=estimator.image_name,
framework_version=estimator.framework_version,
sagemaker_session=estimator.sagemaker_session)
batch_instance_type = 'ml.p3.2xlarge'
tf_serving_container = tf_serving_model.prepare_container_def(batch_instance_type)
model_params = {
"ModelName": model_name,
"Containers": [
transform_container,
tf_serving_container
],
"ExecutionRoleArn": sagemaker.get_execution_role()
}
client.create_model(**model_params)
```
### Run a Batch Transform job
Next, we'll run a Batch Transform job. More specifically, we'll perform distributed inference on a cluster of two instances. As an additional optimization, we'll set the `max_concurrent_transforms` parameter of the Transformer object, which controls the maximum number of parallel requests that can be sent to each instance in a transform job.
```
input_data_path = 's3://sagemaker-sample-data-{}/tensorflow/cifar10/images/png'.format(sagemaker_session.boto_region_name)
output_data_path = 's3://{}/{}/{}'.format(bucket, prefix, 'batch-predictions')
batch_instance_count = 2
concurrency = 100
transformer = sagemaker.transformer.Transformer(
model_name = model_name,
instance_count = batch_instance_count,
instance_type = batch_instance_type,
max_concurrent_transforms = concurrency,
strategy = 'MultiRecord',
output_path = output_data_path,
assemble_with= 'Line',
base_transform_job_name='cifar-10-image-transform',
sagemaker_session=sagemaker_session,
)
transformer.transform(data = input_data_path, content_type = 'application/x-image')
transformer.wait()
```
### Inspect Batch Transform output
Finally, we can inspect the output files of our Batch Transform job to see the predictions. First we'll download the prediction files locally, then extract the predictions from them.
```
!aws s3 cp --quiet --recursive $transformer.output_path ./batch_predictions
import json
import re
total = 0
correct = 0
predicted = []
actual = []
for entry in os.scandir('batch_predictions'):
try:
if entry.is_file() and entry.name.endswith("out"):
with open(entry, 'r') as f:
jstr = json.load(f)
results = [float('%.3f'%(item)) for sublist in jstr['predictions'] for item in sublist]
class_index = np.argmax(np.array(results))
predicted_label = labels[class_index]
predicted.append(predicted_label)
actual_label = re.search('([a-zA-Z]+).png.out', entry.name).group(1)
actual.append(actual_label)
is_correct = predicted_label in entry.name
if is_correct:
correct += 1
total += 1
except Exception as e:
print(e)
continue
```
Let's calculate the accuracy of the predictions.
```
print('Out of {} total images, accurate predictions were returned for {}'.format(total, correct))
accuracy = correct / total
print('Accuracy is {:.1%}'.format(accuracy))
```
The accuracy from the batch transform job on 10000 test images never seen during training is fairly close to the accuracy achieved during training on the validation set. This is an indication that the model is not overfitting and should generalize fairly well to other unseen data.
Next we'll plot a confusion matrix, which is a tool for visualizing the performance of a multiclass model. It has entries for all possible combinations of correct and incorrect predictions, and shows how often each one was made by our model. Ours will be row-normalized: each row sums to one, so that entries along the diagonal correspond to recall.
```
import pandas as pd
import seaborn as sns
confusion_matrix = pd.crosstab(pd.Series(actual), pd.Series(predicted), rownames=['Actuals'], colnames=['Predictions'], normalize='index')
sns.heatmap(confusion_matrix, annot=True, fmt='.2f', cmap="YlGnBu").set_title('Confusion Matrix')
```
If our model had 100% accuracy, and therefore 100% recall in every class, then all of the predictions would fall along the diagonal of the confusion matrix. Here our model definitely is not 100% accurate, but manages to achieve good recall for most of the classes, though it performs worse for some classes, such as cats.
# Extensions
Although we did not demonstrate them in this notebook, Amazon SageMaker provides additional ways to make distributed training more efficient for very large datasets:
- **VPC training**: performing Horovod training inside a VPC improves the network latency between nodes, leading to higher performance and stability of Horovod training jobs.
- **Pipe Mode**: using [Pipe Mode](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html#your-algorithms-training-algo-running-container-inputdataconfig) reduces startup and training times. Pipe Mode streams training data from S3 as a Linux FIFO directly to the algorithm, without saving to disk. For a small dataset such as CIFAR-10, Pipe Mode does not provide any advantage, but for very large datasets where training is I/O bound rather than CPU/GPU bound, Pipe Mode can substantially reduce startup and training times.
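Both extensions are switched on through Estimator parameters rather than training-script changes. As a sketch (parameter names follow the SageMaker Python SDK v1 used throughout this notebook; the subnet and security-group IDs are hypothetical placeholders for your own VPC resources):

```python
# Keyword arguments matching the Horovod Estimator defined earlier.
base_kwargs = {
    'entry_point': 'train.py',
    'framework_version': '1.12.0',
    'py_version': 'py3',
    'train_instance_count': 2,
    'train_instance_type': 'ml.p3.8xlarge',
    'distributions': {'mpi': {'enabled': True, 'processes_per_host': 4}},
}

# Additions for Pipe Mode and VPC training; merge them into the Estimator
# call, e.g. TensorFlow(role=role, **vpc_pipe_kwargs).
vpc_pipe_kwargs = dict(base_kwargs,
                       input_mode='Pipe',                   # stream S3 data as a Linux FIFO
                       subnets=['subnet-0abc1234'],         # hypothetical VPC subnet
                       security_group_ids=['sg-0abc1234'])  # hypothetical security group
```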
# Cleanup
To avoid incurring charges due to a stray endpoint, delete the Amazon SageMaker endpoint if you no longer need it:
```
sagemaker_session.delete_endpoint(predictor.endpoint)
```
<img src="./pictures/DroneApp_logo.png" style="float:right; max-width: 180px; display: inline" alt="INSA" />
<img src="./pictures/logo_sizinglab.png" style="float:right; max-width: 100px; display: inline" alt="INSA" />
## Algorithm B: After reduction of active constraints using the MP1 principle
The **scipy** and **math** packages are used in this notebook to illustrate Python's optimization algorithms.
**Ipywidgets** is used here to give an interactive visualization of the results.
```
import scipy
import scipy.optimize
from math import pi
from math import sqrt
import math
import timeit
import time
import numpy as np
import ipywidgets as widgets
from ipywidgets import interactive
from IPython.display import display
import pandas as pd
```
The global sizing procedure of the multirotor drone follows this XDSM diagram:
*Sizing procedure for multi-rotor drone after monotonicity analysis.*

The following critical constraints are detected and removed:
$g_{i}(M_{total}^+,k_{frame}^+,L_{arm}^+,D_{out}^-)\text{ critical w.r.t } D_{out} \rightarrow D_{out}=\sqrt[3]{\frac{32\cdot F_{max,arm} \cdot L_{arm}}{\pi \cdot \sigma_{max} \cdot (1-(D_{in}/D_{out})^4)}}$
$g_{j}(M_{total}^+,nD^-,\beta^-,L_{arm}^-)\text{ critical w.r.t } L_{arm} \rightarrow L_{arm}=\frac{D_{pro}}{2\cdot \sin(\frac{\pi}{N_{arm}})}$
They are transformed to equalities in the sizing code, and the design variables $L_{arm}$ and $D_{out}$ are removed.
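To make the substitutions concrete, here is a small numerical sketch of the two closed-form expressions, using illustrative placeholder values (not the specifications defined later in this notebook):

```python
from math import pi, sin

def arm_length(D_pro, N_arm):
    # L_arm = D_pro / (2*sin(pi/N_arm)): arms just long enough
    # that adjacent propellers do not overlap
    return D_pro / (2 * sin(pi / N_arm))

def outer_diameter(F_max_arm, L_arm, sigma_max, D_ratio):
    # D_out from the bending-stress constraint turned into an equality
    return (32 * F_max_arm * L_arm
            / (pi * sigma_max * (1 - D_ratio**4))) ** (1 / 3)

# Illustrative values: 8 arms, 0.35 m propellers, 100 N max thrust per arm,
# 70 MPa allowable stress, hollow tube with D_in/D_out = 0.9
L_arm = arm_length(0.35, 8)
D_out = outer_diameter(100.0, L_arm, 70e6, 0.9)
```

Both helper functions mirror the `Lbra` and `Dout` lines of the sizing code below.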
#### 2.- Problem Definition
```
# Specifications
# Load
M_load=4 # [kg] load mass
# Autonomy
t_h=18 # [min] time of hover flight
k_maxthrust=3. # Ratio max thrust-hover
# Architecture of the multi-rotor drone (4,6, 8 arms, ...)
Narm=8 # [-] number of arms
Np_arm=1 # [-] number of propeller per arm (1 or 2)
Npro=Np_arm*Narm # [-] Propellers number
# Motor Architecture
Mod=0 # Choose between 0 for 'Direct Drive' or 1 for 'Gear Drive'
#Maximum climb speed
V_cl=6 # [m/s] max climb speed
CD= 1.3 #[] drag coef
A_top=0.09 #[m^2] top surface. For a quadcopter: Atop=1/2*Lb^2+3*pi*Dpro^2/4
# Propeller characteristics
NDmax= 105000/60*.0254# [Hz.m] max speed limit (N.D max)
# Air properties
rho_air=1.18 # [kg/m^3] air density
# MTOW
MTOW = 360. # [kg] maximal mass
# Objective
MaxTime=False # False: minimize total mass; True: maximize hover time
```
#### 3.- Sizing Code
```
# -----------------------
# sizing code
# -----------------------
# inputs:
# - param: optimisation variables vector (reduction ratio, oversizing coefficient)
# - arg: selection of output
# output:
# - objective if arg='Obj', problem characteristics if arg='Prt', constraints other else
def SizingCode(param, arg):
# Design variables
# ---
Mtotal=param[0] # initial mass [kg]
ND=param[1] # rotational speed per diameter [Hz.m]
Tmot=param[2] # Nominal motor torque [Nm]
Ktmot=param[3] # Motor constant [Nm/A]
P_esc=param[4] # ESC power [W]
V_bat=param[5] # Battery voltage [V]
C_bat=param[6] # Battery capacity [A.s]
beta=param[7] # pitch/diameter ratio of the propeller
J=param[8] # advance ratio [-]
D_ratio=param[9] # Ratio inner-to-outer arm diameter [-]
if Mod==1:
Nred=param[10] # Reduction Ratio [-]
# Hover, Climbing & Take-Off thrust
# ---
F_pro_hover=Mtotal*(9.81)/Npro # [N] Thrust per propeller for hover
F_pro_to=F_pro_hover*k_maxthrust # [N] Max Thrust per propeller
F_pro_cl=(Mtotal*9.81+0.5*rho_air*CD*A_top*V_cl**2)/Npro # [N] Thrust per propeller for climbing
# Propeller characteristics
# Ref : APC static
C_t_sta=4.27e-02 + 1.44e-01 * beta # Thrust coef with T=C_T.rho.n^2.D^4
C_p_sta=-1.48e-03 + 9.72e-02 * beta # Power coef with P=C_p.rho.n^3.D^5
Dpro_ref=11*.0254 # [m] diameter
Mpro_ref=0.53*0.0283 # [kg] mass
# Ref: APC dynamics
C_t_dyn=0.02791-0.06543*J+0.11867*beta+0.27334*beta**2-0.28852*beta**3+0.02104*J**3-0.23504*J**2+0.18677*beta*J**2 # thrust coef for APC props in dynamics
C_p_dyn=0.01813-0.06218*beta+0.00343*J+0.35712*beta**2-0.23774*beta**3+0.07549*beta*J-0.1235*J**2 # power coef for APC props in dynamics
#Choice of diameter and rotational speed from a maximum thrust
Dpro=(F_pro_to/(C_t_sta*rho_air*ND**2))**0.5 # [m] Propeller diameter
n_pro_to=ND/Dpro # [Hz] Propeller speed
n_pro_cl=sqrt(F_pro_cl/(C_t_dyn*rho_air*Dpro**4)) # [Hz] climbing speed
# Propeller selection with take-off scenario
Wpro_to=n_pro_to*2*3.14 # [rad/s] Propeller speed
Mpro=Mpro_ref*(Dpro/Dpro_ref)**3 # [kg] Propeller mass
Ppro_to=C_p_sta*rho_air*n_pro_to**3*Dpro**5# [W] Power per propeller
Qpro_to=Ppro_to/Wpro_to # [N.m] Propeller torque
# Propeller torque& speed for hover
n_pro_hover=sqrt(F_pro_hover/(C_t_sta*rho_air*Dpro**4)) # [Hz] hover speed
Wpro_hover=n_pro_hover*2*3.14 # [rad/s] Propeller speed
Ppro_hover=C_p_sta*rho_air*n_pro_hover**3*Dpro**5# [W] Power per propeller
Qpro_hover=Ppro_hover/Wpro_hover # [N.m] Propeller torque
#V_bat_est=k_vb*1.84*(Ppro_max)**(0.36) # [V] battery voltage estimation
#Propeller torque &speed for climbing
Wpro_cl=n_pro_cl*2*3.14 # [rad/s] Propeller speed for climbing
Ppro_cl=C_p_dyn*rho_air*n_pro_cl**3*Dpro**5# [W] Power per propeller for climbing
Qpro_cl=Ppro_cl/Wpro_cl # [N.m] Propeller torque for climbing
# Motor selection & scaling laws
# ---
# Motor reference sized from max thrust
# Ref : AXI 5325/16 GOLD LINE
Tmot_ref=2.32 # [N.m] rated torque
Tmot_max_ref=85/70*Tmot_ref # [N.m] max torque
Rmot_ref=0.03 # [Ohm] resistance
Mmot_ref=0.575 # [kg] mass
Ktmot_ref=0.03 # [N.m/A] torque coefficient
Tfmot_ref=0.03 # [N.m] friction torque (zero load, nominal speed)
#Motor speeds:
if Mod==1:
W_hover_motor=Wpro_hover*Nred # [rad/s] Nominal motor speed with reduction
W_cl_motor=Wpro_cl*Nred # [rad/s] Motor Climb speed with reduction
W_to_motor=Wpro_to*Nred # [rad/s] Motor take-off speed with reduction
else:
W_hover_motor=Wpro_hover # [rad/s] Nominal motor speed
W_cl_motor=Wpro_cl # [rad/s] Motor Climb speed
W_to_motor=Wpro_to # [rad/s] Motor take-off speed
#Motor torque:
if Mod==1:
Tmot_hover=Qpro_hover/Nred # [N.m] motor nominal torque with reduction
Tmot_to=Qpro_to/Nred # [N.m] motor take-off torque with reduction
Tmot_cl=Qpro_cl/Nred # [N.m] motor climbing torque with reduction
else:
Tmot_hover=Qpro_hover# [N.m] motor take-off torque
Tmot_to=Qpro_to # [N.m] motor take-off torque
Tmot_cl=Qpro_cl # [N.m] motor climbing torque
# Tmot [N.m]: required nominal motor torque (design variable)
Mmot=Mmot_ref*(Tmot/Tmot_ref)**(3/3.5) # [kg] Motor mass
Tmot_max=Tmot_max_ref*(Tmot/Tmot_ref)**(1) # [N.m] max torque
# Ktmot [N.m/A] or [V/(rad/s)]: motor torque constant (design variable; the R*I term is neglected)
Rmot=Rmot_ref*(Tmot/Tmot_ref)**(-5/3.5)*(Ktmot/Ktmot_ref)**(2) # [Ohm] motor resistance
Tfmot=Tfmot_ref*(Tmot/Tmot_ref)**(3/3.5) # [N.m] Friction torque
# Hover current and voltage
Imot_hover = (Tmot_hover+Tfmot)/Ktmot # [I] Current of the motor per propeller
Umot_hover = Rmot*Imot_hover + W_hover_motor*Ktmot # [V] Voltage of the motor per propeller
P_el_hover = Umot_hover*Imot_hover # [W] Hover : output electrical power
# Take-Off current and voltage
Imot_to = (Tmot_to+Tfmot)/Ktmot # [I] Current of the motor per propeller
Umot_to = Rmot*Imot_to + W_to_motor*Ktmot # [V] Voltage of the motor per propeller
P_el_to = Umot_to*Imot_to # [W] Takeoff : output electrical power
# Climbing current and voltage
Imot_cl = (Tmot_cl+Tfmot)/Ktmot # [I] Current of the motor per propeller for climbing
Umot_cl = Rmot*Imot_cl + W_cl_motor*Ktmot # [V] Voltage of the motor per propeller for climbing
P_el_cl = Umot_cl*Imot_cl # [W] Power : output electrical power for climbing
#Gear box model
if Mod==1:
mg1=0.0309*Nred**2+0.1944*Nred+0.6389 # Ratio input pinion to mating gear [-]
WF=1+1/mg1+mg1+mg1**2+Nred**2/mg1+Nred**2 # Weight Factor (ƩFd2/C) [-]
k_sd=1000 # Surface durability factor [lb/in]
C=2*8.85*Tmot_hover/k_sd # Coefficient (C=2T/K) [in3]
Fd2=WF*C # Solid rotor volume [in3]
Mgear=Fd2*0.3*0.4535 # Mass reducer [kg] (0.3 is a coefficient evaluated for aircraft application and 0.4535 to pass from lb to kg)
Fdp2=C*(Nred+1)/Nred # Solid rotor pinion volume [in3]
dp=(Fdp2/0.7)**(1/3)*0.0254 # Pinion diameter [m] (0.0254 to pass from in to m)
dg=Nred*dp # Gear diameter [m]
di=mg1*dp # Idler gear diameter [m]
# Battery selection & scaling laws sized from hover
# ---
# Battery
# Ref : MK-quadro
Mbat_ref=.329 # [kg] mass
#Ebat_ref=4*3.7*3.3*3600 # [J] energy
#Ebat_ref=220*3600*.329 # [J]
Cbat_ref= 3.400*3600#[A.s]
Vbat_ref=4*3.7#[V]
Imax_ref=170#[A]
# V_bat [V]: battery voltage (design variable)
# Hover --> autonomy
Mbat= Mbat_ref*C_bat/Cbat_ref*V_bat/Vbat_ref # Battery mass
Imax=Imax_ref*C_bat/Cbat_ref #[A] max current
I_bat = (P_el_hover*Npro)/.95/V_bat #[I] Current of the battery
t_hf = .8*C_bat/I_bat/60 # [min] Hover time
# ESC sized from max speed
# Ref : Turnigy K_Force 70HV
Pesc_ref=3108 # [W] Power
Vesc_ref=44.4 #[V]Voltage
Mesc_ref=.115 # [kg] Mass
# P_esc [W]: ESC power (design variable)
P_esc_cl=P_el_cl*V_bat/Umot_cl # [W] power electronic power max climb
P_esc_to=P_el_to*V_bat/Umot_to # [W] power electronic power max thrust
Mesc = Mesc_ref*(P_esc/Pesc_ref) # [kg] Mass ESC
Vesc = Vesc_ref*(P_esc/Pesc_ref)**(1/3)# [V] ESC voltage
# Frame sized from max thrust
# ---
Mfra_ref=.347 #[kg] MK7 frame
Marm_ref=0.14#[kg] Mass of all arms
# Length calculation
# sep= 2*pi/Narm #[rad] interior angle separation between propellers
Lbra=Dpro/2/(math.sin(pi/Narm)) #[m] length of the arm
# Static stress
# Sigma_max=200e6/4 # [Pa] Alu max stress (2 reduction for dynamic, 2 reduction for stress concentration)
Sigma_max=280e6/4 # [Pa] Composite max stress (2 reduction for dynamic, 2 reduction for stress concentration)
# Tube diameter & thickness
Dout=(F_pro_to*Lbra*32/(pi*Sigma_max*(1-D_ratio**4)))**(1/3) # [m] outer diameter of the beam
# D_ratio [-]: inner-to-outer diameter ratio of the beam (design variable); inner diameter is D_ratio*Dout
# Mass
Marm=pi/4*(Dout**2-(D_ratio*Dout)**2)*Lbra*1700*Narm # [kg] mass of the arms
Mfra=Mfra_ref*(Marm/Marm_ref)# [kg] mass of the frame
# Thrust Bearing reference
# Ref : SKF 31309/DF
Life=5000 # Life time [h]
k_bear=1
Cd_bear_ref=2700 # Dynamic reference Load [N]
C0_bear_ref=1500 # Static reference load[N]
Db_ref=0.032 # Exterior reference diameter [m]
Lb_ref=0.007 # Reference length [m]
db_ref=0.020 # Interior reference diameter [m]
Mbear_ref=0.018 # Reference mass [kg]
# Thrust bearing model
L10=(60*(Wpro_hover*60/2/3.14)*(Life/10**6)) # Nominal endurance [Hours of working]
Cd_ap=(2*F_pro_hover*L10**(1/3))/2 # Applied load on bearing [N]
Fmax=2*4*F_pro_to/2
C0_bear=k_bear*Fmax # Static load [N]
Cd_bear=Cd_bear_ref/C0_bear_ref**(1.85/2)*C0_bear**(1.85/2) # Dynamic Load [N]
Db=Db_ref/C0_bear_ref**0.5*C0_bear**0.5 # Bearing exterior Diameter [m]
db=db_ref/C0_bear_ref**0.5*C0_bear**0.5 # Bearing interior Diameter [m]
Lb=Lb_ref/C0_bear_ref**0.5*C0_bear**0.5 # Bearing lenght [m]
Mbear=Mbear_ref/C0_bear_ref**1.5*C0_bear**1.5 # Bearing mass [kg]
# Objective and Constraints sum up
# ---
if Mod==0:
Mtotal_final = (Mesc+Mpro+Mmot+Mbear)*Npro+M_load+Mbat+Mfra+Marm #total mass without reducer
else:
Mtotal_final = (Mesc+Mpro+Mmot+Mgear+Mbear)*Npro+M_load+Mbat+Mfra+Marm #total mass with reducer
if MaxTime==True:
constraints = [(Mtotal-Mtotal_final)/Mtotal_final,#0
(Tmot_max-Tmot_to)/Tmot_max,#1
(Tmot_max-Tmot_cl)/Tmot_max,#2
(Tmot-Tmot_hover)/Tmot,#3
(V_bat-Umot_to)/V_bat,#4
(V_bat-Umot_cl)/V_bat,#5
(V_bat-Vesc)/V_bat,#6
(V_bat*Imax-Umot_to*Imot_to*Npro/0.95)/(V_bat*Imax),#7
(V_bat*Imax-Umot_cl*Imot_cl*Npro/0.95)/(V_bat*Imax),#8
(P_esc-P_esc_to)/P_esc,#9
(P_esc-P_esc_cl)/P_esc,#10
0.01+(J*n_pro_cl*Dpro-V_cl), #11
(-J*n_pro_cl*Dpro+V_cl), #12
(NDmax-ND)/NDmax,#13
(NDmax-n_pro_cl*Dpro)/NDmax,#14
(MTOW-Mtotal_final)/Mtotal_final#15
]
else:
constraints = [(Mtotal-Mtotal_final)/Mtotal_final,#0
(Tmot_max-Tmot_to)/Tmot_max,#1
(Tmot_max-Tmot_cl)/Tmot_max,#2
(Tmot-Tmot_hover)/Tmot,#3
(V_bat-Umot_to)/V_bat,#4
(V_bat-Umot_cl)/V_bat,#5
(V_bat-Vesc)/V_bat,#6
(V_bat*Imax-Umot_to*Imot_to*Npro/0.95)/(V_bat*Imax),#7
(V_bat*Imax-Umot_cl*Imot_cl*Npro/0.95)/(V_bat*Imax),#8
(P_esc-P_esc_to)/P_esc,#9
(P_esc-P_esc_cl)/P_esc,#10
0.01+(J*n_pro_cl*Dpro-V_cl), #11
(-J*n_pro_cl*Dpro+V_cl), #12
(NDmax-ND)/NDmax,#13
(NDmax-n_pro_cl*Dpro)/NDmax,#14
(t_hf-t_h)/t_hf,#15
]
# Objective and contraints
if arg=='Obj':
P=0 # zero penalty
if MaxTime==False:
for C in constraints:
if (C<0.):
P=P-1e9*C
return Mtotal_final+P # for mass optimisation
else:
for C in constraints:
if (C<0.):
P=P-1e9*C
return 1/t_hf+P # for time optimisation
elif arg=='Prt':
col_names_opt = ['Type', 'Name', 'Min', 'Value', 'Max', 'Unit', 'Comment']
df_opt = pd.DataFrame()
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Mtotal', 'Min': bounds[0][0], 'Value': Mtotal, 'Max': bounds[0][1], 'Unit': '[kg]', 'Comment': 'Total mass '}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'ND', 'Min': bounds[1][0], 'Value': ND, 'Max': bounds[1][1], 'Unit': '[Hz.m]', 'Comment': 'Propeller speed limit'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Tmot', 'Min': bounds[2][0], 'Value': Tmot, 'Max': bounds[2][1], 'Unit': '[N.m]', 'Comment': 'Nominal torque'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Ktmot', 'Min': bounds[3][0], 'Value': Ktmot, 'Max': bounds[3][1], 'Unit': '[N.m/A]', 'Comment': 'Motor constant'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'P_esc', 'Min': bounds[4][0], 'Value': P_esc, 'Max': bounds[4][1], 'Unit': '[W]', 'Comment': 'ESC power'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'V_bat', 'Min': bounds[5][0], 'Value': V_bat, 'Max': bounds[5][1], 'Unit': '[V]', 'Comment': 'Battery voltage'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'C_bat', 'Min': bounds[6][0], 'Value': C_bat, 'Max': bounds[6][1], 'Unit': '[A.s]', 'Comment': 'Battery capacity'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'beta', 'Min': bounds[7][0], 'Value': beta, 'Max': bounds[7][1], 'Unit': '[-]', 'Comment': 'pitch/diameter ratio of the propeller'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'J', 'Min': bounds[8][0], 'Value': J, 'Max': bounds[8][1], 'Unit': '[-]', 'Comment': 'Advance ratio'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'D_ratio', 'Min': bounds[9][0], 'Value': D_ratio, 'Max': bounds[9][1], 'Unit': '[-]', 'Comment': 'aspect ratio inner/outer diameter arm'}])[col_names_opt]
if Mod==1:
df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'N_red', 'Min': bounds[10][0], 'Value': Nred, 'Max': bounds[10][1], 'Unit': '[-]', 'Comment': 'Reduction ratio'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 0', 'Min': 0, 'Value': constraints[0], 'Max': '-', 'Unit': '[-]', 'Comment': '(Mtotal-Mtotal_final)/Mtotal_final'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 1', 'Min': 0, 'Value': constraints[1], 'Max': '-', 'Unit': '[-]', 'Comment': '(Tmot_max-Tmot_to)/Tmot_max'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 2', 'Min': 0, 'Value': constraints[2], 'Max': '-', 'Unit': '[-]', 'Comment': '(Tmot_max-Tmot_cl)/Tmot_max'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 3', 'Min': 0, 'Value': constraints[3], 'Max': '-', 'Unit': '[-]', 'Comment': '(Tmot-Tmot_hover)/Tmot'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 4', 'Min': 0, 'Value': constraints[4], 'Max': '-', 'Unit': '[-]', 'Comment': '(V_bat-Umot_to)/V_bat'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 5', 'Min': 0, 'Value': constraints[5], 'Max': '-', 'Unit': '[-]', 'Comment': '(V_bat-Umot_cl)/V_bat'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 6', 'Min': 0, 'Value': constraints[6], 'Max': '-', 'Unit': '[-]', 'Comment': '(V_bat-Vesc)/V_bat'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 7', 'Min': 0, 'Value': constraints[7], 'Max': '-', 'Unit': '[-]', 'Comment': '(V_bat*Imax-Umot_to*Imot_to*Npro/0.95)/(V_bat*Imax)'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 8', 'Min': 0, 'Value': constraints[8], 'Max': '-', 'Unit': '[-]', 'Comment': '(V_bat*Imax-Umot_cl*Imot_cl*Npro/0.95)/(V_bat*Imax)'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 9', 'Min': 0, 'Value': constraints[9], 'Max': '-', 'Unit': '[-]', 'Comment': '(P_esc-P_esc_to)/P_esc'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 10', 'Min': 0, 'Value': constraints[10], 'Max': '-', 'Unit': '[-]', 'Comment': '(P_esc-P_esc_cl)/P_esc'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 11', 'Min': 0, 'Value': constraints[11], 'Max': '-', 'Unit': '[-]', 'Comment': '0.01+(+J*n_pro_cl*Dpro-V_cl)'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 12', 'Min': 0, 'Value': constraints[12], 'Max': '-', 'Unit': '[-]', 'Comment': '(-J*n_pro_cl*Dpro+V_cl)'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 13', 'Min': 0, 'Value': constraints[13], 'Max': '-', 'Unit': '[-]', 'Comment': '(NDmax-ND)/NDmax'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 14', 'Min': 0, 'Value': constraints[14], 'Max': '-', 'Unit': '[-]', 'Comment': '(NDmax-n_pro_cl*Dpro)/NDmax'}])[col_names_opt]
if MaxTime==False:
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 15', 'Min': 0, 'Value': constraints[15], 'Max': '-', 'Unit': '[-]', 'Comment': '(t_hf-t_h)/t_hf'}])[col_names_opt]
else:
df_opt = df_opt.append([{'Type': 'Constraints', 'Name': 'Const 15', 'Min': 0, 'Value': constraints[15], 'Max': '-', 'Unit': '[-]', 'Comment': '(MTOW-Mtotal_final)/Mtotal_final'}])[col_names_opt]
df_opt = df_opt.append([{'Type': 'Objective', 'Name': 'Objective', 'Min': 0, 'Value': Mtotal_final, 'Max': '-', 'Unit': '[kg]', 'Comment': 'Total mass'}])[col_names_opt]
col_names = ['Type', 'Name', 'Value', 'Unit', 'Comment']
df = pd.DataFrame()
df = df.append([{'Type': 'Propeller', 'Name': 'F_pro_to', 'Value': F_pro_to, 'Unit': '[N]', 'Comment': 'Thrust for 1 propeller during Take Off'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'F_pro_cl', 'Value': F_pro_cl, 'Unit': '[N]', 'Comment': 'Thrust for 1 propeller during Climb'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'F_pro_hov', 'Value': F_pro_hover, 'Unit': '[N]', 'Comment': 'Thrust for 1 propeller during Hover'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'rho_air', 'Value': rho_air, 'Unit': '[kg/m^3]', 'Comment': 'Air density'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'ND_max', 'Value': NDmax, 'Unit': '[Hz.m]', 'Comment': 'Max speed limit (N.D max)'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'Dpro_ref', 'Value': Dpro_ref, 'Unit': '[m]', 'Comment': 'Reference propeller diameter'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'M_pro_ref', 'Value': Mpro_ref, 'Unit': '[kg]', 'Comment': 'Reference propeller mass'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'C_t_sta', 'Value': C_t_sta, 'Unit': '[-]', 'Comment': 'Static thrust coefficient of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'C_t_dyn', 'Value': C_t_dyn, 'Unit': '[-]', 'Comment': 'Dynamic thrust coefficient of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'C_p_sta', 'Value': C_p_sta, 'Unit': '[-]', 'Comment': 'Static power coefficient of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'C_p_dyn', 'Value': C_p_dyn, 'Unit': '[-]', 'Comment': 'Dynamic power coefficient of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'D_pro', 'Value': Dpro, 'Unit': '[m]', 'Comment': 'Diameter of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'n_pro_cl', 'Value': n_pro_cl, 'Unit': '[Hz]', 'Comment': 'Rev speed of the propeller during climbing'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'n_pro_to', 'Value': n_pro_to, 'Unit': '[Hz]', 'Comment': 'Rev speed of the propeller during takeoff'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'n_pro_hov', 'Value': n_pro_hover, 'Unit': '[Hz]', 'Comment': 'Rev speed of the propeller during hover'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'P_pro_cl', 'Value': Ppro_cl, 'Unit': '[W]', 'Comment': 'Power on the mechanical shaft of the propeller during climbing'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'P_pro_to', 'Value': Ppro_to, 'Unit': '[W]', 'Comment': 'Power on the mechanical shaft of the propeller during takeoff'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'P_pro_hov', 'Value': Ppro_hover, 'Unit': '[W]', 'Comment': 'Power on the mechanical shaft of the propeller during hover'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'M_pro', 'Value': Mpro, 'Unit': '[kg]', 'Comment': 'Mass of the propeller'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'Omega_pro_cl', 'Value': Wpro_cl, 'Unit': '[rad/s]', 'Comment': 'Rev speed of the propeller during climbing'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'Omega_pro_to', 'Value': Wpro_to, 'Unit': '[rad/s]', 'Comment': 'Rev speed of the propeller during takeoff'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'Omega_pro_hov', 'Value': Wpro_hover, 'Unit': '[rad/s]', 'Comment': 'Rev speed of the propeller during hover'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'T_pro_hov', 'Value': Qpro_hover, 'Unit': '[N.m]', 'Comment': 'Torque on the mechanical shaft of the propeller during hover'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'T_pro_to', 'Value': Qpro_to, 'Unit': '[N.m]', 'Comment': 'Torque on the mechanical shaft of the propeller during takeoff'}])[col_names]
df = df.append([{'Type': 'Propeller', 'Name': 'T_pro_cl', 'Value': Qpro_cl, 'Unit': '[N.m]', 'Comment': 'Torque on the mechanical shaft of the propeller during climbing'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_max_mot_ref', 'Value': Tmot_max_ref, 'Unit': '[N.m]', 'Comment': 'Max torque'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'R_mot_ref', 'Value': Rmot_ref, 'Unit': '[Ohm]', 'Comment': 'Resistance'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'M_mot_ref', 'Value': Mmot_ref, 'Unit': '[kg]', 'Comment': 'Reference motor mass'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'K_mot_ref', 'Value': Ktmot_ref, 'Unit': '[N.m/A]', 'Comment': 'Torque coefficient'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_mot_fr_ref', 'Value': Tfmot_ref, 'Unit': '[N.m]', 'Comment': 'Friction torque (zero load, nominal speed)'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_nom_mot', 'Value': Tmot_hover, 'Unit': '[N.m]', 'Comment': 'Continuous torque of the selected motor'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_mot_to', 'Value': Tmot_to, 'Unit': '[N.m]', 'Comment': 'Transient torque possible for takeoff'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_max_mot', 'Value': Tmot_max, 'Unit': '[N.m]', 'Comment': 'Transient torque possible for climbing'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'R_mot', 'Value': Rmot, 'Unit': '[Ohm]', 'Comment': 'Resistance'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'M_mot', 'Value': Mmot, 'Unit': '[kg]', 'Comment': 'Motor mass'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'K_mot', 'Value': Ktmot, 'Unit': '[N.m/A]', 'Comment': 'Torque constant of the selected motor'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'T_mot_fr', 'Value': Tfmot, 'Unit': '[N.m]', 'Comment': 'Friction torque of the selected motor'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'I_mot_hov', 'Value': Imot_hover, 'Unit': '[A]', 'Comment': 'Motor current for hover'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'I_mot_to', 'Value': Imot_to, 'Unit': '[A]', 'Comment': 'Motor current for takeoff'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'I_mot_cl', 'Value': Imot_cl, 'Unit': '[A]', 'Comment': 'Motor current for climbing'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'U_mot_cl', 'Value': Umot_cl, 'Unit': '[V]', 'Comment': 'Motor voltage for climbing'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'U_mot_to', 'Value': Umot_to, 'Unit': '[V]', 'Comment': 'Motor voltage for takeoff'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'U_mot', 'Value': Umot_hover, 'Unit': '[V]', 'Comment': 'Nominal voltage '}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'P_el_mot_cl', 'Value': P_el_cl, 'Unit': '[W]', 'Comment': 'Motor electrical power for climbing'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'P_el_mot_to', 'Value': P_el_to, 'Unit': '[W]', 'Comment': 'Motor electrical power for takeoff'}])[col_names]
df = df.append([{'Type': 'Motor', 'Name': 'P_el_mot_hov', 'Value': P_el_hover, 'Unit': '[W]', 'Comment': 'Motor electrical power for hover'}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'M_bat_ref', 'Value': Mbat_ref, 'Unit': '[kg]', 'Comment': 'Mass of the reference battery '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'M_esc_ref', 'Value': Mesc_ref, 'Unit': '[kg]', 'Comment': 'Reference ESC mass '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'P_esc_ref', 'Value': Pesc_ref, 'Unit': '[W]', 'Comment': 'Reference ESC power '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'N_s_bat', 'Value': np.ceil(V_bat/3.7), 'Unit': '[-]', 'Comment': 'Number of battery cells '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'U_bat', 'Value': V_bat, 'Unit': '[V]', 'Comment': 'Battery voltage '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'M_bat', 'Value': Mbat, 'Unit': '[kg]', 'Comment': 'Battery mass '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'C_bat', 'Value': C_bat, 'Unit': '[A.s]', 'Comment': 'Battery capacity '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'I_bat', 'Value': I_bat, 'Unit': '[A]', 'Comment': 'Battery current '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 't_hf', 'Value': t_hf, 'Unit': '[min]', 'Comment': 'Hovering time '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'P_esc', 'Value': P_esc, 'Unit': '[W]', 'Comment': 'Power electronic power (corner power or apparent power) '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'M_esc', 'Value': Mesc, 'Unit': '[kg]', 'Comment': 'ESC mass '}])[col_names]
df = df.append([{'Type': 'Battery & ESC', 'Name': 'V_esc', 'Value': Vesc, 'Unit': '[V]', 'Comment': 'ESC voltage '}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'N_arm', 'Value': Narm, 'Unit': '[-]', 'Comment': 'Number of arms '}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'N_pro_arm', 'Value': Np_arm, 'Unit': '[-]', 'Comment': 'Number of propellers per arm '}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'sigma_max', 'Value': Sigma_max, 'Unit': '[Pa]', 'Comment': 'Max admissible stress'}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'L_arm', 'Value': Lbra, 'Unit': '[m]', 'Comment': 'Length of the arm'}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'D_out', 'Value': Dout, 'Unit': '[m]', 'Comment': 'Outer diameter of the arm (tube)'}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'Marm', 'Value': Marm, 'Unit': '[kg]', 'Comment': '1 Arm mass'}])[col_names]
df = df.append([{'Type': 'Frame', 'Name': 'M_frame', 'Value': Mfra, 'Unit': '[kg]', 'Comment': 'Frame mass'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'M_load', 'Value': M_load, 'Unit': '[kg]', 'Comment': 'Payload mass'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 't_hf', 'Value': t_h, 'Unit': '[min]', 'Comment': 'Hovering time '}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'k_maxthrust', 'Value': k_maxthrust, 'Unit': '[-]', 'Comment': 'Ratio max thrust'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'N_arm', 'Value': Narm, 'Unit': '[-]', 'Comment': 'Number of arms '}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'N_pro_arm', 'Value': Np_arm, 'Unit': '[-]', 'Comment': 'Number of propellers per arm '}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'V_cl', 'Value': V_cl, 'Unit': '[m/s]', 'Comment': 'Climb speed'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'CD', 'Value': CD, 'Unit': '[-]', 'Comment': 'Drag coefficient'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'A_top', 'Value': A_top, 'Unit': '[m^2]', 'Comment': 'Top surface'}])[col_names]
df = df.append([{'Type': 'Specifications', 'Name': 'MTOW', 'Value': MTOW, 'Unit': '[kg]', 'Comment': 'Max takeoff Weight'}])[col_names]
items = sorted(df['Type'].unique().tolist())+['Optimization']
return df, df_opt
else:
return constraints
```
### 4.-Optimization variables
```
bounds=[(0,800),#M_total
(0,105000/60*.0254),#ND
(0.01,1000),#Tmot
(0,10),#Ktmot
(0,20000),#P_esc
(0,550),#V_bat
(0,200*3600),#C_bat
(0.3,0.6),#beta
(0,0.5),#J
(0,0.99),#D_ratio
(1,15),#Nred
]
```
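A bounds list in this form is exactly what `scipy.optimize.differential_evolution` expects: one `(lower, upper)` pair per design variable. A minimal sketch on a toy objective (the quadratic below merely stands in for the sizing code):

```python
import scipy.optimize

# Toy objective standing in for SizingCode: minimum at x = (1, 2)
objective = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# Each (lower, upper) pair bounds one design variable, as in the list above
bounds = [(0, 10), (0, 10)]

result = scipy.optimize.differential_evolution(func=objective, bounds=bounds,
                                               tol=1e-10, seed=0)
```

The optimizer returns an `OptimizeResult` whose `x` attribute holds the best design vector found within the bounds.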
### 5.-Result
```
# Objective and constraint wrappers around the sizing code
contrainte=lambda x: SizingCode(x, 'Const')
objectif=lambda x: SizingCode(x, 'Obj')
# Differential evolution optimisation
start = time.time()
result = scipy.optimize.differential_evolution(func=objectif,
bounds=[(0,100),#M_total
(0,105000/60*.0254),#ND
(0.01,10),#Tmot
(0,1),#Ktmot
(0,1500),#P_esc
(0,150),#V_bat
(0,20*3600),#C_bat
(0.3,0.6),#beta
(0,0.5),#J
(0,0.99),#D_ratio
(1,15),#Nred
],maxiter=4000,
tol=1e-12)
# Final characteristics after optimization
end = time.time()
print("Operation time: %.5f s" %(end - start))
print("-----------------------------------------------")
print("Final characteristics after optimization :")
data=SizingCode(result.x, 'Prt')[0]
data_opt=SizingCode(result.x, 'Prt')[1]
pd.options.display.float_format = '{:,.3f}'.format
def view(x=''):
#if x=='All': return display(df)
if x=='Optimization' : return display(data_opt)
return display(data[data['Type']==x])
items = sorted(data['Type'].unique().tolist())+['Optimization']
w = widgets.Select(options=items)
interactive(view, x=w)
```
UTR strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "1955839661"
* `standard`: UTR strings with proper whitespace in the proper places. Note that in the case of UTR, the compact format is the same as the standard one.
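As a rough sketch of what the compact form means (a hypothetical helper, not dataprep's internal implementation), converting to it just strips separators and whitespace:

```python
def to_compact(utr: str) -> str:
    """Drop spaces and other separators, keeping only the digits (illustrative only)."""
    return "".join(ch for ch in utr if ch.isdigit())
```

For a plain 10-digit UTR such as "1955839661" the conversion is a no-op, which is why the compact and standard formats coincide here.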
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
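These three modes follow a common pattern in data-cleaning APIs; the semantics can be sketched generically like this (illustrative only, not dataprep's actual implementation):

```python
import math

def handle_invalid(value, errors="coerce"):
    """Illustrative sketch of the coerce/ignore/raise error-handling semantics."""
    if errors == "coerce":
        return math.nan          # invalid parsing becomes NaN
    if errors == "ignore":
        return value             # invalid parsing returns the input unchanged
    raise ValueError(f"Unable to parse: {value!r}")
```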
The following sections demonstrate the functionality of `clean_gb_utr()` and `validate_gb_utr()`.
### An example dataset containing UTR strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"utr": [
'1955839661',
'2955839661',
'BE 428759497',
'BE431150351',
"002 724 334",
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_gb_utr`
By default, `clean_gb_utr` will clean UTR strings and output them in the standard format.
```
from dataprep.clean import clean_gb_utr
clean_gb_utr(df, column = "utr")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_gb_utr(df, column = "utr", output_format="standard")
```
### `compact`
```
clean_gb_utr(df, column = "utr", output_format="compact")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned UTR strings is added with a title in the format `"{original title}_clean"`.
```
clean_gb_utr(df, column="utr", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_gb_utr(df, "utr", errors="coerce")
```
### `ignore`
```
clean_gb_utr(df, "utr", errors="ignore")
```
## 5. `validate_gb_utr()`
`validate_gb_utr()` returns `True` when the input is a valid UTR. Otherwise it returns `False`.
The input of `validate_gb_utr()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame.
When the input is a string or a Series, no column name needs to be specified.
When the input is a DataFrame, the column name is optional: if a column name is specified, `validate_gb_utr()` only returns the validation result for that column; otherwise it returns the validation result for the whole DataFrame.
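This input handling can be pictured roughly as follows (a simplified sketch with a hypothetical helper; the real function also understands pandas and Dask objects):

```python
def validate_elementwise(data, is_valid):
    """Validate a single string directly, or a sequence of strings elementwise."""
    if isinstance(data, str):
        return is_valid(data)
    return [is_valid(item) for item in data]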
```
from dataprep.clean import validate_gb_utr
print(validate_gb_utr("1955839661"))
print(validate_gb_utr("2955839661"))
print(validate_gb_utr('BE 428759497'))
print(validate_gb_utr('BE431150351'))
print(validate_gb_utr("004085616"))
print(validate_gb_utr("hello"))
print(validate_gb_utr(np.nan))
print(validate_gb_utr("NULL"))
```
### Series
```
validate_gb_utr(df["utr"])
```
### DataFrame + Specify Column
```
validate_gb_utr(df, column="utr")
```
### Only DataFrame
```
validate_gb_utr(df)
```
```
import pystan
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
```
# Baseball example
This model tries to estimate the batting average (also abbreviated AVG) in the MLB league in the 2012 season. We are trying to estimate both the player's performance ($\theta_{i}$) and the position performance ($\omega_{c}$).
$$
\begin{align*}
y_{i,s|c} &\sim Binomial(N_{s|c}, \theta_{s|c}) \quad \text{ where } \quad s \in [0, S] \quad \text{ and } \quad c \in [0, C]\\
\theta_{s|c} &\sim Beta(\omega_{c} * (\kappa_{c} - 2) + 1, (1 - \omega_{c})*(\kappa_{c} - 2) + 1) \\
\kappa_{c} &\sim Gamma(S_{\kappa}, R_{\kappa}) \\
\omega_{c} &\sim Beta(\omega * (\kappa - 2) + 1, (1 - \omega)*(\kappa - 2) + 1) \\
\omega &\sim Beta(A_{\omega}, B_{\omega}) \\
\kappa &\sim Gamma(S_{\kappa}, R_{\kappa}) \\
\end{align*}
$$
The key point here is that we are using a Binomial likelihood function and are using a Beta prior for each player's ability at batting. This Beta prior is however conditional to the player's position on the field, so the $\alpha$ and $\beta$ parameters for the prior are shared among players of the same role.
Another important fact is that we are using a re-parametrization of the Beta distribution. Instead of using the popular $\alpha$ and $\beta$ parameters, we are using:
$$
\begin{align*}
\alpha &= \omega * (\kappa - 2) + 1 \\
\beta &= (1 - \omega)*(\kappa - 2) + 1
\end{align*}
$$
Where $\omega$ represents the mode of the Beta distribution and $\kappa$ the concentration parameter. This reparametrization is useful because we are interested in estimating how similar the performances are for players of the same role. In this way we can easily assess whether the within-group variance differs across the nine groups of players.
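This correspondence is easy to verify numerically: for $\alpha, \beta > 1$ the mode of a $Beta(\alpha, \beta)$ distribution is $(\alpha - 1)/(\alpha + \beta - 2)$, which recovers $\omega$ exactly:

```python
omega, kappa = 0.25, 20.0           # mode and concentration of the Beta prior
alpha = omega * (kappa - 2) + 1     # alpha = omega * (kappa - 2) + 1
beta = (1 - omega) * (kappa - 2) + 1
mode = (alpha - 1) / (alpha + beta - 2)   # closed-form mode of Beta(alpha, beta)
```

Note also that $\alpha + \beta = \kappa$ under this parametrization, so larger $\kappa$ means a more concentrated prior.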
Lastly, we set priors for the parameters $\omega$ and $\kappa$. This is useful for two reasons. First, we don't have to hard-code them and can let the model estimate the most likely parameters based on the data and the model itself. Second, we can estimate the league-level values for $\omega$ and $\kappa$. This could be useful if we want to make inferences on new players, or want to compare MLB players' performance against players of a different league. The latter comparison should of course be taken cautiously, since it doesn't really consider the pitchers' ability. Another league could in fact have a much higher AVG simply because the quality of the pitchers playing in that league is poor.
For details about the model, please consult the book.
```
baseball = pd.read_csv('./data/BattingAverage.csv')
baseball = baseball.sort_values('PriPosNumber').reset_index(drop=True)
baseball.head()
```
Before jumping into the modelling part, we are going to check for presence of duplicates in the dataset and corrupted data.
```
# check if there are duplicates in the dataset
if baseball.shape[0] == baseball.Player.nunique():
print('Okay')
else:
print('Presence of duplicate players')
print(baseball.shape[0], baseball.Player.nunique())
# check if any player has more Hits recorded than AtBats
if any(baseball.AtBats.values - baseball.Hits.values < 0):
print('Presence of faulty measurements')
else:
print('Okay')
```
The following chunk will extract the necessary data to fit our model, and organise it in a digestible form for Stan.
```
at_bat = baseball.AtBats.values
hits = baseball.Hits.values
positions = baseball.PriPosNumber.values
n_positions = baseball.PriPosNumber.nunique()
counts = baseball.groupby('PriPosNumber').PlayerNumber.count()
start = [1]
for idx, count in enumerate(counts):
starting_point = start[idx] + count
start.extend([starting_point])
length = [start[idx+1] - start[idx] for idx in range(len(start)-1) ]
variables = [at_bat, hits, positions, n_positions, start, length]
for variable in variables:
print(variable)
baseball_data = {
'P': baseball.shape[0],
'positions': positions,
'start': start,
'length': length,
'N_positions': n_positions,
'N': at_bat,
'y': hits,
'S': 0.01,
'R': 0.01,
'A': 1,
'B': 1
}
import datetime
start_dt = datetime.datetime.now()
fit = pystan.stan(file='baseball.stan', data=baseball_data,
iter=5000, chains=4)
end_dt = datetime.datetime.now()
print("Time taken to run the model: {}".format(end_dt - start_dt))
print(fit)
fit.plot()
plt.show()
```
Let's dig a bit into the results, and in particular compare the group-level estimates to answer questions like: *"Are MLB pitchers worse than catchers at batting?"*
```
baseball.groupby(['PriPosNumber', 'PriPos']).Player.nunique()
```
Looking at the results we can notice that *pitchers* are pretty bad at batting. They are consistently poorer hitters than players in any other role. We can assess the probability of *pitchers* being worse on average than *catchers* (the second-worst category).
```
pitchers = fit.extract()['omega'][:,0]
catchers = fit.extract()['omega'][:,1]
diff = pitchers - catchers
mode = stats.mode([round(i, 3) for i in diff])
plt.subplot(2,2,1)
plt.hist(pitchers, bins=100, color='lightblue', edgecolor='white')
plt.title('Pitcher')
plt.xlabel(r'$\omega$'); plt.ylabel('Samples')
plt.subplot(2,2,2)
plt.hist(diff, bins=100, color='lightblue', edgecolor='white')
plt.xlabel(r"Difference of $\omega$'s"); plt.ylabel('Samples')
plt.title('Pitcher - Catcher (mode = {})'.format(mode[0][0]))
plt.subplot(2,2,3)
plt.scatter(pitchers, catchers, marker='o', color='lightblue', edgecolor='white', alpha=.75)
plt.xlabel('Pitcher')
plt.ylabel('Catcher')
plt.subplot(2,2,4)
plt.hist(catchers, bins=100, color='lightblue', edgecolor='white')
plt.title('Catcher')
plt.xlabel(r'$\omega$'); plt.ylabel('Samples')
plt.tight_layout()
plt.show()
p = np.mean(diff < 0)
print('The probability of catchers being better than pitchers at batting is {0:.0%}'.format(p))
```
It's probably more interesting to compare the best groups of players, since their posterior distributions look very close. In this case we are comparing Center Field against Right Field.
```
centerfield = fit.extract()['omega'][:,7]
rightfield = fit.extract()['omega'][:,8]
diff = centerfield - rightfield
mode = stats.mode([round(i, 3) for i in diff])
plt.subplot(2,2,1)
plt.hist(centerfield, bins=100, color='lightblue', edgecolor='white')
plt.title('Center Field')
plt.xlabel(r'$\omega$'); plt.ylabel('Samples')
plt.subplot(2,2,2)
plt.hist(diff, bins=100, color='lightblue', edgecolor='white')
plt.xlabel(r"Difference of $\omega$'s"); plt.ylabel('Samples')
plt.title('Center Field - Right Field (mode = {})'.format(mode[0][0]))
plt.subplot(2,2,3)
plt.scatter(centerfield, rightfield, marker='o', color='lightblue', edgecolor='white', alpha=.75)
plt.xlabel('Center Field')
plt.ylabel('Right Field')
plt.subplot(2,2,4)
plt.hist(rightfield, bins=100, color='lightblue', edgecolor='white')
plt.title('Right Field')
plt.xlabel(r'$\omega$'); plt.ylabel('Samples')
plt.tight_layout()
plt.show()
p = np.mean(diff < 0)
print('The probability of Right Field being better than Center Field at batting is {0:.0%}'.format(p))
```
```
import GetOldTweets3 as got
import lxml
import pyquery
import requests
import os
import sys
import time
import pandas as pd
import spacy
from datetime import datetime as dt
from dateutil.parser import parse
import calendar
from newsplease.config import CrawlerConfig
from newsplease.config import JsonConfig
from newsplease.helper import Helper
from newspaper import Article
import nltk
import warnings
warnings.filterwarnings('ignore')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# nltk.download('vader_lexicon')
sia = SentimentIntensityAnalyzer()
import matplotlib.pyplot as plt
#from mlxtend.plotting import plot_decision_regions
from cycler import cycler
from jupyterthemes import jtplot
jtplot.style(theme='monokai', context='notebook', ticks=True, grid=False)
import seaborn as sns
sns.set_style("whitegrid")
project_dir = str(os.path.dirname((os.path.abspath(''))))
sys.path.append(project_dir)
print(project_dir)
figures_folder = project_dir + '/Images/'
base_path = project_dir + '/data/'
print(base_path)
def import_files(path_name, name):
file = path_name+name
df = pd.read_csv(file, delimiter=',')
return df
def clean_table(df, cols):
df = df[cols]
df = df.dropna()
df = df.drop_duplicates().reset_index()
df = df.drop(columns='index')
for i in range(len(df)):
df.loc[i,'retrieved_on'] = dt.utcfromtimestamp(float(df.loc[i,'retrieved_on']))
df.loc[i,'date'] = dt.strptime(str(df.loc[i,'retrieved_on']), '%Y-%m-%d %H:%M:%S')
df = df.drop(columns='retrieved_on')
result_df = df.sort_values(['date'])
return result_df
def add_senti(df):
senti = []
for row in range(len(df)):
senti.append(sia.polarity_scores(df.loc[row,'selftext']))
senti_df = pd.DataFrame(senti)
result_df = pd.concat([df, senti_df], join='outer', axis=1)
return result_df
stocks = import_files(path_name=base_path, name='/raw_scraped/stocks.csv')
cols = ['author', 'selftext', 'retrieved_on', 'score']
stocks_df = clean_table(stocks, cols)
stocks_senti_df = add_senti(stocks_df)
stocks_senti_df.head()
crypto = import_files(path_name=base_path, name='/scraped/crypto.csv')
cols = ['author', 'selftext', 'retrieved_on', 'score']
crypto_df = clean_table(crypto, cols)
crypto_senti_df = add_senti(crypto_df)
crypto_senti_df.head()
gold = import_files(path_name=base_path, name='/scraped/gold.csv')
cols = ['author', 'selftext', 'retrieved_on', 'score']
gold_df = clean_table(gold, cols)
gold_senti_df = add_senti(gold_df)
gold_senti_df.head()
bit = import_files(path_name=base_path, name='/scraped/bitcoin.csv')
cols = ['author', 'selftext', 'retrieved_on', 'score']
bit_df = clean_table(bit, cols)
bit_senti_df = add_senti(bit_df)
bit_senti_df.head()
def date_filter(df_list, start_date, end_date):
start = dt.strptime(start_date, '%Y-%m-%d')
end = dt.strptime(end_date, '%Y-%m-%d')
for i in range(0, len(df_list)):
mask = (df_list[i]['date'] > start) & (df_list[i]['date'] <= end)
df_list[i] = df_list[i][mask]
df_list[i] = df_list[i].sort_values('date')
return df_list
def plot_senti(df_list, legend):
default_cycler = (cycler(color=['r', 'b', 'g', 'orange']) + cycler(marker=['o', '+', '*', 'v']))
plt.rc('lines', linewidth=4)
plt.rc('axes', prop_cycle=default_cycler)
plt.figure(figsize=(12,6))
for i in range(0, len(legend)):
x=df_list[i].loc[:,'date']
y=df_list[i].loc[:,'compound']
plt.plot(x, y, label=legend[i])
plt.legend(fontsize='16')
plt.ylabel('Sentiment', fontsize='16')
plt.xlabel('Date', fontsize='16')
plt.title('Sentiment Analysis for Reddit Financial Data')
plt.show()
scraped_list = [stocks_senti_df, crypto_senti_df, bit_senti_df, gold_senti_df]
start_date='2019-1-1'
end_date='2020-02-24'
df_list = date_filter(scraped_list, start_date, end_date)
df_list
legend_list = ['stocks', 'crypto', 'bitcoin', 'gold']
plot_senti(df_list, legend_list)
scraped_list = [stocks_senti_df, bit_senti_df]
start_date='2019-1-1'
end_date='2020-02-24'
df_list = date_filter(scraped_list, start_date, end_date)
df_list
legend_list = ['stocks', 'bitcoin']
plot_senti(df_list, legend_list)
scraped_list = [stocks_senti_df, crypto_senti_df]
start_date='2019-1-1'
end_date='2020-02-24'
df_list = date_filter(scraped_list, start_date, end_date)
df_list
legend_list = ['stocks', 'crypto']
plot_senti(df_list, legend_list)
```
# Bayesian Updating with Conjugate Priors
## Setup
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import scipy.stats as stats
from matplotlib.ticker import FuncFormatter
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
mpl.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']
np.random.seed(42)
sns.set_style('dark')
```
## Formatting Helper
```
def format_plot(axes, i, p, y, trials, success, true_p, tmle, tmap=None):
fmt = FuncFormatter(lambda x, _: f'{x:.0%}')
if i >= 6:
axes[i].set_xlabel("$p$, Success Probability")
axes[i].xaxis.set_major_formatter(fmt)
else:
axes[i].axes.get_xaxis().set_visible(False)
if i % 3 == 0:
axes[i].set_ylabel("Posterior Probability")
axes[i].set_yticks([], [])
axes[i].plot(p, y, lw=1, c='k')
axes[i].fill_between(p, y, color='darkblue', alpha=0.4)
axes[i].vlines(true_p, 0, max(10, np.max(y)), color='k', linestyle='--', lw=1)
axes[i].set_title(f'Trials: {trials:,d} - Success: {success:,d}')
if i > 0:
smle = r"$\theta_{{\mathrm{{MLE}}}}$ = {:.2%}".format(tmle)
axes[i].text(x=.02, y=.85, s=smle, transform=axes[i].transAxes)
smap = r"$\theta_{{\mathrm{{MAP}}}}$ = {:.2%}".format(tmap)
axes[i].text(x=.02, y=.75, s=smap, transform=axes[i].transAxes)
return axes[i]
```
## Simulate Coin Tosses & Updates of Posterior
```
n_trials = [0, 1, 3, 5, 10, 25, 50, 100, 500]
outcomes = stats.bernoulli.rvs(p=0.5, size=n_trials[-1])
p = np.linspace(0, 1, 100)
# uniform (uninformative) prior
a = b = 1
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True)
axes = axes.flatten()
fmt = FuncFormatter(lambda x, _: f'{x:.0%}')
for i, trials in enumerate(n_trials):
successes = outcomes[:trials]
theta_mle = np.mean(successes)
heads = sum(successes)
tails = trials - heads
update = stats.beta.pdf(p, a + heads , b + tails)
theta_map = pd.Series(update, index=p).idxmax()
axes[i] = format_plot(axes, i, p, update, trials=trials, success=heads,
true_p=.5, tmle=theta_mle, tmap=theta_map)
title = 'Bayesian Probabilities: Updating the Posterior'
fig.suptitle(title, y=1.02, fontsize=14)
fig.tight_layout()
```
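Because the Beta prior is conjugate to the binomial likelihood, the posterior after `h` heads in `n` tosses is simply Beta(a + h, b + n − h), so summaries like the posterior mean and the MAP estimate come in closed form with no gridding needed:

```python
def beta_binomial_posterior(a, b, heads, n):
    """Closed-form conjugate update: (posterior mean, MAP) of Beta(a + h, b + n - h)."""
    alpha, beta = a + heads, b + n - heads
    mean = alpha / (alpha + beta)
    mode = (alpha - 1) / (alpha + beta - 2)  # valid for alpha, beta > 1
    return mean, mode
```

With the uniform prior `a = b = 1` and 7 heads in 10 tosses, the MAP coincides with the maximum-likelihood estimate 7/10, while the posterior mean is pulled slightly toward 1/2.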
## Stock Price Moves
```
sp500_returns = pd.read_hdf('../data/assets.h5', key='sp500/prices').loc['2010':, 'close']
sp500_binary = (sp500_returns.pct_change().dropna() > 0).astype(int)
n_days = [0, 1, 3, 5, 10, 25, 50, 100, 500]
# random sample of trading days
# outcomes = sp500_binary.sample(n_days[-1])
# initial 500 trading days
outcomes = sp500_binary.iloc[:n_days[-1]]
p = np.linspace(0, 1, 100)
# uniform (uninformative) prior
a = b = 1
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(14, 7), sharex=True)
axes = axes.flatten()
for i, days in enumerate(n_days):
successes = outcomes.iloc[:days]
theta_mle = successes.mean()
up = successes.sum()
down = days - up
update = stats.beta.pdf(p, a + up , b + down)
theta_map = pd.Series(update, index=p).idxmax()
axes[i] = format_plot(axes, i, p, update, trials=days, success=up,
true_p=sp500_binary.mean(), tmle=theta_mle, tmap=theta_map)
title = 'Bayesian Probabilities: Updating the Posterior'
fig.suptitle(title, y=1.02, fontsize=14)
fig.tight_layout()
fig.savefig('figures/updating_sp500', dpi=300)
```
## Hello!
Welcome to Demo-4! The following notebook will show you what you can do with 1stDayKit's multifarious NLP tools. Specifically, we will be looking at the following 1stDayKit submodules (all based on huggingface's transformers repo):
* Text Generator
* Text Summarizer
* Text Sentiment Analysis
* Text Question-Answering
* Text Translation (certain language-pairs only)
**Warning!**: The following demo notebook will trigger automatic downloading of heavy pretrained model weights.
Have fun!
---------------------
### 0. Importing Packages & Dependencies
```
#Import libs
from src.core.text_gen import TextGen_Base as TGL
from src.core.qa import QuesAns
from src.core.summarize import Summarizer
from src.core.sentiment import SentimentAnalyzer
from src.core.translate import Translator_M
from src.core.utils import utils
from PIL import Image
from pprint import pprint
import os
import matplotlib.pyplot as plt
import numpy
```
### 1. Simple Look at 1stDayKit NLP-Models
#### 1. Looking at Text Generation
Feel free to play around with all 4 variants of Text-Generator that we have provided in 1stDayKit. They are as follow in ascending order of computational demand:
* TextGen_Lite
* TextGen_Base
* TextGen_Large
* TextGen_XL
**Warning!**: If your machine does not meet the minimum computation requirement while running some of the larger models, it may crash!
```
#Initialization
textgen = TGL(name="Text Generator",max_length=16,num_return_sequences=3)
#Infer & Visualize
output = textgen.predict("Let me say")
textgen.visualize(output)
```
**Note**: <br>
Want to find out more on what GPT-2 (the model underlying 1stDayKit's TextGen modules) can do? Check out this cool blogpost on *poetry* generation with GPT-2 with some sweet examples! <br>
* Link: https://www.gwern.net/GPT-2
#### 2. Looking at Question & Answering
```
#Initialization
QA = QuesAns()
#Setup questions and answer and infer
context = """ Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the `run_squad.py`."""
question = "What is extractive question answering?"
question_answer = {'question':question,'context':context}
#Infer and visualize
output = QA.predict(question_answer)
QA.visualize(output)
```
#### 3. Looking at Text Summarizer
```
#Initialize
SM = Summarizer()
#Setup text to summarize
main_text_to_summarize = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
#Infer
output = SM.predict(main_text_to_summarize)
SM.visualize(main_text_to_summarize,output)
```
**Note**:
* Please note that the summarizer is not perfect (as is with all ML models)! See that the model has wrongly concluded that Liana Barrientos got charged, when in fact the ruling on said charges was not available at the time of writing of the main text.
* However, this does not significantly diminish the fact that such a summarizer would still be useful (and indeed much more accurate with further training) in many real-world applications.
#### 4. Looking at Text Sentiment Analyzer
```
#Initialize
ST = SentimentAnalyzer()
#Setup texts. Let's try a bunch of them.
main_text_to_analyze = ["The food is not too hot, which makes it just right.",
"The weather is not looking too good today",
"The sky is looking a bit gloomy, time to catch a nap!",
"This tastes good :D",
"Superheroes are mere child fantasies"]
#Infer
output = ST.predict(main_text_to_analyze)
ST.visualize(main_text_to_analyze,output)
```
**Note**: Interesting! See that the language model still has gaps at times when it comes to tricky statements.
#### 5. Looking at Text Translator
We will be using the **MarianMT** series of pre-trained language models available on HuggingFace. More info and documentation can be found at https://huggingface.co/transformers/model_doc/marian.html.
```
#Initialize
Trans = Translator_M(task='Helsinki-NLP/opus-mt-en-ROMANCE')
#Setup texts
text_to_translate = ['>>fr<< this is a sentence in english that we want to translate to french',
'>>pt<< This should go to portuguese',
'>>es<< And this to Spanish']
#Infer
output = Trans.predict(text_to_translate)
output
Trans.visualize(text_to_translate,output)
#Setup texts longer text!
text_to_translate = ['>>fr<< Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.']
#Infer
output = Trans.predict(text_to_translate)
output
```
**Google Translate from French to English**<br>
Liana Barrientos, 39, is charged with two counts of 'offering a false instrument for first degree deposition' In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. One thinks she's still married to four men.
Not bad.
______________
### Thank You!
# Chapter 9. Learning from Kaggle Masters
*Use the links below to view this notebook in the Jupyter notebook viewer (nbviewer.org) or run it in Google Colab (colab.research.google.com).*
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.org/github/rickiepark/handson-gb/blob/main/Chapter09/Kaggle_Winners.ipynb"><img src="https://jupyter.org/assets/share.png" width="60" />View in Jupyter notebook viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/handson-gb/blob/main/Chapter09/Kaggle_Winners.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
```
# Check whether the notebook is running in Colab.
import sys
if 'google.colab' in sys.modules:
!pip install -q --upgrade xgboost
!pip install category_encoders
!wget -q https://raw.githubusercontent.com/rickiepark/handson-gb/main/Chapter09/cab_rides.csv
!wget -q https://raw.githubusercontent.com/rickiepark/handson-gb/main/Chapter09/weather.csv
# Turn off warnings
import warnings
warnings.filterwarnings('ignore')
import xgboost as xgb
xgb.set_config(verbosity=0)
```
## Feature Engineering
### Uber and Lyft Data
```
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier, XGBRFClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn.ensemble import VotingClassifier
# Silence warnings
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
df = pd.read_csv('cab_rides.csv', nrows=10000)
df.head()
```
#### Missing Values
```
df.info()
df[df.isna().any(axis=1)]
df.dropna(inplace=True)
```
#### Feature Engineering - Timestamp Data
```
df['date'] = pd.to_datetime(df['time_stamp'])
df.head()
pd.to_datetime(df['time_stamp'], unit='ms')
df['date'] = pd.to_datetime(df['time_stamp']*(10**6))
df.head()
import datetime as dt
df['month'] = df['date'].dt.month
df['hour'] = df['date'].dt.hour
df['dayofweek'] = df['date'].dt.dayofweek
def weekend(row):
if row['dayofweek'] in [5,6]:
return 1
else:
return 0
df['weekend'] = df.apply(weekend, axis=1)
def rush_hour(row):
if (row['hour'] in [6,7,8,9,15,16,17,18]) & (row['weekend'] == 0):
return 1
else:
return 0
df['rush_hour'] = df.apply(rush_hour, axis=1)
df.tail()
```
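The row-wise `apply` calls above can also be written with vectorized pandas operations, which are usually much faster on large frames. A minimal sketch of the same logic (the small `DataFrame` here is illustrative, not the book's data):

```python
import pandas as pd

df = pd.DataFrame({'dayofweek': [0, 5, 6, 2],
                   'hour': [7, 7, 12, 22]})

# weekend: Saturday (5) or Sunday (6)
df['weekend'] = df['dayofweek'].isin([5, 6]).astype(int)
# rush hour: commuting hours on a weekday
rush_hours = [6, 7, 8, 9, 15, 16, 17, 18]
df['rush_hour'] = (df['hour'].isin(rush_hours) & (df['weekend'] == 0)).astype(int)
```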
#### Feature Engineering - Categorical Data
##### Creating Frequency Features
```
df['cab_type'].value_counts()
df['cab_freq'] = df.groupby('cab_type')['cab_type'].transform('count')
df['cab_freq'] = df['cab_freq']/len(df)
df.tail()
```
#### Kaggle Tip - Mean Encoding
```
from category_encoders.target_encoder import TargetEncoder
encoder = TargetEncoder()
df['cab_type_mean'] = encoder.fit_transform(df['cab_type'], df['price'])
df.tail()
```
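Target (mean) encoding replaces each category with the mean of the target for that category; `TargetEncoder` additionally smooths toward the global mean. The core idea in plain pandas (an illustrative sketch with made-up values, not the encoder's exact output):

```python
import pandas as pd

df = pd.DataFrame({'cab_type': ['Uber', 'Lyft', 'Uber', 'Uber'],
                   'price': [10.0, 8.0, 14.0, 12.0]})
# mean target value per category, broadcast back onto every row
df['cab_type_mean'] = df.groupby('cab_type')['price'].transform('mean')
```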
## Building an Ensemble with Low Correlation
### Diverse Models
```
from sklearn.datasets import load_breast_cancer
X, y = load_breast_cancer(return_X_y=True)
kfold = StratifiedKFold(n_splits=5)
from sklearn.model_selection import cross_val_score
def classification_model(model):
# Perform 5-fold cross-validation.
scores = cross_val_score(model, X, y, cv=kfold)
# Return the mean score.
return scores.mean()
classification_model(XGBClassifier())
classification_model(XGBClassifier(booster='gblinear'))
classification_model(XGBClassifier(booster='dart', one_drop=True))
classification_model(RandomForestClassifier(random_state=2))
classification_model(LogisticRegression(max_iter=10000))
classification_model(XGBClassifier(n_estimators=500, max_depth=2,
learning_rate=0.1))
```
### Ensemble Correlation
```
def y_pred(model):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = accuracy_score(y_pred, y_test)
print(score)
return y_pred
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
y_pred_gbtree = y_pred(XGBClassifier())
y_pred_dart = y_pred(XGBClassifier(booster='dart', one_drop=True))
y_pred_forest = y_pred(RandomForestClassifier(random_state=2))
y_pred_logistic = y_pred(LogisticRegression(max_iter=10000))
y_pred_xgb = y_pred(XGBClassifier(max_depth=2, n_estimators=500, learning_rate=0.1))
df_pred = pd.DataFrame(data= np.c_[y_pred_gbtree, y_pred_dart,
y_pred_forest, y_pred_logistic, y_pred_xgb],
columns=['gbtree', 'dart', 'forest', 'logistic', 'xgb'])
df_pred.corr()
```
### VotingClassifier
```
estimators = []
logistic_model = LogisticRegression(max_iter=10000)
estimators.append(('logistic', logistic_model))
xgb_model = XGBClassifier(max_depth=2, n_estimators=500, learning_rate=0.1)
estimators.append(('xgb', xgb_model))
rf_model = RandomForestClassifier(random_state=2)
estimators.append(('rf', rf_model))
ensemble = VotingClassifier(estimators)
scores = cross_val_score(ensemble, X, y, cv=kfold)
print(scores.mean())
```
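By default `VotingClassifier` uses hard voting: each model casts one vote per sample and the majority class wins (pass `voting='soft'` to average predicted probabilities instead). The majority rule itself is simple; a pure-Python sketch for a single sample:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote over one sample's per-model predicted labels."""
    return Counter(predictions).most_common(1)[0][0]

hard_vote([1, 0, 1])  # two of three models predict class 1
```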
## Stacking
### StackingClassifier
```
base_models = []
base_models.append(('lr', LogisticRegression()))
base_models.append(('xgb', XGBClassifier()))
base_models.append(('rf', RandomForestClassifier(random_state=2)))
# Define the meta model.
meta_model = LogisticRegression()
# Create the stacking ensemble.
clf = StackingClassifier(estimators=base_models, final_estimator=meta_model)
scores = cross_val_score(clf, X, y, cv=kfold)
print(scores.mean())
```
```
%matplotlib inline
```
Translation with a Sequence to Sequence Network and Attention
*************************************************************
**Author**: `Sean Robertson <https://github.com/spro/practical-pytorch>`_
In this project we will be teaching a neural network to translate from
French to English.
::
[KEY: > input, = target, < output]
> il est en train de peindre un tableau .
= he is painting a picture .
< he is painting a picture .
> pourquoi ne pas essayer ce vin delicieux ?
= why not try that delicious wine ?
< why not try that delicious wine ?
> elle n est pas poete mais romanciere .
= she is not a poet but a novelist .
< she not not a poet but a novelist .
> vous etes trop maigre .
= you re too skinny .
< you re all alone .
... to varying degrees of success.
This is made possible by the simple but powerful idea of the `sequence
to sequence network <http://arxiv.org/abs/1409.3215>`__, in which two
recurrent neural networks work together to transform one sequence to
another. An encoder network condenses an input sequence into a vector,
and a decoder network unfolds that vector into a new sequence.
.. figure:: /_static/img/seq-seq-images/seq2seq.png
:alt:
To improve upon this model we'll use an `attention
mechanism <https://arxiv.org/abs/1409.0473>`__, which lets the decoder
learn to focus over a specific range of the input sequence.
**Recommended Reading:**
I assume you have at least installed PyTorch, know Python, and
understand Tensors:
- http://pytorch.org/ For installation instructions
- :doc:`/beginner/deep_learning_60min_blitz` to get started with PyTorch in general
- :doc:`/beginner/pytorch_with_examples` for a wide and deep overview
- :doc:`/beginner/former_torchies_tutorial` if you are a former Lua Torch user
It would also be useful to know about Sequence to Sequence networks and
how they work:
- `Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation <http://arxiv.org/abs/1406.1078>`__
- `Sequence to Sequence Learning with Neural
Networks <http://arxiv.org/abs/1409.3215>`__
- `Neural Machine Translation by Jointly Learning to Align and
Translate <https://arxiv.org/abs/1409.0473>`__
- `A Neural Conversational Model <http://arxiv.org/abs/1506.05869>`__
You will also find the previous tutorials on
:doc:`/intermediate/char_rnn_classification_tutorial`
and :doc:`/intermediate/char_rnn_generation_tutorial`
helpful as those concepts are very similar to the Encoder and Decoder
models, respectively.
And for more, read the papers that introduced these topics:
- `Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation <http://arxiv.org/abs/1406.1078>`__
- `Sequence to Sequence Learning with Neural
Networks <http://arxiv.org/abs/1409.3215>`__
- `Neural Machine Translation by Jointly Learning to Align and
Translate <https://arxiv.org/abs/1409.0473>`__
- `A Neural Conversational Model <http://arxiv.org/abs/1506.05869>`__
**Requirements**
```
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
use_cuda = torch.cuda.is_available()
```
Loading data files
==================
The data for this project is a set of many thousands of English to
French translation pairs.
`This question on Open Data Stack
Exchange <http://opendata.stackexchange.com/questions/3888/dataset-of-sentences-translated-into-many-languages>`__
pointed me to the open translation site http://tatoeba.org/ which has
downloads available at http://tatoeba.org/eng/downloads - and better
yet, someone did the extra work of splitting language pairs into
individual text files here: http://www.manythings.org/anki/
The English to French pairs are too big to include in the repo, so
download to ``data/eng-fra.txt`` before continuing. The file is a tab
separated list of translation pairs:
::
I am cold. Je suis froid.
.. Note::
Download the data from
`here <https://download.pytorch.org/tutorial/data.zip>`_
and extract it to the current directory.
Similar to the character encoding used in the character-level RNN
tutorials, we will be representing each word in a language as a one-hot
vector, or giant vector of zeros except for a single one (at the index
of the word). Compared to the dozens of characters that might exist in a
language, there are many many more words, so the encoding vector is much
larger. We will however cheat a bit and trim the data to only use a few
thousand words per language.
.. figure:: /_static/img/seq-seq-images/word-encoding.png
:alt:
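As a concrete illustration of the one-hot word encoding described above, here is a toy sketch with a hypothetical 5-word vocabulary:

```python
vocab = {'SOS': 0, 'EOS': 1, 'je': 2, 'suis': 3, 'froid': 4}

def one_hot(word, vocab):
    # a vector of zeros with a single one at the word's index
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec
```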
We'll need a unique index per word to use as the inputs and targets of
the networks later. To keep track of all this we will use a helper class
called ``Lang`` which has word → index (``word2index``) and index → word
(``index2word``) dictionaries, as well as a count of each word
``word2count`` to use to later replace rare words.
```
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
```
The files are all in Unicode; to simplify, we will turn Unicode
characters into ASCII, make everything lowercase, and trim most
punctuation.
```
# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
```
To read the data file we will split the file into lines, and then split
lines into pairs. The files are all English → Other Language, so if we
want to translate from Other Language → English I added the ``reverse``
flag to reverse the pairs.
```
def readLangs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\
read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
```
Since there are a *lot* of example sentences and we want to train
something quickly, we'll trim the data set to only relatively short and
simple sentences. Here the maximum length is 10 words (that includes
ending punctuation) and we're filtering to sentences that translate to
the form "I am" or "He is" etc. (accounting for apostrophes replaced
earlier).
```
MAX_LENGTH = 10
eng_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s",
"you are", "you re ",
"we are", "we re ",
"they are", "they re "
)
def filterPair(p):
return len(p[0].split(' ')) < MAX_LENGTH and \
len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(eng_prefixes)
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
```
The full process for preparing the data is:
- Read text file and split into lines, split lines into pairs
- Normalize text, filter by length and content
- Make word lists from sentences in pairs
```
def prepareData(lang1, lang2, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
```
The Seq2Seq Model
=================
A Recurrent Neural Network, or RNN, is a network that operates on a
sequence and uses its own output as input for subsequent steps.
A `Sequence to Sequence network <http://arxiv.org/abs/1409.3215>`__, or
seq2seq network, or `Encoder Decoder
network <https://arxiv.org/pdf/1406.1078v3.pdf>`__, is a model
consisting of two RNNs called the encoder and decoder. The encoder reads
an input sequence and outputs a single vector, and the decoder reads
that vector to produce an output sequence.
.. figure:: /_static/img/seq-seq-images/seq2seq.png
:alt:
Unlike sequence prediction with a single RNN, where every input
corresponds to an output, the seq2seq model frees us from sequence
length and order, which makes it ideal for translation between two
languages.
Consider the sentence "Je ne suis pas le chat noir" → "I am not the
black cat". Most of the words in the input sentence have a direct
translation in the output sentence, but are in slightly different
orders, e.g. "chat noir" and "black cat". Because of the "ne/pas"
construction there is also one more word in the input sentence. It would
be difficult to produce a correct translation directly from the sequence
of input words.
With a seq2seq model the encoder creates a single vector which, in the
ideal case, encodes the "meaning" of the input sequence into a single
vector — a single point in some N dimensional space of sentences.
The Encoder
-----------
The encoder of a seq2seq network is an RNN that outputs some value for
every word from the input sentence. For every input word the encoder
outputs a vector and a hidden state, and uses the hidden state for the
next input word.
.. figure:: /_static/img/seq-seq-images/encoder-network.png
:alt:
```
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
for i in range(self.n_layers):
output, hidden = self.gru(output, hidden)
return output, hidden
def initHidden(self):
result = Variable(torch.zeros(1, 1, self.hidden_size))
if use_cuda:
return result.cuda()
else:
return result
```
The Decoder
-----------
The decoder is another RNN that takes the encoder output vector(s) and
outputs a sequence of words to create the translation.
Simple Decoder
^^^^^^^^^^^^^^
In the simplest seq2seq decoder we use only the last output of the encoder.
This last output is sometimes called the *context vector* as it encodes
context from the entire sequence. This context vector is used as the
initial hidden state of the decoder.
At every step of decoding, the decoder is given an input token and
hidden state. The initial input token is the start-of-string ``<SOS>``
token, and the first hidden state is the context vector (the encoder's
last hidden state).
.. figure:: /_static/img/seq-seq-images/decoder-network.png
:alt:
```
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1):
super(DecoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax()
def forward(self, input, hidden):
output = self.embedding(input).view(1, 1, -1)
for i in range(self.n_layers):
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.softmax(self.out(output[0]))
return output, hidden
def initHidden(self):
result = Variable(torch.zeros(1, 1, self.hidden_size))
if use_cuda:
return result.cuda()
else:
return result
```
I encourage you to train and observe the results of this model, but to
save space we'll be going straight for the gold and introducing the
Attention Mechanism.
Attention Decoder
^^^^^^^^^^^^^^^^^
If only the context vector is passed between the encoder and decoder,
that single vector carries the burden of encoding the entire sentence.
Attention allows the decoder network to "focus" on a different part of
the encoder's outputs for every step of the decoder's own outputs. First
we calculate a set of *attention weights*. These will be multiplied by
the encoder output vectors to create a weighted combination. The result
(called ``attn_applied`` in the code) should contain information about
that specific part of the input sequence, and thus help the decoder
choose the right output words.
.. figure:: https://i.imgur.com/1152PYf.png
:alt:
Calculating the attention weights is done with another feed-forward
layer ``attn``, using the decoder's input and hidden state as inputs.
Because there are sentences of all sizes in the training data, to
actually create and train this layer we have to choose a maximum
sentence length (input length, for encoder outputs) that it can apply
to. Sentences of the maximum length will use all the attention weights,
while shorter sentences will only use the first few.
.. figure:: /_static/img/seq-seq-images/attention-decoder-network.png
:alt:
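The weighted combination described above is just a softmax over per-position scores followed by a weighted sum of the encoder outputs. In numpy terms (an illustrative sketch, not the model code; the scores here are made up):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, 0.5, 0.1])             # one score per encoder position
attn_weights = softmax(scores)                 # attention weights, sum to 1
encoder_outputs = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [1.0, 1.0]])
attn_applied = attn_weights @ encoder_outputs  # the weighted combination
```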
```
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_output, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(
self.attn(torch.cat((embedded[0], hidden[0]), 1)))
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
for i in range(self.n_layers):
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]))
return output, hidden, attn_weights
def initHidden(self):
result = Variable(torch.zeros(1, 1, self.hidden_size))
if use_cuda:
return result.cuda()
else:
return result
```
<div class="alert alert-info"><h4>Note</h4><p>There are other forms of attention that work around the length
limitation by using a relative position approach. Read about "local
attention" in `Effective Approaches to Attention-based Neural Machine
Translation <https://arxiv.org/abs/1508.04025>`__.</p></div>
Training
========
Preparing Training Data
-----------------------
To train, for each pair we will need an input tensor (indexes of the
words in the input sentence) and target tensor (indexes of the words in
the target sentence). While creating these vectors we will append the
EOS token to both sequences.
```
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]
def variableFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
indexes.append(EOS_token)
result = Variable(torch.LongTensor(indexes).view(-1, 1))
if use_cuda:
return result.cuda()
else:
return result
def variablesFromPair(pair):
input_variable = variableFromSentence(input_lang, pair[0])
target_variable = variableFromSentence(output_lang, pair[1])
return (input_variable, target_variable)
```
Training the Model
------------------
To train we run the input sentence through the encoder, and keep track
of every output and the latest hidden state. Then the decoder is given
the ``<SOS>`` token as its first input, and the last hidden state of the
encoder as its first hidden state.
"Teacher forcing" is the concept of using the real target outputs as
each next input, instead of using the decoder's guess as the next input.
Using teacher forcing causes it to converge faster but `when the trained
network is exploited, it may exhibit
instability <http://minds.jacobs-university.de/sites/default/files/uploads/papers/ESNTutorialRev.pdf>`__.
You can observe outputs of teacher-forced networks that read with
coherent grammar but wander far from the correct translation -
intuitively it has learned to represent the output grammar and can "pick
up" the meaning once the teacher tells it the first few words, but it
has not properly learned how to create the sentence from the translation
in the first place.
Because of the freedom PyTorch's autograd gives us, we can randomly
choose to use teacher forcing or not with a simple if statement. Turn
``teacher_forcing_ratio`` up to use more of it.
```
teacher_forcing_ratio = 0.5
def train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_variable.size()[0]
target_length = target_variable.size()[0]
encoder_outputs = Variable(torch.zeros(max_length, encoder.hidden_size))
encoder_outputs = encoder_outputs.cuda() if use_cuda else encoder_outputs
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(
input_variable[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0][0]
decoder_input = Variable(torch.LongTensor([[SOS_token]]))
decoder_input = decoder_input.cuda() if use_cuda else decoder_input
decoder_hidden = encoder_hidden
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
if use_teacher_forcing:
# Teacher forcing: Feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
loss += criterion(decoder_output, target_variable[di])
decoder_input = target_variable[di] # Teacher forcing
else:
# Without teacher forcing: use its own predictions as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
decoder_input = Variable(torch.LongTensor([[ni]]))
decoder_input = decoder_input.cuda() if use_cuda else decoder_input
loss += criterion(decoder_output, target_variable[di])
if ni == EOS_token:
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
```
This is a helper function to print time elapsed and estimated time
remaining given the current time and progress %.
```
import time
import math
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
```
The whole training process looks like this:
- Start a timer
- Initialize optimizers and criterion
- Create set of training pairs
- Start empty losses array for plotting
Then we call ``train`` many times and occasionally print the progress (%
of examples, time so far, estimated time) and average loss.
```
def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01):
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
training_pairs = [variablesFromPair(random.choice(pairs))
for i in range(n_iters)]
criterion = nn.NLLLoss()
for iter in range(1, n_iters + 1):
training_pair = training_pairs[iter - 1]
input_variable = training_pair[0]
target_variable = training_pair[1]
loss = train(input_variable, target_variable, encoder,
decoder, encoder_optimizer, decoder_optimizer, criterion)
print_loss_total += loss
plot_loss_total += loss
if iter % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters),
iter, iter / n_iters * 100, print_loss_avg))
if iter % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
showPlot(plot_losses)
```
Plotting results
----------------
Plotting is done with matplotlib, using the array of loss values
``plot_losses`` saved while training.
```
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
def showPlot(points):
plt.figure()
fig, ax = plt.subplots()
# this locator puts ticks at regular intervals
loc = ticker.MultipleLocator(base=0.2)
ax.yaxis.set_major_locator(loc)
plt.plot(points)
```
Evaluation
==========
Evaluation is mostly the same as training, but there are no targets so
we simply feed the decoder's predictions back to itself for each step.
Every time it predicts a word we add it to the output string, and if it
predicts the EOS token we stop there. We also store the decoder's
attention outputs for display later.
```
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
input_variable = variableFromSentence(input_lang, sentence)
input_length = input_variable.size()[0]
encoder_hidden = encoder.initHidden()
encoder_outputs = Variable(torch.zeros(max_length, encoder.hidden_size))
encoder_outputs = encoder_outputs.cuda() if use_cuda else encoder_outputs
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_variable[ei],
encoder_hidden)
encoder_outputs[ei] = encoder_outputs[ei] + encoder_output[0][0]
decoder_input = Variable(torch.LongTensor([[SOS_token]])) # SOS
decoder_input = decoder_input.cuda() if use_cuda else decoder_input
decoder_hidden = encoder_hidden
decoded_words = []
decoder_attentions = torch.zeros(max_length, max_length)
for di in range(max_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
decoder_attentions[di] = decoder_attention.data
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[ni])
decoder_input = Variable(torch.LongTensor([[ni]]))
decoder_input = decoder_input.cuda() if use_cuda else decoder_input
return decoded_words, decoder_attentions[:di + 1]
```
We can evaluate random sentences from the training set and print out the
input, target, and output to make some subjective quality judgements:
```
def evaluateRandomly(encoder, decoder, n=10):
for i in range(n):
pair = random.choice(pairs)
print('>', pair[0])
print('=', pair[1])
output_words, attentions = evaluate(encoder, decoder, pair[0])
output_sentence = ' '.join(output_words)
print('<', output_sentence)
print('')
```
Training and Evaluating
=======================
With all these helper functions in place (it looks like extra work, but
it makes running multiple experiments easier) we can actually
initialize a network and start training.
Remember that the input sentences were heavily filtered. For this small
dataset we can use relatively small networks of 256 hidden nodes and a
single GRU layer. After about 40 minutes on a MacBook CPU we'll get some
reasonable results.
.. Note::
If you run this notebook you can train, interrupt the kernel,
evaluate, and continue training later. Comment out the lines where the
encoder and decoder are initialized and run ``trainIters`` again.
```
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size)
attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words,
1, dropout_p=0.1)
if use_cuda:
encoder1 = encoder1.cuda()
attn_decoder1 = attn_decoder1.cuda()
trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
evaluateRandomly(encoder1, attn_decoder1)
```
Visualizing Attention
---------------------
A useful property of the attention mechanism is its highly interpretable
outputs. Because it is used to weight specific encoder outputs of the
input sequence, we can imagine looking where the network is focused most
at each time step.
You could simply run ``plt.matshow(attentions)`` to see attention output
displayed as a matrix, with the columns being input steps and rows being
output steps:
```
output_words, attentions = evaluate(
encoder1, attn_decoder1, "je suis trop froid .")
plt.matshow(attentions.numpy())
```
For a better viewing experience we will do the extra work of adding axes
and labels:
```
def showAttention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') +
['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def evaluateAndShowAttention(input_sentence):
output_words, attentions = evaluate(
encoder1, attn_decoder1, input_sentence)
print('input =', input_sentence)
print('output =', ' '.join(output_words))
showAttention(input_sentence, output_words, attentions)
evaluateAndShowAttention("elle a cinq ans de moins que moi .")
evaluateAndShowAttention("elle est trop petit .")
evaluateAndShowAttention("je ne crains pas de mourir .")
evaluateAndShowAttention("c est un jeune directeur plein de talent .")
```
Exercises
=========
- Try with a different dataset
- Another language pair
- Human → Machine (e.g. IOT commands)
- Chat → Response
- Question → Answer
- Replace the embeddings with pre-trained word embeddings such as word2vec or
GloVe
- Try with more layers, more hidden units, and more sentences. Compare
the training time and results.
- If you use a translation file where pairs have two of the same phrase
(``I am test \t I am test``), you can use this as an autoencoder. Try
this:
- Train as an autoencoder
- Save only the Encoder network
- Train a new Decoder for translation from there
# Inspecting ModelSelectorResult
When we go down from multiple time series to a single time series, the best way to access all the relevant information is through `ModelSelectorResult` objects.
```
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [12, 6]
from hcrystalball.model_selection import ModelSelector
from hcrystalball.utils import get_sales_data
from hcrystalball.wrappers import get_sklearn_wrapper
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
df = get_sales_data(n_dates=365*2,
n_assortments=1,
n_states=1,
n_stores=2)
df.head()
# let's start simple
df_minimal = df[['Sales']]
ms_minimal = ModelSelector(frequency='D', horizon=10)
ms_minimal.create_gridsearch(
n_splits=2,
between_split_lag=None,
sklearn_models=False,
sklearn_models_optimize_for_horizon=False,
autosarimax_models=False,
prophet_models=False,
tbats_models=False,
exp_smooth_models=False,
average_ensembles=False,
stacking_ensembles=False)
ms_minimal.add_model_to_gridsearch(get_sklearn_wrapper(LinearRegression))
ms_minimal.add_model_to_gridsearch(get_sklearn_wrapper(RandomForestRegressor, random_state=42))
ms_minimal.select_model(df=df_minimal, target_col_name='Sales')
```
## Ways to access ModelSelectorResult
There are several ways to get down to the single time-series result level.
- The first is via `.results[i]`, which is fast but does not guarantee that results are loaded in the order they were created (each result's file name contains a hash, and the files are later read back in alphabetical order)
- The second and third use `.get_result_for_partition()` with a `dict`-based partition
- The fourth does the same using the `partition_hash` (also part of the result's file name if persisted)
```
result = ms_minimal.results[0]
result = ms_minimal.get_result_for_partition({'no_partition_label': ''})
result = ms_minimal.get_result_for_partition(ms_minimal.partitions[0])
result = ms_minimal.get_result_for_partition('fb452abd91f5c3bcb8afa4162c6452c2')
```
## ModelSelectorResult is rich
As you can see below, we try to store all the relevant information, enabling easy access to data that would otherwise be very lengthy to recompute.
```
result
```
### Training data
```
result.X_train
result.y_train
```
### Data behind plots
Ready to be plotted or adjusted to your needs
```
result.df_plot
result.df_plot.tail(50).plot();
result
```
## Best Model Metadata
This can help to filter, for example, `cv_data`, or to get a glimpse of the parameters the best model has
```
result.best_model_hash
result.best_model_name
result.best_model_repr
```
### CV Results
Get information about how our model behaved in cross validation
```
result.best_model_cv_results['mean_fit_time']
```
Or how all the models behaved
```
result.cv_results.sort_values('rank_test_score').head()
```
### CV Data
Access predictions made during cross validation with possible cv splits and true target values
```
result.cv_data.head()
result.cv_data.drop(['split'], axis=1).plot();
result.best_model_cv_data.head()
result.best_model_cv_data.plot();
```
## Plotting Functions
Both accept `**plot_params` that you can pass depending on your plotting backend
```
result.plot_result(plot_from='2015-06', title='Performance', color=['blue','green']);
result.plot_error(title='Error');
```
## Convenient Persist Method
```
result.persist?
```
# Python API to EasyForm
```
from beakerx import *
f = EasyForm("Form and Run")
f.addTextField("first")
f['first'] = "First"
f.addTextField("last")
f['last'] = "Last"
f.addButton("Go!", tag="run")
f
```
You can access the values from the form by treating it as an array indexed on the field names:
```
"Good morning " + f["first"] + " " + f["last"]
f['last'][::-1] + '...' + f['first']
```
The array works both ways, so you can set default values on the fields by writing to the array:
```
f['first'] = 'Beaker'
f['last'] = 'Berzelius'
```
## Event Handlers for Smarter Forms
You can use `onInit` and `onChange` to handle component events. For button events use `actionPerformed` or `addAction`.
```
import operator
f1 = EasyForm("OnInit and OnChange")
f1.addTextField("first", width=15)
f1.addTextField("last", width=15)\
.onInit(lambda: operator.setitem(f1, 'last', "setinit1"))\
.onChange(lambda text: operator.setitem(f1, 'first', text + ' extra'))
button = f1.addButton("action", tag="action_button")
button.actionPerformed = lambda: operator.setitem(f1, 'last', 'action done')
f1
f1['last'] + ", " + f1['first']
f1['last'] = 'new Value'
f1['first'] = 'new Value2'
```
## All Kinds of Fields
```
g = EasyForm("Field Types")
g.addTextField("Short Text Field", width=10)
g.addTextField("Text Field")
g.addPasswordField("Password Field", width=10)
g.addTextArea("Text Area")
g.addTextArea("Tall Text Area", 10, 5)
g.addCheckBox("Check Box")
options = ["a", "b", "c", "d"]
g.addComboBox("Combo Box", options)
g.addComboBox("Combo Box editable", options, editable=True)
g.addList("List", options)
g.addList("List Single", options, multi=False)
g.addList("List Two Row", options, rows=2)
g.addCheckBoxes("Check Boxes", options)
g.addCheckBoxes("Check Boxes H", options, orientation=EasyForm.HORIZONTAL)
g.addRadioButtons("Radio Buttons", options)
g.addRadioButtons("Radio Buttons H", options, orientation=EasyForm.HORIZONTAL)
g.addDatePicker("Date")
g.addButton("Go!", tag="run2")
g
result = dict()
for child in g:
result[child] = g[child]
TableDisplay(result)
```
### Dates
```
gdp = EasyForm("Field Types")
gdp.addDatePicker("Date")
gdp
gdp['Date']
```
### SetData
```
from datetime import datetime
easyForm = EasyForm("Field Types")
easyForm.addDatePicker("Date", value=datetime.today().strftime('%Y%m%d'))
easyForm
```
### Default Values and placeholder
```
h = EasyForm("Default Values")
h.addTextArea("Default Value", value = "Initial value")
h.addTextArea("Place Holder", placeholder = "Put here some text")
h.addCheckBox("Default Checked", value = True)
h.addButton("Press", tag="check")
h
result = dict()
for child in h:
result[child] = h[child]
TableDisplay(result)
```
## JupyterJSWidgets work with EasyForm
The widgets from JupyterJSWidgets are compatible and can appear in forms.
```
from ipywidgets import *
w = IntSlider()
widgetForm = EasyForm("python widgets")
widgetForm.addWidget("IntSlider", w)
widgetForm.addButton("Press", tag="widget_test")
widgetForm
widgetForm['IntSlider']
```
# ResNet CIFAR-10 with tensorboard
This notebook shows how to use TensorBoard, and how the training job writes checkpoints to an external bucket.
The model used for this notebook is a ResNet model, trained with the CIFAR-10 dataset.
See the following papers for more background:
[Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Dec 2015.
[Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Jul 2016.
### Set up the environment
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
### Download the CIFAR-10 dataset
Downloading the test and training data will take around 5 minutes.
```
import utils
utils.cifar10_download()
```
### Upload the data to a S3 bucket
```
inputs = sagemaker_session.upload_data(path='/tmp/cifar10_data', key_prefix='data/DEMO-cifar10')
```
**sagemaker_session.upload_data** will upload the CIFAR-10 dataset from your machine to a bucket named **sagemaker-{region}-{*your aws account number*}**, if you don't have this bucket yet, sagemaker_session will create it for you.
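For illustration, the default bucket name can be assembled like this (a sketch with placeholder values — the real region and account number come from your AWS session, not the literals below):

```python
# Sketch of how the default SageMaker bucket name is derived.
# Both values below are placeholders (assumptions), e.g. the region would
# come from sagemaker_session.boto_region_name in a real session.
region = "us-east-1"
account_id = "123456789012"

default_bucket = f"sagemaker-{region}-{account_id}"
print(default_bucket)  # sagemaker-us-east-1-123456789012
```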
### Complete source code
- [source_dir/resnet_model.py](source_dir/resnet_model.py): ResNet model
- [source_dir/resnet_cifar_10.py](source_dir/resnet_cifar_10.py): main script used for training and hosting
## Create a training job using the sagemaker.TensorFlow estimator
```
from sagemaker.tensorflow import TensorFlow
source_dir = os.path.join(os.getcwd(), 'source_dir')
estimator = TensorFlow(entry_point='resnet_cifar_10.py',
source_dir=source_dir,
role=role,
hyperparameters={'throttle_secs': 30},
training_steps=1000, evaluation_steps=100,
train_instance_count=2, train_instance_type='ml.c4.xlarge',
base_job_name='tensorboard-example')
estimator.fit(inputs, run_tensorboard_locally=True)
```
The **```fit```** method will create a training job named **```tensorboard-example-{unique identifier}```** on two **ml.c4.xlarge** instances. These instances will write checkpoints to the S3 bucket **```sagemaker-{region}-{your aws account number}```**.
If you don't have this bucket yet, **```sagemaker_session```** will create it for you. These checkpoints can be used for restoring the training job, and to analyze training job metrics using **TensorBoard**.
The parameter **```run_tensorboard_locally=True```** will run **TensorBoard** on the machine that this notebook is running on. Every time a new checkpoint is created by the training job in the S3 bucket, **```fit```** will download the checkpoint to the temp folder that **TensorBoard** is pointing to.
When the **```fit```** method starts the training, it will log the port that **TensorBoard** is using to display the metrics. The default port is **6006**, but another port can be chosen depending on availability; the port number is incremented until an available port is found, and the chosen port is then printed to stdout.
It takes a few minutes to provision containers and start the training job. **TensorBoard** will start to display metrics shortly after that.
You can access **TensorBoard** locally at [http://localhost:6006](http://localhost:6006) or through your SageMaker notebook instance at [proxy/6006/](/proxy/6006/) (TensorBoard will not work if you forget the trailing slash, '/', at the end of the URL). If TensorBoard started on a different port, adjust these URLs to match. This example uses the optional hyperparameter **```throttle_secs```** to generate training evaluations more often, allowing you to visualize **TensorBoard** scalar data faster. You can find the available optional hyperparameters [here](https://github.com/aws/sagemaker-python-sdk#optional-hyperparameters).
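The port-probing behaviour described above can be sketched as follows (an illustrative helper only, not SageMaker's actual implementation):

```python
import socket

def find_free_port(start=6006, max_tries=20):
    """Probe ports upward from `start` until one can be bound.

    Mirrors the behaviour described above: try the default port first,
    then increment until an available port is found.
    """
    for port in range(start, start + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))  # succeeds only if the port is free
                return port
            except OSError:
                continue  # port busy, try the next one
    raise RuntimeError("no free port found")

print(find_free_port())
```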
# Deploy the trained model to prepare for predictions
The deploy() method creates an endpoint which serves prediction requests in real-time.
```
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
# Make a prediction with fake data to verify the endpoint is up
Prediction is not the focus of this notebook, so to verify the endpoint's functionality, we'll simply generate random data in the correct shape and make a prediction.
```
import numpy as np
random_image_data = np.random.rand(32, 32, 3)
predictor.predict(random_image_data)
```
# Cleaning up
To avoid incurring charges to your AWS account for the resources used in this tutorial, delete the **SageMaker Endpoint:**
```
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
<a href="https://colab.research.google.com/github/Sterls/colabs/blob/master/knn_demo_colab_0_8.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# KNN #
K-Nearest Neighbors is an unsupervised algorithm: to find the “closest” datapoint(s) to new, unseen data, one calculates a suitable “distance” between that point and each existing datapoint, and returns the top K datapoints with the smallest distances.
cuML’s KNN expects a cuDF DataFrame or a NumPy array (automatic chunking into NumPy arrays will be done in a future release), and first fits a special data structure to approximate the distance calculations, allowing query times of O(p log n) rather than the brute-force O(np) (where p = number of features):
The KNN function accepts the following parameters:
1. n_neighbors: int (default = 5). The top K closest datapoints you want the algorithm to return. If this number is large, then expect the algorithm to run slower.
2. should_downcast: bool (default = False). Currently only single precision is supported in the underlying index. Setting this to True will allow double-precision input arrays to be automatically downcast to single precision.
The methods that can be used with KNN are:
1. fit: Fit GPU index for performing nearest neighbor queries.
2. kneighbors: Query the GPU index for the k nearest neighbors of row vectors in X.
The model accepts only NumPy arrays or cuDF DataFrames as input. In order to convert your dataset to cuDF format, please read the cuDF documentation at https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the K-Nearest Neighbors model, please refer to the documentation at https://rapidsai.github.io/projects/cuml/en/latest/api.html#nearest-neighbors
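As a quick intuition for what `kneighbors` returns, here is a toy brute-force version in plain NumPy (a sketch only — cuML's GPU index exists precisely to avoid this exhaustive O(np) scan per query; the array sizes are arbitrary):

```python
import numpy as np

# Toy brute-force nearest neighbors for intuition only.
rng = np.random.RandomState(0)
X = rng.rand(100, 4).astype("float32")  # 100 samples, 4 features

def kneighbors(X, query, k=5):
    # squared Euclidean distance from the query to every row of X
    d = ((X - query) ** 2).sum(axis=1)
    # indices of the k smallest distances
    idx = np.argsort(d)[:k]
    return d[idx], idx

dist, idx = kneighbors(X, X[0], k=5)
print(idx)
```

Since the query here is a row of `X` itself, the first neighbor returned is that row, at distance zero.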
#Setup:
1. Install most recent Miniconda release compatible with Google Colab's Python install (3.6.7)
2. Install RAPIDS libraries
3. Set necessary environment variables
4. Copy RAPIDS .so files into current working directory, a workaround for conda/colab interactions
```
# install miniconda
!wget -c https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!bash ./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# install RAPIDS packages
!conda install -q -y --prefix /usr/local -c conda-forge \
-c rapidsai-nightly/label/cuda10.0 -c nvidia/label/cuda10.0 \
cudf cuml
# set environment vars
import sys, os, shutil
sys.path.append('/usr/local/lib/python3.6/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
# copy .so files to current working dir
for fn in ['libcudf.so', 'librmm.so']:
shutil.copy('/usr/local/lib/'+fn, os.getcwd())
import numpy as np
import pandas as pd
import cudf
import os
from sklearn.neighbors import NearestNeighbors as skKNN
from cuml.neighbors.nearest_neighbors import NearestNeighbors as cumlKNN
```
#Helper Functions#
```
# check if the mortgage dataset is present and extract the data from it; otherwise create a random dataset for clustering
import gzip
# change the path of the mortgage dataset if you have saved it in a different directory
def load_data(nrows, ncols, cached = 'data/mortgage.npy.gz',source='mortgage'):
if os.path.exists(cached) and source=='mortgage':
print('use mortgage data')
with gzip.open(cached) as f:
X = np.load(f)
X = X[np.random.randint(0,X.shape[0]-1,nrows),:ncols]
else:
# create a random dataset
print('use random data')
X = np.random.random((nrows,ncols)).astype('float32')
df = pd.DataFrame({'fea%d'%i:X[:,i] for i in range(X.shape[1])}).fillna(0)
return df
from sklearn.metrics import mean_squared_error
# this function checks if the results obtained from two different methods (sklearn and cuml) are the same
def array_equal(a,b,threshold=1e-3,with_sign=True,metric='mse'):
a = to_nparray(a)
b = to_nparray(b)
if with_sign == False:
a,b = np.abs(a),np.abs(b)
if metric=='mse':
error = mean_squared_error(a,b)
res = error<threshold
elif metric=='abs':
error = a-b
res = len(error[error>threshold]) == 0
elif metric == 'acc':
error = np.sum(a!=b)/(a.shape[0]*a.shape[1])
res = error<threshold
return res
# calculate the accuracy
def accuracy(a,b, threshold=1e-4):
a = to_nparray(a)
b = to_nparray(b)
c = a-b
c = len(c[c>1]) / (c.shape[0]*c.shape[1])
return c<threshold
# the function converts a variable from ndarray or dataframe format to numpy array
def to_nparray(x):
if isinstance(x,np.ndarray) or isinstance(x,pd.DataFrame):
return np.array(x)
elif isinstance(x,np.float64):
return np.array([x])
elif isinstance(x,cudf.DataFrame) or isinstance(x,cudf.Series):
return x.to_pandas().values
return x
```
#Run tests#
```
%%time
# nrows = number of samples
# ncols = number of features of each sample
nrows = 2**15
ncols = 40
X = load_data(nrows,ncols)
print('data',X.shape)
# the number of neighbors whose labels are to be checked
n_neighbors = 10
%%time
# use the sklearn KNN model to fit the dataset
knn_sk = skKNN(metric='sqeuclidean')
knn_sk.fit(X)
D_sk,I_sk = knn_sk.kneighbors(X,n_neighbors)
%%time
# convert the pandas dataframe to cudf dataframe
X = cudf.DataFrame.from_pandas(X)
%%time
# use cuml's KNN model to fit the dataset
knn_cuml = cumlKNN()
knn_cuml.fit(X)
# calculate the distance and the indices of the samples present in the dataset
D_cuml,I_cuml = knn_cuml.kneighbors(X,n_neighbors)
# compare the distance obtained while using sklearn and cuml models
passed = array_equal(D_sk,D_cuml, metric='abs') # metric used can be 'acc', 'mse', or 'abs'
message = 'compare knn: cuml vs sklearn distances %s'%('equal'if passed else 'NOT equal')
print(message)
# compare the labels obtained while using sklearn and cuml models
passed = accuracy(I_sk, I_cuml, threshold=1e-1)
message = 'compare knn: cuml vs sklearn indexes %s'%('equal'if passed else 'NOT equal')
print(message)
```
# Tutorial
Let us consider chapter 7 of the excellent treatise on the subject of exponential smoothing by Hyndman and Athanasopoulos [1].
We will work through all the examples in the chapter as they unfold.
[1] [Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2014.](https://www.otexts.org/fpp/7)
# Exponential smoothing
First we load some data. We have included the R data in the notebook for expedience.
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
data = [446.6565, 454.4733, 455.663 , 423.6322, 456.2713, 440.5881, 425.3325, 485.1494, 506.0482, 526.792 , 514.2689, 494.211 ]
index= pd.date_range(start='1996', end='2008', freq='A')
oildata = pd.Series(data, index)
data = [17.5534, 21.86 , 23.8866, 26.9293, 26.8885, 28.8314, 30.0751, 30.9535, 30.1857, 31.5797, 32.5776, 33.4774, 39.0216, 41.3864, 41.5966]
index= pd.date_range(start='1990', end='2005', freq='A')
air = pd.Series(data, index)
data = [263.9177, 268.3072, 260.6626, 266.6394, 277.5158, 283.834 , 290.309 , 292.4742, 300.8307, 309.2867, 318.3311, 329.3724, 338.884 , 339.2441, 328.6006, 314.2554, 314.4597, 321.4138, 329.7893, 346.3852, 352.2979, 348.3705, 417.5629, 417.1236, 417.7495, 412.2339, 411.9468, 394.6971, 401.4993, 408.2705, 414.2428]
index= pd.date_range(start='1970', end='2001', freq='A')
livestock2 = pd.Series(data, index)
data = [407.9979 , 403.4608, 413.8249, 428.105 , 445.3387, 452.9942, 455.7402]
index= pd.date_range(start='2001', end='2008', freq='A')
livestock3 = pd.Series(data, index)
data = [41.7275, 24.0418, 32.3281, 37.3287, 46.2132, 29.3463, 36.4829, 42.9777, 48.9015, 31.1802, 37.7179, 40.4202, 51.2069, 31.8872, 40.9783, 43.7725, 55.5586, 33.8509, 42.0764, 45.6423, 59.7668, 35.1919, 44.3197, 47.9137]
index= pd.date_range(start='2005', end='2010-Q4', freq='QS-OCT')
aust = pd.Series(data, index)
```
## Simple Exponential Smoothing
Let's use Simple Exponential Smoothing to forecast the oil data below.
```
ax=oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
plt.show()
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.")
```
Here we run three variants of simple exponential smoothing:
1. In ```fit1``` we do not use the auto optimization but instead choose to explicitly provide the model with the $\alpha=0.2$ parameter
2. In ```fit2``` as above we choose an $\alpha=0.6$
3. In ```fit3``` we allow statsmodels to automatically find an optimized $\alpha$ value for us. This is the recommended approach.
```
fit1 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.2,optimized=False)
fcast1 = fit1.forecast(3).rename(r'$\alpha=0.2$')
fit2 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.6,optimized=False)
fcast2 = fit2.forecast(3).rename(r'$\alpha=0.6$')
fit3 = SimpleExpSmoothing(oildata).fit()
fcast3 = fit3.forecast(3).rename(r'$\alpha=%s$'%fit3.model.params['smoothing_level'])
ax = oildata.plot(marker='o', color='black', figsize=(12,8))
fcast1.plot(marker='o', ax=ax, color='blue', legend=True)
fit1.fittedvalues.plot(marker='o', ax=ax, color='blue')
fcast2.plot(marker='o', ax=ax, color='red', legend=True)
fit2.fittedvalues.plot(marker='o', ax=ax, color='red')
fcast3.plot(marker='o', ax=ax, color='green', legend=True)
fit3.fittedvalues.plot(marker='o', ax=ax, color='green')
plt.show()
```
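To make the update rule concrete, here is the SES recursion written out by hand (a minimal sketch, assuming the level is initialized with the first observation — which differs slightly from statsmodels' own initialization heuristics):

```python
import numpy as np

# Simple exponential smoothing by hand:
#   l_t = alpha * y_t + (1 - alpha) * l_{t-1}
# The forecast for every future step is the last level.
def ses_forecast(y, alpha):
    level = y[0]  # assumption: initialize the level with the first observation
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

y = np.array([3.0, 5.0, 9.0, 20.0])
print(ses_forecast(y, alpha=0.5))  # 13.25
```

Note how larger values of `alpha` weight recent observations more heavily, which is exactly the behaviour compared across `fit1`, `fit2`, and `fit3` above.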
## Holt's Method
Let's take a look at another example.
This time we use air pollution data and Holt's Method.
We will fit three examples again.
1. In ```fit1``` we again choose not to use the optimizer and provide explicit values for $\alpha=0.8$ and $\beta=0.2$
2. In ```fit2``` we do the same as in ```fit1``` but choose to use an exponential model rather than a Holt's additive model.
3. In ```fit3``` we used a damped versions of the Holt's additive model but allow the dampening parameter $\phi$ to be optimized while fixing the values for $\alpha=0.8$ and $\beta=0.2$
```
fit1 = Holt(air).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped=True).fit(smoothing_level=0.8, smoothing_slope=0.2)
fcast3 = fit3.forecast(5).rename("Additive damped trend")
ax = air.plot(color="black", marker="o", figsize=(12,8))
fit1.fittedvalues.plot(ax=ax, color='blue')
fcast1.plot(ax=ax, color='blue', marker="o", legend=True)
fit2.fittedvalues.plot(ax=ax, color='red')
fcast2.plot(ax=ax, color='red', marker="o", legend=True)
fit3.fittedvalues.plot(ax=ax, color='green')
fcast3.plot(ax=ax, color='green', marker="o", legend=True)
plt.show()
```
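For reference, Holt's linear-trend recursion can be written out by hand like this (a sketch; the initial state is chosen here as $l_0 = y_0$ and $b_0 = y_1 - y_0$, one common convention, while statsmodels estimates its own initial values):

```python
import numpy as np

# Holt's linear trend method:
#   l_t = alpha * y_t + (1 - alpha) * (l_{t-1} + b_{t-1})
#   b_t = beta * (l_t - l_{t-1}) + (1 - beta) * b_{t-1}
#   forecast: y_{t+h} = l_t + h * b_t
def holt_forecast(y, alpha, beta, h=1):
    level, trend = y[0], y[1] - y[0]  # assumed initialization convention
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend

y = np.array([10.0, 12.0, 15.0])
print(holt_forecast(y, alpha=0.8, beta=0.2, h=1))
```

Unlike SES, the forecast is no longer flat: each extra step ahead adds one more multiple of the trend component.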
### Seasonally adjusted data
Let's look at some seasonally adjusted livestock data. We fit five Holt's models.
The below table allows us to compare results when we use exponential versus additive and damped versus non-damped.
Note: ```fit4``` does not allow the parameter $\phi$ to be optimized by providing a fixed value of $\phi=0.98$
```
fit1 = SimpleExpSmoothing(livestock2).fit()
fit2 = Holt(livestock2).fit()
fit3 = Holt(livestock2,exponential=True).fit()
fit4 = Holt(livestock2,damped=True).fit(damping_slope=0.98)
fit5 = Holt(livestock2,exponential=True,damped=True).fit()
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'initial_level', 'initial_slope']
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$l_0$","$b_0$","SSE"] ,columns=['SES', "Holt's","Exponential", "Additive", "Multiplicative"])
results["SES"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] = [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results
```
### Plots of Seasonally Adjusted Data
The following plots allow us to evaluate the level and slope/trend components of the above table's fits.
```
for fit in [fit2,fit4]:
pd.DataFrame(np.c_[fit.level,fit.slope]).rename(
columns={0:'level',1:'slope'}).plot(subplots=True)
plt.show()
print('Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method.')
```
## Comparison
Here we plot a comparison of Simple Exponential Smoothing and Holt's Methods for various additive, exponential, and damped combinations. All of the models' parameters will be optimized by statsmodels.
```
fit1 = SimpleExpSmoothing(livestock2).fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2).fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True).fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped=True).fit(damping_slope=0.98)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(livestock2, exponential=True, damped=True).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")
ax = livestock2.plot(color="black", marker="o", figsize=(12,8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color='red', legend=True)
fcast2.plot(ax=ax, color='green', legend=True)
fcast3.plot(ax=ax, color='blue', legend=True)
fcast4.plot(ax=ax, color='cyan', legend=True)
fcast5.plot(ax=ax, color='magenta', legend=True)
ax.set_ylabel('Livestock, sheep in Asia (millions)')
plt.show()
print('Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.')
```
## Holt-Winters Seasonal
Finally we are able to run the full Holt-Winters Seasonal Exponential Smoothing, including a trend component and a seasonal component.
statsmodels allows for all the combinations, as shown in the examples below:
1. ```fit1``` additive trend, additive seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit2``` additive trend, multiplicative seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit3``` additive damped trend, additive seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit4``` additive damped trend, multiplicative seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
The plot shows the results and forecast for ```fit1``` and ```fit2```.
The table allows us to compare the results and parameterizations.
```
fit1 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
fit2 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit(use_boxcox=True)
fit3 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add', damped=True).fit(use_boxcox=True)
fit4 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul', damped=True).fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]
ax = aust.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style='--', color='red')
fit2.fittedvalues.plot(ax=ax, style='--', color='green')
fit1.forecast(8).rename('Holt-Winters (add-add-seasonal)').plot(ax=ax, style='--', marker='o', color='red', legend=True)
fit2.forecast(8).rename('Holt-Winters (add-mul-seasonal)').plot(ax=ax, style='--', marker='o', color='green', legend=True)
plt.show()
print("Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.")
results
```
### The Internals
It is possible to get at the internals of the Exponential Smoothing models.
Here we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\hat{y}_t$.
```
df = pd.DataFrame(np.c_[aust, fit1.level, fit1.slope, fit1.season, fit1.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit1.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)
df = pd.DataFrame(np.c_[aust, fit2.level, fit2.slope, fit2.season, fit2.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index)
df.append(fit2.forecast(8).rename(r'$\hat{y}_t$').to_frame(), sort=True)
```
Finally, let's look at the levels, slopes/trends and seasonal components of the models.
```
states1 = pd.DataFrame(np.c_[fit1.level, fit1.slope, fit1.season], columns=['level','slope','seasonal'], index=aust.index)
states2 = pd.DataFrame(np.c_[fit2.level, fit2.slope, fit2.season], columns=['level','slope','seasonal'], index=aust.index)
fig, [[ax1, ax4],[ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12,8))
states1[['level']].plot(ax=ax1)
states1[['slope']].plot(ax=ax2)
states1[['seasonal']].plot(ax=ax3)
states2[['level']].plot(ax=ax4)
states2[['slope']].plot(ax=ax5)
states2[['seasonal']].plot(ax=ax6)
plt.show()
```
```
from IPython.display import display, Javascript, HTML
from datetime import datetime
from utils.notebooks import get_date_slider_from_datetime
from ipywidgets import Layout, interact, Output, widgets, fixed
from ipywidgets.widgets import Dropdown
%store -r the_page
%store -r agg_actions
#%store -r calculator
#%store -r editors_conflicts
%store -r sources
# if ('the_page' not in locals() or
# 'agg_actions' not in locals() or
# 'calculator' not in locals() or
# 'editors_conflicts' not in locals()):
# import pickle
# print("Loading default data...")
# the_page = pickle.load(open("data/the_page.p",'rb'))
# agg_actions = pickle.load(open("data/agg_actions.p",'rb'))
# calculator = pickle.load(open("data/calculator.p",'rb'))
# editors_conflicts = pickle.load(open("data/editors_conflicts.p",'rb'))
display(Javascript('IPython.notebook.execute_cells_below()'))
re_hide = """
<script>
var update_input_visibility = function () {
Jupyter.notebook.get_cells().forEach(function(cell) {
if (cell.metadata.hide_input) {
cell.element.find("div.input").hide();
}
})
};
update_input_visibility();
</script>
"""
display(HTML(re_hide))
scroll_to_top = """
<script>
document.getElementById('notebook').scrollIntoView();
</script>
"""
%%html
<style>
summary{
display:list-item;
}
.widget-radio-box{
flex-direction: row;
}
.widget-radio-box input{
margin:0 6px 0 5px
}
</style>
%%capture
%load_ext autoreload
%autoreload 2
```
### <span style="color:green"> Modules Imported </span>
```
## Modules Imported ##
# Display
from IPython.display import display, Markdown as md, clear_output
from datetime import date
import urllib
# APIs
from wikiwho_wrapper import WikiWho
from external.wikipedia import WikipediaDV, WikipediaAPI
from external.wikimedia import WikiMediaDV, WikiMediaAPI
from external.xtools import XtoolsAPI, XtoolsDV
# Data Processing
import pickle
import pandas as pd
# Visualization tools
import qgrid
import matplotlib.pyplot as plt
# Page views timeline
from visualization.views_listener import ViewsListener
# Change actions timeline
from visualization.actions_listener import ActionsListener
# Conflicts visualization
from visualization.conflicts_listener import ConflictsListener, ConflictsActionListener
from visualization.calculator_listener import ConflictCalculatorListener
# Word cloud visualization
from visualization.wordcloud_listener import WCListener, WCActionsListener
from visualization.wordclouder import WordClouder
# Wikipedia talk pages visualization
from visualization.talks_listener import TalksListener
from visualization.topics_listener import TopicsListener
# Tokens ownership visualization
from visualization.owned_listener import OwnedListener
# To remove stopwords
from visualization.editors_listener import remove_stopwords
# Metrics management
from metrics.conflict import ConflictManager
from metrics.token import TokensManager
# For language selection
from utils.lngselection import abbreviation, lng_listener
# Load the variables stored in the last notebook
%store -r the_page
%store -r total_actions
#%store -r conflict_calculator
#%store -r conflicts_by_editors
%store -r lng_selected
# Check them if in the namespace, otherwise load the default data.
# if ('the_page' not in locals() or
# 'total_actions' not in locals() or
# 'conflict_calculator' not in locals() or
# 'conflicts_by_editors' not in locals()):
# print("Loading default data...")
# the_page = pickle.load(open("data/the_page.p",'rb'))
# total_actions = pickle.load(open("data/agg_actions.p",'rb'))
# conflict_calculator = pickle.load(open("data/calculator.p",'rb'))
# conflicts_by_editors = pickle.load(open("data/editors_conflicts.p",'rb'))
```
# Focus on a selected editor
In this notebook, we shift the focus to a particular editor. Instead of looking at a particular page and exploring the activity related to it, we select an editor that contributed to the page that has been analyzed, and explore their activity within the page.
```
display(md(f"### ***Page: {the_page['title']} ({lng_selected.upper()})***"))
```
---
#### Troubleshooting:
- Allow some time for the notebook to run fully before interacting with the interface controls. For articles with a long revision history, this could take minutes and the interaction with the controls will be slow.
- **All cells should run automatically** when you open the notebook. If that is not the case, please just reload the tab in your browser.
- After choosing a new editor, please rerun the cells/modules you want to use.
- You should not see any code when you run it for the first time. If you do, please let us know by posting an issue in our Github repository: https://github.com/gesiscss/IWAAN.
---
# A. Conflict score
To calculate the conflict score for an editor, we use the same conflict metric that was used in the previous notebook.
## A.1 Antagonists - Which editors have been in conflict with others?
The table below presents the conflict score and other related metrics per editor, identified by both editor_id and editor username:
<br>
<details>
<summary style="cursor: pointer;font-weight:bold">Columns description</summary>
- **conflicts**: the total number of conflicts
- **elegibles**: the total number of eligible actions performed by the editor
- **conflict**: the sum of conflict scores of all actions divided by the number of eligible actions
</details>
You can interact with the table and select an editor. Once you click on a row, the graph below will be updated to display the distribution of two different metrics (Y-axis) across time (X-axis). The metrics displayed in the black and red traces can be selected in the controls of the plot.
<br>
<details>
<summary style="cursor: pointer;font-weight:bold">Description of available metrics</summary>
- **Conflict Score**: the sum of conflict scores of all actions divided by the number of eligible actions
- **Absolute Conflict Score**: the sum of conflict scores of all actions (without division)
- **Conflict Ratio**: the count of all conflicts divided by the number of eligible actions
- **Number of Conflicts**: the total number of conflicts
- **Total Elegible Actions**: the total number of eligible actions
</details>
```
# Build editors agg conflicts.
editors_conflicts = agg_actions.groupby(pd.Grouper(
key='editor_id')).agg({'conflicts': 'sum', 'elegibles': 'sum', 'conflict': 'sum'}).reset_index()
editors_conflicts["conflict"] = (editors_conflicts["conflict"] / editors_conflicts["elegibles"])
editors_conflicts = editors_conflicts[editors_conflicts["editor_id"] != 0]
editors_conflicts = agg_actions[['editor_id', 'editor']].drop_duplicates().merge(editors_conflicts.dropna(),
on='editor_id').set_index('editor_id').dropna()
def display_conflict_score(editor_df):
global listener
listener = ConflictsListener(editor_df, bargap=0.8)
metrics = ['Conflict Score', 'Absolute Conflict Score',
'Conflict Ratio', 'Number of Conflicts',
'Total Elegible Actions']
#display(md(f'*Total Page conflict score: {calculator.get_page_conflict_score()}*'))
display(md(f'*Total Page conflict score: {editor_df.conflict.sum() / editor_df.elegibles.sum()}*'))
# Visualization
interact(listener.listen,
#_range = get_date_slider_from_datetime(editor_df['year_month']),
_range1=widgets.DatePicker(description='Date starts', value=editor_df['rev_time'].iloc[0], layout=Layout(width='25%')),
_range2=widgets.DatePicker(description='Date ends', value=editor_df['rev_time'].iloc[-1], layout=Layout(width='25%')),
granularity=Dropdown(options=['Yearly', 'Monthly', 'Daily'], value='Daily'),
black=Dropdown(options=metrics, value='Conflict Score'),
red=Dropdown(options= ['None'] + metrics, value='None'))
def select_editor(editor):
global the_editor
global editor_inputname
editor_inputname=editor
wikipedia_dv = WikipediaDV(WikipediaAPI(lng=lng_selected))
try:
the_editor = wikipedia_dv.get_editor(int(editor_inputname))
except:
the_editor = wikipedia_dv.get_editor(editor_inputname[2:])
with out:
%store the_editor
%store editor_inputname
clear_output()
display(md("### Current Selection:"))
if 'invalid' in the_editor:
display(f"The editor {editor_inputname} was not found, try a different editor")
else:
# display the data that will be passed to the next notebook
url = f'{wikipedia_dv.api.base}action=query&list=users&ususerids={editor_inputname}&usprop=blockinfo|editcount|registration|gender&format=json'
print("Editor's metadata can be found in:")
print(url)
display(the_editor.to_frame('values'))
display(md(f"#### Evolution of the Conflict Score of *{the_editor['name']}*"))
editor_df = agg_actions[agg_actions['editor_id'] == the_editor['userid']].copy()
#editor_df = calculator.elegible_actions[
#calculator.elegible_actions['editor'] == editor_inputname].copy()
display_conflict_score(editor_df)
def on_selection_change(change):
try:
select_editor(qg_obj.get_selected_df().iloc[0].name)
except:
print('Problem parsing the name. Execute the cell again and try a different editor.')
qgrid.set_grid_option('maxVisibleRows', 5)
qg_obj = qgrid.show_grid(editors_conflicts)
qg_obj.observe(on_selection_change, names=['_selected_rows'])
display(md("### Select one editor (row) to continue:"))
display(md('**Recommendation:** select an editor with *many conflicts* and *mid-high conflict score*'))
display(qg_obj)
out = Output()
display(out)
# select an editor that does not contain 0| at the beginning
for ed in editors_conflicts.index:
if ed != 0:
select_editor(ed)
break
```
<span style="color: #626262"> Try yourself! This is what will happen when you select an editor: </span>
```
%%script false --no-raise-error
### IMPORTANT NOTE: COMMENT THE ABOVE LINE TO EXECUTE THE CELL ###
### ----------------------------------------------------------------- ###
### TRY YOURSELF! THIS IS WHAT WILL HAPPEN WHEN YOU SELECT AN EDITOR ###
### ----------------------------------------------------------------- ###
## This is the page you used ##
print('The page that is being used:', the_page['title'], f'({lng_selected.upper()})')
## Use the variable from the last notebook: conflicts_by_editors (pd.DataFrame) ##
## Display the dataframe using interactive grid, you could learn more through the doc: ##
## https://qgrid.readthedocs.io/en/latest/ ##
qgrid.set_grid_option('maxVisibleRows', 5) # Set max visible rows for the grid.
qgrid_init = qgrid.show_grid(editors_conflicts)
display(qgrid_init)
## Get the editor info with Wikipedia API (get_editor() method), more details you could check: ##
## https://github.com/gesiscss/wikiwho_demo/blob/master/external/api.py ##
## https://github.com/gesiscss/wikiwho_demo/blob/master/external/wikipedia.py ##
wikipedia_dv = WikipediaDV(WikipediaAPI(lng=lng_selected))
# This is an example editor index. You could change it manually by typing in a new index from
# the above grid, e.g. 737021
editor_input_id = editors_conflicts.index[1]
# Get the editor's information in the form of pd.DataFrame
editor_info = wikipedia_dv.get_editor(int(editor_input_id))
## Display the basic information of the selected editor ##
editor_url = f'{wikipedia_dv.api.base}action=query&list=users&ususerids={editor_input_id}&usprop=blockinfo|editcount|registration|gender&format=json'
print("Editor's data can be found in:")
print(editor_url)
display(md("### Current Selection:"))
display(editor_info.to_frame('values'))
## Interactive evolution of conflict score of this editor, using ConflictListner, more details see ##
## https://github.com/gesiscss/wikiwho_demo/blob/master/visualization/conflicts_listener.py ##
## Also use the variable from the last notebook: total_actions ##
display(md(f"#### Evolution of the Conflict Score of *{editor_info['name']}*"))
# Dataframe containing the selected editor's info for interactive
editor_df = agg_actions[agg_actions['editor_id'] == the_editor['userid']].copy()
# Create a ConflictsListener instance.
conflicts_listener = ConflictsListener(editor_df, bargap = 0.8)
# Set parameters
begin_date = date(2005, 3, 1)
end_date = date(2019, 6, 1)
frequency = 'Daily' # 'Monthly', 'Daily'
# The metrics we need:
# ['Conflict Score', 'Absolute Conflict Score', 'Conflict Ratio', 'Number of Conflicts',
# 'Total Elegible Actions', ('None')]
# Note: only 'red_line' has 'None' option.
black_line = 'Conflict Score'
red_line = 'None'
print('Time range from', begin_date.strftime("%Y-%m-%d"), 'to', end_date.strftime("%Y-%m-%d"))
conflicts_listener.listen(
_range1=begin_date,
_range2=end_date,
granularity = frequency,
black = black_line,
red = red_line
)
# store the editor_input_id and editor_info for the usage in next notebook
%store editor_input_id
%store editor_info
```
---
## A.2 Words in conflict - What did this editor disagree about?
```
display(md(f"***Page: {the_page['title']} ({lng_selected.upper()})***"))
```
The WordCloud displays the most common token strings (words) that a particular editor inserted or deleted and that entered into conflict with other editors. The size of a token string in the WordCloud indicates the frequency of actions.
<br>
<details>
<summary style="cursor: pointer;font-weight:bold">Source description</summary>
- **Only Conflicts**: use only the actions that are in conflict.
- **Elegible Actions**: use only the actions that can potentially enter into conflict, i.e. actions
that have occurred at least twice, e.g. the token x has been inserted twice (which necessarily implies
it was removed once), or the token x has been deleted twice (which necessarily implies it was inserted twice)
- **All Actions**: use all tokens regardless of conflict
</details>
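Behind the visualization, the word sizes come from a plain frequency count over the filtered actions. The sketch below is a minimal approximation of the "Only Conflicts" source, assuming a simplified, hypothetical token-level table with `token` and `conflict` columns (the real data lives in `sources["tokens_all"]` and is rendered by `WCListener`):

```python
from collections import Counter

import pandas as pd

# Hypothetical, simplified token-level data: one row per action on a token,
# with a conflict score of 0 for actions that never entered into conflict.
tokens = pd.DataFrame({
    'token':    ['war', 'war', 'treaty', 'the', 'war', 'border'],
    'conflict': [1.2,   0.0,   0.7,      0.0,   0.9,   0.0],
})

# Keep only the actions that entered into conflict ("Only Conflicts" source).
in_conflict = tokens[tokens['conflict'] > 0]

# Frequency of each conflicting token string; this frequency is what
# determines the size of each word in the WordCloud.
freq = Counter(in_conflict['token'])
print(freq.most_common(2))  # [('war', 2), ('treaty', 1)]
```

Switching the source to "Elegible Actions" or "All Actions" only changes the filter applied before counting.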
```
# create and display the button
button2 = widgets.Button(description="Show strings in conflict", layout=Layout(width='180px'))
display(button2)
def on_click_token_conflict(b):
with out2:
clear_output()
display(md(f"***Editor: {the_editor['name']}***"))
# listener
listener = WCListener(sources={"tokens_source": sources}, lng=lng_selected, specific_editor=str(editor_inputname))
# visualization
actions_all = remove_stopwords(sources["tokens_all"], lng=lng_selected)
interact(listener.listen,
_range1 = widgets.DatePicker(description='Date starts', value=actions_all.sort_values('rev_time')['rev_time'].iloc[0], layout=Layout(width='25%')),
_range2 = widgets.DatePicker(description='Date ends', value=actions_all.sort_values('rev_time')['rev_time'].iloc[-1], layout=Layout(width='25%')),
source = Dropdown(options=['All Actions', 'Elegible Actions', 'Only Conflicts'], value='Only Conflicts'),
action = Dropdown(options=['Both', 'Just Insertions', 'Just Deletions'], value='Both'),
editor = fixed('All'),
stopwords = widgets.RadioButtons(options=['Not included', 'Included'], value='Not included', description='Stopwords', layout={'width': '50%'}))
out2 = Output()
display(out2)
# set the event
button2.on_click(on_click_token_conflict)
# trigger the event with the default value
on_click_token_conflict(button2)
```
<span style="color: #626262"> Try yourself! This is what will happen when you click 'Show Tokens Into Conflict' button: </span>
```
%%script false --no-raise-error
### IMPORTANT NOTE: COMMENT THE ABOVE LINE TO EXECUTE THE CELL ###
### ---------------------------------------------------------------------------------------- ###
### TRY YOURSELF! THIS IS WHAT WILL HAPPEN WHEN YOU CLICK 'Show Tokens Into Conflict' BUTTON ###
### ---------------------------------------------------------------------------------------- ###
## Filter the source data by selected editor, using the instance created in the second notebook ##
## 'conflict_calculator'. Use three of its attributes: all_actions, elegible_actions and conflicts ##
## WordCloud, core visual code lies in WCListener, then the interact function ##
## make it interactive, mode details see: ##
## https://github.com/gesiscss/wikiwho_demo/blob/master/visualization/wordcloud_listener.py ##
editor_info = the_editor
editor_input_id = editor_inputname
# Create a WCListener instance
wclistener = WCListener(sources={"tokens_source": sources}, lng=lng_selected, specific_editor=str(editor_info['userid']))
# Visualization: you could also perform it by coding!
begin_date = date(2005, 3, 1)
end_date = date(2019, 7, 4)
actions_source='Only Conflicts' # 'Elegible Actions', 'All actions', 'Only Conflicts'
action_type='Both' # 'Just Insertions', 'Just Deletions', 'Both'
editor='All'
stopwords = 'Not included' # 'Not included', 'Included'
wclistener.listen(
_range1=begin_date,
_range2=end_date,
source=actions_source,
action=action_type,
editor=editor,
stopwords=stopwords)
## This is the page you used and the editor you select in the above grid. ##
print('The page that is being used:', the_page['title'], f'({lng_selected.upper()})')
print('Selected editor:', editor_info['name'])
print('Time range from', begin_date.strftime("%Y-%m-%d"), 'to', end_date.strftime("%Y-%m-%d"))
```
---
# B. Productive activity
## B.1 Activity and productivity - How much work did they do and how much of their work was not undone?
```
display(md(f"***Page: {the_page['title']} ({lng_selected.upper()})***"))
```
Activity does not mean productivity. The following graphs help disentangle these two components, for example by comparing the total actions performed with the total actions that survived 48 hours. As previously discussed, actions that are reversed in less than 48 hours are considered noisy; they often correspond to vandalism or low-quality edits. The controls allow you to select several traces to, for example, compare this performance in terms of insertions or deletions as, arguably, insertions could have a higher weight in terms of enriching the content of an encyclopedia.
<br>
<details>
<summary style="cursor: pointer;font-weight:bold">Options description</summary>
- **adds**: number of first-time insertions
- **adds_surv_48h**: number of first-time insertions that survived at least 48 hours
- **adds_persistent**: number of first-time insertions that survived until, at least, the end of the month
- **adds_stopword_count**: number of insertions that were stop words
- **dels**: number of deletions
- **dels_surv_48h**: number of deletions that were not reinserted within the next 48 hours
- **dels_persistent**: number of deletions that were not reinserted until, at least, the end of the month
- **dels_stopword_count**: number of deletions that were stop words
- **reins**: number of reinsertions
- **reins_surv_48h**: number of reinsertions that survived at least 48 hours
- **reins_persistent**: number of reinsertions that survived until the end of the month
- **reins_stopword_count**: number of reinsertions that were stop words
</details>
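The activity/productivity gap the traces visualize can be summarized as a single ratio: the fraction of insertions that were not undone within 48 hours. A minimal sketch, assuming a hypothetical aggregated-actions table with the `adds` and `adds_surv_48h` columns described above:

```python
import pandas as pd

# Hypothetical monthly aggregates for one editor.
agg = pd.DataFrame({
    'adds':          [120, 80, 200],
    'adds_surv_48h': [90,  60, 110],
})

# Share of insertions that survived at least 48 hours: values close to 1.0
# indicate productive work, while values close to 0.0 suggest edits that
# were quickly reverted (e.g. vandalism or low-quality contributions).
survival_rate = agg['adds_surv_48h'].sum() / agg['adds'].sum()
print(round(survival_rate, 3))  # 260 / 400 = 0.65
```

The same ratio can be computed per month (row-wise) to see how an editor's productivity evolves over time.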
```
# create and display the button
button1 = widgets.Button(description="Show Editor's Activities", layout=Layout(width='180px'))
display(button1)
def on_click_activity(b):
with out1:
clear_output()
display(md(f"***Editor: {the_editor['name']}***"))
editor_agg_actions = agg_actions[agg_actions['editor_id']==the_editor.userid]
#Listener
listener = ActionsListener(sources, lng=lng_selected)
actions = (editor_agg_actions.loc[:,'total':'total_stopword_count'].columns.append(
editor_agg_actions.loc[:,'adds':'reins_stopword_count'].columns)).values.tolist()
# Visualization
_range = get_date_slider_from_datetime(editor_agg_actions['rev_time'])
listener.actions_one_editor = editor_agg_actions
interact(listener.actions_listen,
#_range = get_date_slider_from_datetime(editor_agg_actions['year_month']),
_range1=widgets.DatePicker(description='Date starts', value=editor_agg_actions['rev_time'].iloc[0], layout=Layout(width='25%')),
_range2=widgets.DatePicker(description='Date ends', value=editor_agg_actions['rev_time'].iloc[-1], layout=Layout(width='25%')),
editor=fixed('All'),
granularity=Dropdown(options=['Yearly', 'Monthly', "Weekly", "Daily"], value='Monthly'),
black=Dropdown(options=actions, value='total'),
red=Dropdown(options= ['None'] + actions, value='total_surv_48h'),
green=Dropdown(options= ['None'] + actions, value='None'),
blue=Dropdown(options= ['None'] + actions, value='None'))
out1 = Output()
display(out1)
# set the event
button1.on_click(on_click_activity)
# trigger the event with the default value
on_click_activity(button1)
```
<span style="color: #626262"> Try yourself! This is what will happen when you click 'Show Editor's Activities' button: </span>
```
%%script false --no-raise-error
### IMPORTANT NOTE: COMMENT THE ABOVE LINE TO EXECUTE THE CELL ###
### ---------------------------------------------------------------------------------------- ###
### TRY YOURSELF! THIS IS WHAT WILL HAPPEN WHEN YOU CLICK 'Show Editor's Activities' BUTTON ###
### ---------------------------------------------------------------------------------------- ###
editor_info = the_editor
editor_input_id = editor_inputname
editor_agg_actions = agg_actions[agg_actions['editor_id']==the_editor.userid]
# Create an action Listener instance
actions_listener = ActionsListener(sources, lng=lng_selected)
# Visualization: you could also perform it by coding!
begin_date = date(2005, 3, 1)
end_date = date(2019, 7, 4)
editor='All'
frequency = 'Monthly' # "Yearly","Monthly", "Weekly", "Daily"
black_line = 'total'
red_line = 'total_surv_48h'
green_line = 'None'
blue_line = 'None'
# Visualization
actions_listener.actions_one_editor = editor_agg_actions
actions_listener.actions_listen(
_range1=begin_date,
_range2=end_date,
editor=editor,
granularity=frequency,
black=black_line,
red=red_line,
green=green_line,
blue=blue_line)
## This is the page you used and the editor you select in the above grid. ##
print('The page that is being used:', the_page['title'], f'({lng_selected.upper()})')
print('Selected editor:', editor_info['name'])
print('Time range from', begin_date.strftime("%Y-%m-%d"), 'to', end_date.strftime("%Y-%m-%d"))
```
---
## B.2 Tokens owned - How much original text from the editor does (still) exist?
```
display(md(f"***Page: {the_page['title']} ({lng_selected.upper()})***"))
```
Another important metric to assess the impact of an editor is their ownership (or authorship), i.e. who wrote a certain word for the first time. The ownership (or authorship) is based on the WikiWho algorithm ([Flöck & Acosta, 2014](http://wwwconference.org/proceedings/www2014/proceedings/p843.pdf)). The timeline below displays the percentage of tokens (Y-axis) owned by an editor at any given point in time (X-axis).
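Conceptually, the ownership percentage at a given point in time is just a group-by over the tokens alive in the article at that moment. A minimal sketch, assuming a hypothetical simplified snapshot with an `o_editor` column naming the original author of each surviving token (the real computation is done by `OwnedListener`):

```python
import pandas as pd

# Hypothetical snapshot of the tokens alive in the article at one point
# in time, each tagged with the editor who first wrote it.
alive_tokens = pd.DataFrame({
    'token':    ['the', 'treaty', 'was', 'signed', 'in', '1648'],
    'o_editor': ['A',   'B',      'A',   'B',      'A',  'B'],
})

# Percentage of the current text owned by each editor.
ownership_pct = (alive_tokens.groupby('o_editor').size()
                 / len(alive_tokens) * 100)
print(ownership_pct.to_dict())  # {'A': 50.0, 'B': 50.0}
```

Repeating this computation per day (or month) and plotting one editor's share against time yields the timeline shown below.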
```
# create and display the button
button3 = widgets.Button(description="Show Ownership")
display(button3)
def on_click_ownership(b):
with out3:
clear_output()
display(md(f"***Editor: {the_editor['name']}***"))
all_actions = remove_stopwords(sources["tokens_all"], lng=lng_selected)
listener = OwnedListener(all_actions, str(editor_inputname))
traces = ['Tokens Owned', 'Tokens Owned (%)']
# Visualization
interact(listener.listen,
#_range = get_date_slider_from_datetime(listener.days),
_range1=widgets.DatePicker(description='Date starts', value=listener.days.iloc[-1], layout=Layout(width='25%')),
_range2=widgets.DatePicker(description='Date ends', value=listener.days.iloc[0], layout=Layout(width='25%')),
granularity=Dropdown(options=['Yearly', 'Monthly', "Weekly", 'Daily'], value='Monthly'),
trace=Dropdown(options=traces, value='Tokens Owned (%)', description='metric'))
out3 = Output()
display(out3)
# set the event
button3.on_click(on_click_ownership)
# trigger the event with the default value
on_click_ownership(button3)
```
<span style="color: #626262"> Try yourself! This is what will happen when you click 'Show Ownership' button: </span>
```
%%script false --no-raise-error
### IMPORTANT NOTE: COMMENT THE ABOVE LINE TO EXECUTE THE CELL ###
### ----------------------------------------------------------------------------- ###
### TRY YOURSELF! THIS IS WHAT WILL HAPPEN WHEN YOU CLICK 'Show Ownership' BUTTON ###
### ----------------------------------------------------------------------------- ###
editor_info = the_editor
editor_input_id = editor_inputname
## This is the page you used and the editor you select in the above grid. ##
print('The page that is being used:', the_page['title'], f'({lng_selected.upper()})')
print('Selected editor:', editor_info['name'])
## Tokens ownership visualization, core visual code lies in OwnedListener, then the interact function ##
## make it interactive, mode details see: ##
## https://github.com/gesiscss/wikiwho_demo/blob/master/visualization/owned_listener.py ##
# Get all actions of all editors in this page, using the 'conflict_calculator' instance, created
# in the second notebook.
all_actions_cal = remove_stopwords(sources["tokens_all"], lng=lng_selected)
# Create an OwnedListener instance for the selected editor.
ownedlistener = OwnedListener(all_actions_cal, str(editor_inputname))
owned_traces = ['Tokens Owned', 'Tokens Owned (%)']
# Visualization: you could also perform it by coding!
begin_date = date(2005, 9, 18)
end_date = date(2021, 3, 16)
frequency = 'Monthly' # 'Daily', 'Yearly', 'Monthly'
owned_trace = 'Tokens Owned (%)' # 'Tokens Owned', 'Tokens Owned (%)'
print('Time range from', begin_date.strftime("%Y-%m-%d"), 'to', end_date.strftime("%Y-%m-%d"))
ownedlistener.listen(
_range1=begin_date,
_range2=end_date,
granularity=frequency,
trace=owned_trace
)
scroll_to_top = """
<script>
document.getElementById('notebook').scrollIntoView();
</script>
"""
display(HTML(scroll_to_top))
```
# Unity ML-Agents
## Environment Basics
This notebook contains a walkthrough of the basic functions of the Python API for Unity ML-Agents. For instructions on building a Unity environment, see [here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md).
### 1. Set environment parameters
Be sure to set `env_name` to the name of the Unity environment file you want to launch.
```
env_name = "3DBall" # Name of the Unity environment binary to launch
train_mode = True # Whether to run the environment in training or inference mode
```
### 2. Load dependencies
```
import matplotlib.pyplot as plt
import numpy as np
from unityagents import UnityEnvironment
%matplotlib inline
```
### 3. Start the environment
`UnityEnvironment` launches and begins communication with the environment when instantiated.
Environments contain _brains_ which are responsible for deciding the actions of their associated _agents_. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
env = UnityEnvironment(file_name=env_name)
# Examine environment parameters
print(str(env))
# Set the default brain to work with
default_brain = env.brain_names[0]
brain = env.brains[default_brain]
```
### 4. Examine the observation and state spaces
We can reset the environment to be provided with an initial set of observations and states for all the agents within the environment. In ML-Agents, _states_ refer to a vector of variables corresponding to relevant aspects of the environment for an agent. Likewise, _observations_ refer to a set of relevant pixel-wise visuals for an agent.
```
# Reset the environment
env_info = env.reset(train_mode=train_mode)[default_brain]
# Examine the state space for the default brain
print("Agent state looks like: \n{}".format(env_info.vector_observations[0]))
# Examine the observation space for the default brain
for observation in env_info.visual_observations:
print("Agent observations look like:")
if observation.shape[3] == 3:
plt.imshow(observation[0,:,:,:])
else:
plt.imshow(observation[0,:,:,0])
```
### 5. Take random actions in the environment
Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the `action_space_type` of the default brain.
Once this cell is executed, 10 messages will be printed that detail how much reward will be accumulated for the next 10 episodes. The Unity environment will then pause, waiting for further signals telling it what to do next. Thus, not seeing any animation is expected when running this cell.
```
for episode in range(10):
env_info = env.reset(train_mode=train_mode)[default_brain]
done = False
episode_rewards = 0
while not done:
if brain.vector_action_space_type == 'continuous':
env_info = env.step(np.random.randn(len(env_info.agents),
brain.vector_action_space_size))[default_brain]
else:
env_info = env.step(np.random.randint(0, brain.vector_action_space_size,
size=(len(env_info.agents))))[default_brain]
episode_rewards += env_info.rewards[0]
done = env_info.local_done[0]
print("Total reward this episode: {}".format(episode_rewards))
```
### 6. Close the environment when finished
When we are finished using an environment, we can close it with the function below.
```
env.close()
```
# Geolocating Aerial Photography
```
import cv2
import gdal
import geopy
import glob
import imutils
import math
import os
import progressbar
import scipy.linalg
import matplotlib.image
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from geopy import distance
from PIL import Image
from pyquaternion import Quaternion
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
class TopoCompare():
def prep_photos(self, images_path, dest_path, crop_left=0, crop_top=0, crop_right=0, crop_bottom=0):
"""
Converts .tif files to .jpg files for ingestion into COLMAP. May also crop images.
Inputs:
images_path: path to folder containing images to be prepped.
dest_path: path to folder to where converted images will be saved.
crop_left: pixels to crop on left side of photo.
crop_top: pixels to crop off top of photo.
crop_right: pixels to crop on right side of photo.
crop_bottom: pixels to crop from bottom of photo.
"""
for file in os.listdir(images_path):
if (len(file.split('.')) > 1) and (file.split('.')[1] == 'tif'):
file_path = os.path.join(images_path, file)
image = Image.open(file_path)
width, height = image.size
image = image.crop((crop_left, crop_top, width-crop_right, height-crop_bottom))
image.save(os.path.join(dest_path,file.split('.')[0]+"_cropped.jpg"))
else:
pass
return None
def ingest_images(self, path):
"""
Ingests image center information from the images.txt file generated after pressing file>export model as text in COLMAP.
inputs:
path: path to images.txt file created by exporting sparse model as text from COLMAP
outputs:
df: dataframe containing pertinent information from the images.txt file
"""
#top 4 rows are header and every other row after that is not needed here
skiprows = [0,1,2,3]
#counts the number of lines in the file.
num_lines = 0
with open(path, 'r') as f:
for line in f:
num_lines += 1
skiprows.extend([i for i in range(5,num_lines,2)])
df = pd.read_csv(path, header=None, skiprows=skiprows, delimiter=' ')
df.columns = ['IMAGE_ID', 'QW', 'QX', 'QY', 'QZ', 'TX', 'TY', 'TZ', 'CAMERA_ID', "NAME"]
return df
def find_camera_centerpoints(self, df):
"""
Takes the resulting dataframe from "ingest_images" and returns a dataframe with the explicit
x,y,z centerpoints of the cameras.
inputs:
df: dataframe that results from properly executing ingest_images.
outputs:
df_return: input df with X,Y,Z centerpoints appended
"""
df_locs = pd.DataFrame([])
for row in range(0,df.shape[0]):
quaternion = Quaternion(w=df.iloc[row][1], x=df.iloc[row][2], y=df.iloc[row][3], z=df.iloc[row][4])
rot = quaternion.rotation_matrix
trans = np.array([df.iloc[row][5], df.iloc[row][6], df.iloc[row][7]])
pos = pd.DataFrame(np.matmul(-rot.T,trans))
df_locs = pd.concat([df_locs, pos.T])
df_locs.index = range(0,df.shape[0])
df_locs.columns = ['X', 'Y', 'Z']
df_return = pd.concat([df, df_locs], axis = 1, sort= True)
return df_return
def ingest_points(self, path):
"""
Ingests points information from the points3D.txt file generated after pressing file>export model as text in COLMAP.
inputs:
path: path to points3D.txt file created by exporting sparse model as text from COLMAP
outputs:
df_points: dataframe containing pertinent information from the points3D.txt file
"""
skiprows = [0,1,2]
df_points = pd.read_csv(path, header=None, usecols=[0,1,2,3,7,8], skiprows=skiprows, delimiter=' ')
df_points.columns = ['POINT3D_ID', 'X', 'Y', 'Z', 'ERROR','IMAGE_ID']
df_points.index = range(df_points.shape[0])
return df_points
def reduce_points(self, df_points, df_camera_centerpoints):
"""
Excludes points from the points3D dataframe that are outside the X and Y bounds of the df_camera_centerpoints
X and Y range. This increases the accuracy of the 3D representation and excludes datapoints that may be warped
due to their position outside the boundaries of the image photomerge.
inputs:
df_points: the output of a correctly executed ingest_points function.
df_camera_centerpoints: the output of a correctly executed find_camera_centerpoints function.
outputs:
df_points_reduced: df_points excluding points where X,Y is greater or less than the maximum extent of X,Y in df_camera_centerpoints
"""
df_points_reduced = df_points[(df_points['X'] < df_camera_centerpoints['X'].max())]
df_points_reduced = df_points_reduced[(df_points_reduced['X'] > df_camera_centerpoints['X'].min())]
df_points_reduced = df_points_reduced[(df_points_reduced['Y'] < df_camera_centerpoints['Y'].max())]
df_points_reduced = df_points_reduced[(df_points_reduced['Y'] > df_camera_centerpoints['Y'].min())]
return df_points_reduced
def detect_and_describe(self, image, alg = "SIFT"):
"""
Modified from https://www.pyimagesearch.com/2016/01/11/opencv-panorama-stitching/.
SIFT algorithm requires opencv-python 3.4.2.17 and opencv-contrib-python 3.4.2.17
because later versions make the SIFT method proprietary.
returns keypoints and feature vectors from one of several keypoint detection algorithms.
inputs:
image: cv2.image for which keypoints will be detected
alg: string ["SIFT", "ORB", or "BRISK"] determines which algorithm will be used
outputs:
kps: locations of keypoints
features: feature vectors of keypoints
"""
# convert the image to grayscale
#gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
if alg == "SIFT":
descriptor = cv2.xfeatures2d.SIFT_create()
elif alg == "ORB":
descriptor = cv2.ORB_create()
elif alg == "BRISK":
descriptor = cv2.BRISK_create()
else:
raise ValueError("Unknown keypoint algorithm: {}".format(alg))
(kps, features) = descriptor.detectAndCompute(image, None)
kps = np.float32([kp.pt for kp in kps])
true_kps = []
true_features = []
count = 0
for kpsx, kpsy in kps:
#keep the keypoint only if at least one pixel in its 3x3 neighbourhood differs from the background colour [84,1,68]
if (image[int(round(kpsy,0)),int(round(kpsx,0))] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0))+1,int(round(kpsx,0))] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0)),int(round(kpsx,0))+1] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0))-1,int(round(kpsx,0))] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0)),int(round(kpsx,0))-1] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0))-1,int(round(kpsx,0))-1] != np.array([84,1,68])).all() or \
(image[int(round(kpsy,0))+1,int(round(kpsx,0))+1] != np.array([84,1,68])).all():
true_kps.append([kpsx, kpsy])
true_features.append(features[count])
count+=1
true_kps = np.float32(true_kps)
true_features = np.array(true_features)
return (true_kps, true_features)
def match_keypoints(self, kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh):
"""
Modified from https://www.pyimagesearch.com/2016/01/11/opencv-panorama-stitching/.
returns the matching keypoints and homography matrix based on feature vectors returned from detect_and_describe.
inputs:
kpsA: list keypoints from image A
kpsB: list keypoints from image B
featuresA: features from image A
featuresB: features from image B
ratio: lowe's ratio test ratio
reprojThresh: threshold number of keypoint matches for reprojection to occur
outputs:
matches: list of matching keypoints
H: homography matrix
status: status of homography matrix search
"""
# compute the raw matches and initialize the list of actual matches
matcher = cv2.DescriptorMatcher_create("BruteForce")
rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
matches = []
# loop over the raw matches
for m in rawMatches:
# ensure the distance is within a certain ratio of each other (i.e. Lowe's ratio test)
if len(m) == 2 and m[0].distance < m[1].distance * ratio:
matches.append((m[0].trainIdx, m[0].queryIdx))
# computing a homography requires at least 4 matches
if len(matches) >= 4:
# construct the two sets of points
ptsA = np.float32([kpsA[i] for (_, i) in matches])
ptsB = np.float32([kpsB[i] for (i, _) in matches])
# compute the homography between the two sets of points
(H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, reprojThresh)
# return the matches along with the homography matrix and status of each matched point
return (matches, H, status)
# otherwise, no homography could be computed
return None
def procrustes(self, X, Y, scaling=True, reflection=False):
"""
Modified from https://stackoverflow.com/questions/18925181/procrustes-analysis-with-numpy
A port of MATLAB's `procrustes` function to Numpy.
Procrustes analysis determines a linear transformation (translation,
reflection, orthogonal rotation and scaling) of the points in Y to best
conform them to the points in matrix X, using the sum of squared errors
as the goodness of fit criterion.
inputs:
X: matrix of input coordinates
Y: matrix of target coordinates. Must have equal number of points (rows) to X, but may have
fewer dimensions than X
scaling: if False, the scaling component of the transformation is forced to 1.
reflection: if 'best', the transformation solution may or may not include a reflection
component, depending on which fits the data best; setting reflection to True or False forces a
solution with or without reflection respectively (this port defaults to False).
outputs
tform: a dict specifying the rotation, translation and scaling that maps X --> Y
"""
n,m = X.shape
ny,my = Y.shape
muX = X.mean(0)
muY = Y.mean(0)
X0 = X - muX
Y0 = Y - muY
ssX = (X0**2.).sum()
ssY = (Y0**2.).sum()
# centred Frobenius norm
normX = np.sqrt(ssX)
normY = np.sqrt(ssY)
# scale to equal (unit) norm
X0 /= normX
Y0 /= normY
if my < m:
Y0 = np.concatenate((Y0, np.zeros((n, m-my))),0)
# optimum rotation matrix of Y
A = np.dot(X0.T, Y0)
U,s,Vt = np.linalg.svd(A,full_matrices=False)
V = Vt.T
T = np.dot(V, U.T)
if reflection != 'best':
# does the current solution use a reflection?
have_reflection = np.linalg.det(T) < 0
# if that's not what was specified, force another reflection
if reflection != have_reflection:
V[:,-1] *= -1
s[-1] *= -1
T = np.dot(V, U.T)
traceTA = s.sum()
if scaling:
# optimum scaling of Y
b = traceTA * normX / normY
# standardised distance between X and b*Y*T + c
d = 1 - traceTA**2
# transformed coords
Z = normX*traceTA*np.dot(Y0, T) + muX
else:
b = 1
d = 1 + ssY/ssX - 2 * traceTA * normY / normX
Z = normY*np.dot(Y0, T) + muX
# transformation matrix
if my < m:
T = T[:my,:]
c = muX - b*np.dot(muY, T)
#transformation values
tform = {'rotation':T, 'scale':b, 'translation':c}
return tform
def match_2_images(self, kpsA, kpsB, featuresA, featuresB, num_matches, ratio, reprojThresh):
"""
Returns the results for procrustes from the keypoints and features of two images.
inputs:
kpsA: list keypoint locations of image A
kpsB: list keypoint locations of image B
featuresA: list feature vectors describing kpsA from imageA
featuresB: list feature vectors describing kpsB from imageB
num_matches: int number of matches to be considered for tform calculation
ratio: ratio for lowe's ratio test
reprojThresh: threshold number of keypoint matches for reprojection to occur
outputs:
tform: a dict specifying the rotation, translation and scaling that maps X --> Y
"""
kp_match = self.match_keypoints(kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh)
if kp_match is None:
return None
matches, H, status = kp_match
if len(matches) >= num_matches:
matchesA = []
matchesB = []
for match in matches:
matchesA.append(list(kpsA[match[1]]))
matchesB.append(list(kpsB[match[0]]))
matchesA = np.array(matchesA)
matchesB = np.array(matchesB)
tform = self.procrustes(matchesA, matchesB, True, False)
return tform
else:
return None
def map_photo_centerpoints(self, df_camera_centerpoints, num_x, num_y):
"""
returns a pd.DataFrame with the relative positions of the images given in df_camera_centerpoints
inputs:
df_camera_centerpoints: pd.DataFrame, df_camera_centerpoints output from find_camera_centerpoints
num_x: number of photos in the x direction.
num_y: number of photos in the y direction.
outputs:
df_photo_array: pd.DataFrame with the relative positions of the images given in df_camera_centerpoints
"""
df_photo_array = pd.DataFrame(np.array(df_camera_centerpoints.sort_values(["X", "Y"]).NAME).reshape(num_x,num_y)).T
return df_photo_array
def measure_scale(self, df_images, df_points_reduced, df_photo_array, scale, height_in, width_in):
"""
Measures a lower bound for the scale in the images used for reconstruction. Returns a value in terms
of meters per unit x and meters per unit y
inputs:
df_images: data frame produced by correctly specified call of ingest_images function
df_points_reduced: data frame produced by correctly specified call of reduce_points function
df_photo_array: dataframe produced by correctly specified call of map_photo_centerpoints function
scale: scale of scanned image e.g. (1:20000 scale image would have scale = 20000)
height_in: height of scanned image in inches.
width_in: width of scanned image in inches.
Outputs:
x_scale_m: lower limit of meters per unit in direction x
y_scale_m: lower limit of meters per unit in direction y
"""
df_photo_array_x = pd.DataFrame(df_photo_array.iloc[:,1:df_photo_array.shape[1]-1])
df_photo_array_y = df_photo_array.iloc[1:df_photo_array.shape[0]-1,:]
image_id_dict_x = {}
for row in df_photo_array_x.values:
for NAME in row:
image_id_dict_x[NAME] = (df_images[df_images['NAME']==NAME]['IMAGE_ID'].values[0])
image_id_dict_y = {}
for row in df_photo_array_y.values:
for NAME in row:
image_id_dict_y[NAME] = (df_images[df_images['NAME']==NAME]['IMAGE_ID'].values[0])
x_diff = []
for IMAGE_ID in image_id_dict_x.values():
x_diff.append(df_points_reduced[df_points_reduced['IMAGE_ID']==IMAGE_ID].X.max()-df_points_reduced[df_points_reduced['IMAGE_ID']==IMAGE_ID].X.min())
y_diff = []
for IMAGE_ID in image_id_dict_y.values():
y_diff.append(df_points_reduced[df_points_reduced['IMAGE_ID']==IMAGE_ID].Y.max()-df_points_reduced[df_points_reduced['IMAGE_ID']==IMAGE_ID].Y.min())
#rescale to meters.
photo_dims_y_mi = height_in * scale/(12*5280)
photo_dims_x_mi = width_in * scale/(12*5280)
photo_dims_y_km = photo_dims_y_mi*1.60934
photo_dims_x_km = photo_dims_x_mi*1.60934
x_scale_m = (photo_dims_x_km * 1000)/max(x_diff) #meters per x
y_scale_m = (photo_dims_y_km * 1000)/max(y_diff) #meters per y
return x_scale_m, y_scale_m
def get_height_model(self, df_points_reduced, scale_m, x_scale_m, y_scale_m):
"""
"Smears" out the point cloud that was constructed by COLMAP to fill an array that has a scale of scale_m
on a side. The larger the scale_m value the more points get averaged and the lower the resolution of the resulting
array, but the array is also less sparse since fewer boxes have null values.
inputs:
df_points_reduced: pd.DataFrame the result of self.reduce_points
scale_m: the size of the "boxes" in the resulting topographic model, in meters.
x_scale_m: the current scale of x in meters (the result of self.measure_scale)
y_scale_m: the current scale of y in meters (the result of self.measure_scale)
output:
heights: np.array of the topography at a granularity as specified by scale_m
"""
x_range = int((df_points_reduced.X.max() - df_points_reduced.X.min())/(scale_m/x_scale_m))
y_range = int((df_points_reduced.Y.max() - df_points_reduced.Y.min())/(scale_m/y_scale_m))
#progress bar
bar = progressbar.ProgressBar(maxval=y_range, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
count = 0
df_heights = pd.DataFrame([])
for y in range(y_range):
x_row = []
for x in range(x_range):
x_row.append(df_points_reduced[(df_points_reduced["X"] < df_points_reduced.X.min()+(x+1)*(scale_m/x_scale_m)) &
(df_points_reduced["X"] > df_points_reduced.X.min()+(x)*(scale_m/x_scale_m)) &
(df_points_reduced["Y"] < df_points_reduced.Y.min()+(y+1)*(scale_m/y_scale_m)) &
(df_points_reduced["Y"] > df_points_reduced.Y.min()+(y)*(scale_m/y_scale_m))].Z.mean())
df_heights = pd.concat([df_heights, pd.DataFrame(x_row).T], axis = 0)
bar.update(y+1)
heights = np.array(df_heights)
return heights
def rescale_usgs_topo(self, filepath, scale_m, scale_usgs):
"""
Rescales a .img file of scale scale_usgs (in m) to a scale of scale_m. scale_m in this function should be the
same as scale_m in self.get_height_model for the direct comparison method to work correctly.
inputs:
filepath: str filepath of .img topography data file supplied by usgs.
scale_m: int scale of resulting array in m
scale_usgs: int current scale of usgs data (in m, usually either 10 or 30)
outputs:
usgs_topo: np.array with rescaled usgs data.
"""
geo = gdal.Open(filepath)
drv = geo.GetDriver()
north = geo.ReadAsArray()
scale = int(scale_m/scale_usgs)
scale_half = int(scale/2)
df_usgs_topo = pd.DataFrame([])
num_y = north.shape[0]
bar = progressbar.ProgressBar(maxval=num_y, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
count = 0
for y in range(scale_half,north.shape[1]-scale_half,scale):
x_row = []
for x in range(scale_half,north.shape[0]-scale_half,scale):
x_row.append(pd.DataFrame(north).iloc[x-scale_half:x+scale_half,y-scale_half:y+scale_half].mean().mean())
df_usgs_topo = pd.concat([df_usgs_topo, pd.DataFrame(x_row)], axis =1)
bar.update(y+1)
usgs_topo = np.array(df_usgs_topo)
return usgs_topo
def compare_as_images(self, mhnc, topo, mask, num_matches, step_divisor, ratio, reprojThresh, resize_multiplier = 2):
"""
Performs a windowed search where topographies are treated as images. If the number of matches is greater than
or equal to num_matches then the x and y values are added to a list of possible matches to be searched pixel-wise.
inputs:
mhnc: np.ma.masked_array masked heights normed and centered
topo: np.array usgs topographic data to be searched
mask: np.ma.mask mask of mhnc
num_matches: int number of matches to add image search to pixelwise search
step_divisor: int number of steps to take per mhnc window size in x and y directions
ratio: float (0.0-1.0) ratio for match_keypoints function
reprojThresh: int reprojThresh for match_keypoints function
resize_multiplier: int how much to increase the size of the image to match keypoints
outputs:
possible_matches: list of lists with y, x coordinates of match points with corresponding tform outputs.
"""
mhnc_height, mhnc_width = mhnc.shape
topo_height, topo_width = topo.shape
possible_matches = []
#Progress bar
bar = progressbar.ProgressBar(maxval=int((topo_height-mhnc_height)/(mhnc_height//step_divisor)+1), widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
mhnc_filled = np.ma.filled(mhnc, 0)
matplotlib.image.imsave('tmp_mhnc.png', mhnc_filled)
imageA = cv2.imread('tmp_mhnc.png')
imageA = cv2.resize(imageA, (imageA.shape[1]*resize_multiplier,imageA.shape[0]*resize_multiplier), interpolation = cv2.INTER_AREA)
kpsA, featuresA = self.detect_and_describe(imageA, 'SIFT')
counter = 0
for y in range(0,topo_height-mhnc_height,mhnc_height//step_divisor):
for x in range(0,topo_width-mhnc_width,mhnc_width//step_divisor):
topo_tmp = topo[y:y+mhnc_height,x:x+mhnc_width]
topo_tmp = np.ma.masked_array(topo_tmp, mask)
topo_tmp = np.ma.filled(topo_tmp, 0)
if (topo_tmp == np.zeros((mhnc_height,mhnc_width))).all():
pass
else:
matplotlib.image.imsave('tmp_topo.png', topo_tmp)
imageB = cv2.imread('tmp_topo.png')
imageB = cv2.resize(imageB, (imageB.shape[1]*resize_multiplier,imageB.shape[0]*resize_multiplier), interpolation = cv2.INTER_AREA)
kpsB, featuresB = self.detect_and_describe(imageB, 'SIFT')
if len(kpsA) > 0 and len(kpsB) > 1:
kp_match = self.match_keypoints(kpsA, kpsB, featuresA, featuresB, ratio=ratio, reprojThresh=reprojThresh)
if kp_match is not None:
matches, H, status = kp_match
tform = self.match_2_images(kpsA, kpsB, featuresA, featuresB, num_matches, ratio, reprojThresh)
if len(matches) >= num_matches:
possible_matches.append([y,x,tform])
bar.update(counter)
counter += 1
os.remove("tmp_topo.png")
os.remove("tmp_mhnc.png")
return possible_matches
def compare_pixelwise(self, mhnc, topo, mask, start_x, start_y, match_x, match_y, step_divisor):
"""
Performs a pixelwise search in the area around a "possible match". Sums the absolute differences in Z value
for all pixels in the search window, adds the result to df_diff and then moves one pixel to the right. At the
right edge of the window it moves down one pixel.
inputs:
mhnc: np.ma.masked_array masked heights normed and centered
topo: np.array usgs topographic data to be searched
mask: np.ma.mask mask of mhnc
start_x: int x coordinate of starting location in original topo reference frame
start_y: int y coordinate of starting location in original topo reference frame
match_x: int x coordinate of starting location in rotated topo reference frame
match_y: int y coordinate of starting location in rotated topo reference frame
step_divisor: int number of steps to take per mhnc window size in x and y directions
outputs:
df_diff: pd.DataFrame of differences with corresponding x and y locations in original topo reference frame
"""
mhnc_height, mhnc_width = mhnc.shape
topo_height, topo_width = topo.shape
diff = []
for y_min in range(max(start_y-mhnc_height//step_divisor,0),min(start_y+mhnc_height//step_divisor, topo_height-mhnc_height//step_divisor)):
y_max = y_min + mhnc_height
for x_min in range(max(start_x-mhnc_width//step_divisor,0),min(start_x+mhnc_width//step_divisor, topo_width-mhnc_width//step_divisor)):
x_max = x_min + mhnc_width
new_topo = topo[y_min:y_max, x_min:x_max]
new_topo_normed = (new_topo - np.min(new_topo))/np.ptp(new_topo)
if mask.shape == new_topo_normed.shape:
mntn = np.ma.masked_array(new_topo_normed, mask)
difference = abs(mntn - mhnc).sum().sum()
diff.append([x_min, y_min, difference])
else:
diff.append([x_min, y_min, 9999])
df_diff = pd.DataFrame.from_records(diff)
return df_diff
def rotate(self, origin, point, angle):
"""
This function is modified from:
https://stackoverflow.com/questions/34372480/rotate-point-about-another-point-in-degrees-python
Rotate a point counterclockwise by a given angle around a given origin.
origin: 2-tuple of ints (x,y) the point around which the figure is to be rotated.
point: 2-tuple of ints (x,y) the point to rotate.
angle: float or int the angle (in degrees) to rotate the point.
"""
angle = math.radians(angle)
ox, oy = origin
px, py = point
qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)
qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)
ans = [qx, qy]
return ans
def topo_compare(self, images_loc, points_loc, state, num_pic_x, num_pic_y, scale, height_in, width_in, meters_per_pixel=180, center_pixels = 15, step_divisor=4, num_matches=15, ratio=0.8, reprojThresh=4, demo=False):
"""
This is the main function of the "Geolocating Aerial Photography" project. It identifies
the "best fit" location by matching the topography reconstructed with COLMAP against USGS topographic data.
inputs:
images_loc: str path to images.txt file generated by COLMAP
points_loc: str path to points3D.txt file generated by COLMAP
state: str only accepted string is "California" at the moment
num_pic_x: int number of images in the x direction
num_pic_y: int number of images in the y direction
scale: int scale of image (1:20000 scale would be 20000)
height_in: int or float height of input images in inches
width_in: int or float width of input images in inches
meters_per_pixel: int (default 180) meters per pixel in reconstructed topography and in usgs topography
center_pixels: int (default 15) number of pixels to crop from the edges of the reconstruction to reduce np.nans.
step_divisor: int (default 4) number of steps to take per mhnc window size in x and y directions
num_matches: int (default 15) number of matches to add image search to pixelwise search
ratio: (default 0.8) lowe's ratio test ratio
reprojThresh: (default 4) threshold number of keypoint matches for reprojection to occur
demo: bool (default False) if True only runs the search over topo file imgn35w120_1.npy; if False runs the search over the whole of the specified state.
outputs:
mhnc: np.ma.masked_array the normalized topography of the photographed area fed into COLMAP
df_diff: pd.DataFrame dataframe containing all the pixelwise differences and their x and y coordinates
df_min: pd.DataFrame dataframe containing the x and y coordinates of the location with the lowest pixelwise difference
"""
#Load state topography from USGS
if demo == True:
topo = np.load(os.path.join('..','geo_data','california','imgn35w120_1.npy'))
elif demo == False and state.lower() == "california":
topo = np.zeros(((42-32)*601,(125-113)*601))
for npy in glob.glob('../geo_data/california/*.npy'):
tmp = np.load(npy)
n = int(npy.split('imgn')[-1][0:2])
w = int(npy.split('w')[-1][0:3])
topo[601*-(n-42):601*-(n-43),601*(-(w-125)):601*(-(w-126))] = tmp
topo[topo < -86] = 0
#Ingest data from COLMAP
print('Constructing Heights Model')
df_images = self.ingest_images(images_loc)
df_points = self.ingest_points(points_loc)
df_camera_centerpoints = self.find_camera_centerpoints(df_images)
df_points_reduced = self.reduce_points(df_points, df_camera_centerpoints)
df_photo_array = self.map_photo_centerpoints(df_camera_centerpoints, num_pic_x, num_pic_y)
#Measure scale and multiply by correction factor
x_scale_m, y_scale_m = self.measure_scale(df_images, df_points_reduced, df_photo_array, scale, height_in, width_in)
x_scale_m = 1.40*x_scale_m #rescale factor determined empirically
y_scale_m = 1.40*y_scale_m #rescale factor determined empirically
heights = self.get_height_model(df_points_reduced, meters_per_pixel, x_scale_m, y_scale_m)
heights = -heights #invert z axis per COLMAP output
#Remove high and low topography as they tend to be anomalies from COLMAP
heights[heights >= np.nanpercentile(heights, 99)] = np.nan
heights[heights <= np.nanpercentile(heights, 1)] = np.nan
#Mask array where COLMAP did not make a reconstruction or at 1st, 99th percentile
mask = np.where(np.isnan(heights), 1, 0)
masked_heights = np.ma.masked_array(heights, mask)
masked_heights_norm = (masked_heights - np.min(masked_heights))/np.ptp(masked_heights) #norm 0-1
mhnc = masked_heights_norm[center_pixels:-center_pixels,center_pixels:-center_pixels]
mask_centered = mhnc.mask
mhnc_filled = np.ma.filled(mhnc, 0.5)
#Image Search
print('Image Search')
possible_matches = self.compare_as_images(mhnc, topo, mask_centered, num_matches, step_divisor, ratio, reprojThresh)
print(len(possible_matches))
#Progress bar
print('Pixelwise Search')
count = 0
bar = progressbar.ProgressBar(maxval=len(possible_matches), widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
df_diff = pd.DataFrame([])
#loop through possible matches; skip those whose translation is too large or whose scale is implausible
for match in possible_matches:
count+=1
bar.update(count)
if (match[2]['translation'][0] > 100) or (match[2]['translation'][1] > 100) or (match[2]['scale'] >= 2) or (match[2]['scale'] <= 0.5):
pass
else:
#assign rotation angle from tform rotation matrix (default to nan when no valid rotation can be extracted)
angle = float('nan')
if match[2] and match[2]['rotation'][0,1] <= 1 and match[2]['rotation'][0,1] >= -1 and match[2]['rotation'][0,0] <= 1 and match[2]['rotation'][0,0] >= -1:
angle1 = -math.degrees(math.asin(match[2]['rotation'][0,1]))
angle2 = math.degrees(math.acos(match[2]['rotation'][0,0]))
if round(angle1,2) == round(angle2,2):
angle = angle1
else:
angle = angle1 + 180
#Perform pixelwise search of rotated area
if not math.isnan(angle):
topo_rotated = imutils.rotate_bound(topo, angle)
new_coords = self.rotate((topo.shape[0]//2,topo.shape[1]//2), (match[0],match[1]), angle)
new_coords[0] = int(max(round(new_coords[0] + (topo_rotated.shape[0] - topo.shape[0])//2,0),0))
new_coords[1] = int(max(round(new_coords[1] + (topo_rotated.shape[1] - topo.shape[1])//2,0),0))
diff = self.compare_pixelwise(mhnc, topo_rotated, mask_centered, new_coords[1], new_coords[0], match[1], match[0], step_divisor)
diff['3'] = angle
df_diff = pd.concat([df_diff, diff])
else:
pass
#reindex df_diff and find minimum difference
df_diff.index = range(df_diff.shape[0])
df_min = df_diff[df_diff[2] == df_diff[2].min()]
#Find lat and long of minimum coordinates
x_min = df_min.iloc[0][0]
y_min = df_min.iloc[0][1]
x_min_local = x_min-(601*(x_min//601))
y_min_local = y_min-(601*(y_min//601))
bearing = 180 - math.degrees(math.atan((meters_per_pixel*x_min_local)/(meters_per_pixel*y_min_local)))
if demo == True:
start = geopy.Point(35,-120)
elif demo == False and state.lower() == "california":
start = geopy.Point(42-(y_min//601),-125+(x_min//601))
d = geopy.distance.distance(meters = math.sqrt(((meters_per_pixel*x_min_local)**2 + (meters_per_pixel*y_min_local)**2)))
destination = d.destination(point=start, bearing=bearing)
df_min['best_fit_lat'] = destination.latitude
df_min['best_fit_long'] = destination.longitude
df_min.columns = ['x_pixels', 'y_pixels', 'min_value', 'rotation', 'best_fit_lat', 'best_fit_long']
print("Complete")
return mhnc, df_diff, df_min
```
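As a quick sanity check of the `rotate` helper defined above, the same point-rotation formula can be exercised standalone (the function body is duplicated here so the snippet runs without the class):

```
import math

def rotate(origin, point, angle):
    # rotate `point` counterclockwise by `angle` degrees around `origin`
    angle = math.radians(angle)
    ox, oy = origin
    px, py = point
    qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)
    qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)
    return [qx, qy]

# rotating (1, 0) by 90 degrees about the origin lands on (0, 1)
qx, qy = rotate((0, 0), (1, 0), 90)
assert abs(qx) < 1e-9 and abs(qy - 1) < 1e-9
```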
## Prepare Photos
```
TC = TopoCompare()
TC.prep_photos(images_path = '../images/c-300/tif', \
dest_path = '../images/c-300/jpg/', \
crop_left = 0, \
crop_top = 0, \
crop_right = 0, \
crop_bottom = 0)
```
## Demo
```
TC = TopoCompare()
mhnc, df_diff, df_min = TC.topo_compare(images_loc = '../colmap_reconstructions/btm-1954/images.txt', \
points_loc = '../colmap_reconstructions/btm-1954/points3D.txt', \
state = 'California', \
num_pic_x = 6, \
num_pic_y = 12, \
scale = 20000, \
height_in = 9, \
width_in = 9, \
num_matches = 20, \
demo=True)
```
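For orientation, the `scale`, `height_in` and `width_in` arguments determine the ground coverage via the same inch-to-kilometre conversion used inside `measure_scale`; for example, a 9 in × 9 in scan at 1:20000 covers roughly 4.57 km on a side:

```
# ground coverage of a 9 in scan at 1:20000, following measure_scale's conversion
height_in, scale = 9, 20000
mi = height_in * scale / (12 * 5280)   # inches on the ground -> miles
km = mi * 1.60934                      # miles -> kilometres
assert abs(mi - 2.8409) < 1e-3
assert abs(km - 4.5720) < 1e-3
```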
## Results
```
df_min
```
## Plotting
```
# > load the demo topography tile (matching demo=True above)
topo = np.load(os.path.join('..','geo_data','california','imgn35w120_1.npy'))
topo_rotated = imutils.rotate_bound(topo, df_min.iloc[0]['rotation'])
width, height = mhnc.shape
x_min = int(df_min.iloc[0][0])
y_min = int(df_min.iloc[0][1])
plt.figure(1, figsize=(20,10))
plt.subplot(221)
plt.title('Input Data')
plt.imshow(mhnc)
plt.subplot(222)
plt.imshow(topo_rotated[y_min:y_min+width,x_min:x_min+height])
plt.title("USGS Topography")
plt.subplot(223)
plt.plot(df_diff[2])
plt.ylim(0,2400)
plt.xlabel('DataFrame index value')
plt.ylabel('Sum of pixel-wise differences')
plt.title("Pixelwise Difference")
plt.subplot(224)
plt.plot(df_diff[2][159000:162000])
plt.ylim(0,2400)
plt.xlabel('DataFrame index value')
plt.ylabel('Sum of pixel-wise differences')
plt.title("Pixelwise Difference Zoomed")
plt.show()
```
## Search All of California
### !!! WARNING this code takes approx 6 hours to run !!!
```
TC = TopoCompare()
mhnc, df_diff, df_min = TC.topo_compare(images_loc = '../colmap_reconstruction/btm-1954_7/images.txt', \
points_loc = '../colmap_reconstruction/btm-1954_7/points3D.txt', \
state = 'California', \
num_pic_x = 6, \
num_pic_y = 12, \
scale = 20000,\
height_in = 9,\
width_in = 9,\
num_matches = 20, \
demo=False)
```
## Results
```
df_min
```
## Plotting
```
topo = np.load(os.path.join('..','..','geo_data','states','california.npy'))
topo_rotated = imutils.rotate_bound(topo, df_min.iloc[0]['rotation'])
width, height = mhnc.shape
x_min = int(df_min.iloc[0][0])
y_min = int(df_min.iloc[0][1])
plt.figure(1, figsize=(20,15))
plt.subplot(221)
plt.title('Input Data')
plt.imshow(mhnc)
plt.subplot(222)
plt.imshow(topo_rotated[y_min:y_min+width,x_min:x_min+height])
plt.title("USGS Topography")
plt.subplot(223)
plt.plot(df_diff[2])
plt.ylim(0,2400)
plt.xlabel('DataFrame index value')
plt.ylabel('Sum of pixel-wise differences')
plt.title("Pixelwise Difference")
plt.subplot(224)
plt.plot(df_diff[2][23345000:23347000])
plt.ylim(0,2400)
plt.xlabel('DataFrame index value')
plt.ylabel('Sum of pixel-wise differences')
plt.title("Pixelwise Difference Zoomed")
plt.show()
```
# Advanced analysis
The advanced analysis is performed after having reconstructed the PET image with the correct and complete $\mu$-map. The affine transformations (i.e., representing the rigid body alignment) of the two-stage registration are used again for aligning the concentric ring volumes of interest (VOI).
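Conceptually, reusing a stored rigid-body affine to bring a template into the reconstructed PET space amounts to resampling the template volume under that transform. A minimal sketch with `scipy.ndimage.affine_transform` (the matrix and volume below are made-up illustrations; the actual alignment in this workflow is performed by the `nimpa`/`acr_ioaux` utilities):

```
import numpy as np
from scipy import ndimage

# toy 3D "template" volume: a small cube of ones
tmpl = np.zeros((8, 8, 8))
tmpl[3:5, 3:5, 3:5] = 1.0

# a rigid-body transform: identity rotation plus a one-voxel shift along z
A = np.eye(3)
offset = np.array([0.0, 0.0, 1.0])

# resample the template into the target grid:
# output[x, y, z] = input[A @ (x, y, z) + offset]
aligned = ndimage.affine_transform(tmpl, A, offset=offset, order=1)
assert abs(aligned[3, 3, 2] - 1.0) < 1e-6   # the cube moved one voxel along z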
## Imports
```
# > get all the imports
import numpy as np
import os
import glob
from pathlib import Path
import matplotlib.pyplot as plt
%matplotlib inline
from niftypet import nipet
from niftypet import nimpa
# > import tools for ACR parameters,
# > templates generation and I/O
import acr_params as pars
import acr_tmplates as ast
import acr_ioaux as ioaux
```
## Initialisation
Initialise the scanner parameters and look-up tables, the path to the phantom data and design, and phantom constants, etc.
```
# > get all the constants and LUTs for the mMR scanner
mMRpars = nipet.get_mmrparams()
# > core path with ACR phantom inputs
cpth = Path('/sdata/ACR_data_design')
# > standard and exploratory output
outname = 'output_s'
# > get dictionary of constants, Cntd, for ACR phantom imaging
Cntd = pars.get_params(cpth)
# > update the dictionary of constants with I/O paths
Cntd = ioaux.get_paths(Cntd, mMRpars, cpth, outdir=outname)
```
## Quantitative PET reconstruction
Reconstruct the quantitative PET for iterations 4, 8 and 16 as given in `Cntd['itr_qnt2']`. Uses the complete $\mu$-map as saved at path `Cntd['out']['fmuf']`.
```
# > adjust the time frame as needed
time_frame = [0, 1800]
# > output file name
facr = 'ACR_QNT_t{}-{}'.format(time_frame[0], time_frame[1])
if not os.path.isfile(os.path.join(os.path.dirname(Cntd['fqnt']), facr+'.nii.gz')):
# > generate hardware mu-map for the phantom
muhdct = nipet.hdw_mumap(Cntd['datain'], [3,4], mMRpars, outpath=Cntd['opth'], use_stored=True)
# > run reconstruction
recon = nipet.mmrchain(
Cntd['datain'], # > all the input data in dictionary
mMRpars,
frames=['fluid', time_frame], # > use the adjustable time frame defined above
mu_h=muhdct,
mu_o=Cntd['out']['fmuf'],
itr=max(Cntd['itr_qnt2']),
store_itr=Cntd['itr_qnt2'], # > list of all iterations after which image is saved
recmod=3,
outpath = Cntd['opth'],
fout=facr,
store_img = True)
```
## Scaling up PET images to high resolution grid
The PET images are trimmed and upsampled to ~$0.5$ mm resolution grid to facilitate accurate and precise concentric ring VOI sampling.
```
# > TRIM/UPSCALE
# > get PET images for different iterations to be upscaled
fims = glob.glob(os.path.join(Cntd['opth'], 'PET', 'single-frame', facr+'*inrecon*'))
for fim in fims:
imu = nimpa.imtrimup(
fim,
refim=Cntd['fnacup'],
scale=Cntd['sclt'],
int_order=1,
fcomment_pfx=os.path.basename(fim).split('.')[0]+'_',
store_img=True)
```
### Plot upscaled PET images
Notice the increased noise with a greater number of iterations, but also the greater contrast achieved, especially visible in the summed images below.
```
#> get upscaled images for 4 and 16 OSEM iterations
fqnt4 = glob.glob(os.path.join(Cntd['opth'], 'PET', 'single-frame', 'trimmed', facr+'*itr4*inrecon*'))[0]
qntim4 = nimpa.getnii(fqnt4)
fqnt16 = glob.glob(os.path.join(Cntd['opth'], 'PET', 'single-frame', 'trimmed', facr+'*itr16*inrecon*'))[0]
qntim16 = nimpa.getnii(fqnt16)
#> RODS
# > plot the NAC PET reconstruction and template
fig, axs = plt.subplots(2,2, figsize=(12, 10))
axs[0,0].imshow(np.sum(qntim4[90:240],axis=0), cmap='magma')
axs[0,0].set_axis_off()
axs[0,0].set_title('Summed rod section, 4 iters')
axs[0,1].imshow(qntim4[160,...], cmap='magma')
axs[0,1].set_axis_off()
axs[0,1].set_title('Rod section, 4 iters')
axs[1,0].imshow(np.sum(qntim16[90:240],axis=0), cmap='magma')
axs[1,0].set_axis_off()
axs[1,0].set_title('Summed rod section, 16 iters')
axs[1,1].imshow(qntim16[160,...], cmap='magma')
axs[1,1].set_axis_off()
axs[1,1].set_title('Rod section, 16 iters')
# FACEPLATE
# > plot the NAC PET reconstruction and template
fig, axs = plt.subplots(2,2, figsize=(12, 10))
axs[0,0].imshow(np.sum(qntim4[370:440],axis=0), cmap='magma')
axs[0,0].set_axis_off()
axs[0,0].set_title('Summed faceplate section, 4 iters')
axs[0,1].imshow(qntim4[400,...], cmap='magma')
axs[0,1].set_axis_off()
axs[0,1].set_title('Faceplate section, 4 iters')
axs[1,0].imshow(np.sum(qntim16[370:440],axis=0), cmap='magma')
axs[1,0].set_axis_off()
axs[1,0].set_title('Summed faceplate section, 16 iters')
axs[1,1].imshow(qntim16[400,...], cmap='magma')
axs[1,1].set_axis_off()
axs[1,1].set_title('Faceplate section, 16 iters')
```
## Generate the sampling templates
The templates are generated from high-resolution PNG files; by repeating each 2D template in the axial direction, 3D NIfTI templates are produced.
```
# > refresh all the input data and intermediate output
Cntd = pars.get_params(cpth)
Cntd = ioaux.get_paths(Cntd, mMRpars, cpth, outdir=outname)
# SAMPLING TEMPLATES
#> get the templates
ast.create_sampl_res(Cntd)
ast.create_sampl(Cntd)
```
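The axial stacking step can be illustrated with plain NumPy: a 2D ring mask is repeated along a new first (axial) axis to form a 3D volume (a sketch only; `acr_tmplates` additionally handles resolution, labelling and the NIfTI affine):

```
import numpy as np

# toy 2D ring template: ones inside an annulus, zeros elsewhere
y, x = np.ogrid[-16:16, -16:16]
r = np.sqrt(x**2 + y**2)
ring2d = ((r >= 6) & (r < 10)).astype(np.float32)

# repeat the 2D template along a new axial (first) axis
nslices = 40
ring3d = np.repeat(ring2d[None, ...], nslices, axis=0)
assert ring3d.shape == (40, 32, 32)
assert np.array_equal(ring3d[0], ring3d[-1])   # every slice is identical
```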
## Generate sampling VOIs
The concentric sampling VOIs are generated by aligning all the templates to the upsampled PET.
```
#> create the VOIs by resampling the templates
vois = ioaux.sampling_masks(Cntd, use_stored=True)
```
## Visualisation of VOI sampling
The left column shows the summed resolution rods and insert parts of the ACR phantom. The right column shows the sampling concentric rings for the largest rods and the largest hot insert.
```
fig, axs = plt.subplots(2,2, figsize=(14, 12))
axs[0,0].imshow(np.sum(qntim16[90:240],axis=0), cmap='magma')
axs[0,0].set_axis_off()
axs[0,1].imshow(np.sum(qntim16[90:240],axis=0), cmap='magma')
axs[0,1].imshow(vois['fst_res'][150,...]*(vois['fst_res'][150,...]>=60), vmin=50, cmap='tab20', alpha=0.4)
axs[0,1].set_axis_off()
axs[1,0].imshow(np.sum(qntim16[370:440],axis=0), cmap='magma')
axs[1,0].set_axis_off()
axs[1,1].imshow(np.sum(qntim16[370:440],axis=0), cmap='magma')
axs[1,1].imshow(vois['fst_insrt'][400,...]*(vois['fst_insrt'][400,...]<20), vmin=0, cmap='tab20', alpha=0.4)
axs[1,1].set_axis_off()
```
```
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from resnet_model import *
batch_size = 64
data_train = dset.MNIST('./', train=True, download=True,  # train=True: otherwise train/val would come from the test split
transform=transforms.Compose([
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
transforms.Normalize(
(0.1307,), (0.3081,))
]))
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
test_loader = torch.utils.data.DataLoader(dset.MNIST('./', train=False, download=True,
transform=transforms.Compose([
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size, shuffle=True)
examples = enumerate(train_loader)
batch_idx, (xs, ys) = next(examples)
fig = plt.figure()
for i in range(4):
plt.subplot(2,2,i+1)
plt.tight_layout()
plt.imshow(xs[i][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(ys[i]))
plt.xticks([])
plt.yticks([])
fig
import tqdm
from tqdm import tqdm
def train_model(model, train_loader, val_loader, loss, optimizer, scheduler, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
with tqdm(total=len(train_loader)) as progress_bar:
for i_step, (x, y) in enumerate(train_loader):
prediction = model(x)
loss_value = loss(prediction, y)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples_batch = torch.sum(indices == y)
correct_samples += torch.sum(indices == y)
total_samples_batch = y.shape[0]
total_samples += y.shape[0]
loss_accum += loss_value
progress_bar.update()
progress_bar.set_description(f'Train Loss at Batch {i_step}: {loss_value} , \
Accuracy is {float(correct_samples_batch)/total_samples_batch}')
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
scheduler.step()
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
correct = 0
total = 0
for i_step, (x, y) in enumerate(loader):
pred = model(x)
_, indices = torch.max(pred, 1)
correct += torch.sum(indices == y)
total += y.shape[0]
val_accuracy = float(correct)/total
return val_accuracy
resnet18 = ResNet([2,2,2,2], 3, 10, False)
resnet18.type(torch.FloatTensor)
parameters = resnet18.parameters()
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(parameters, lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size = 1, gamma = 2)
loss_history, train_history, val_history = train_model(resnet18, train_loader, val_loader, loss, optimizer, scheduler, 5)
resnet50 = ResNet([3, 4, 6, 3], 3, 10, True)
print(list(resnet50.modules()))
```
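The training and evaluation loops above run everything on the CPU. Moving the model and each batch to a GPU when one is available is a short pattern worth noting; a minimal sketch (the helper name is ours, not the notebook's):

```python
import torch

# Pick a GPU when available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def forward_on_device(model, x, y):
    # Move the model and the batch to the chosen device before the forward pass.
    model = model.to(device)
    x, y = x.to(device), y.to(device)
    return model(x), y
```

Inside the loops, each `(x, y)` batch would be passed through this helper (or moved with `.to(device)` directly) before computing the loss.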
```
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Load the historic player data
```
data = json.load(open('real-player.json','rb'))
df = pd.DataFrame(data['ratings'])
df = df.drop(['fuzz', 'abbrev_if_new_row'], axis=1)
df = df.set_index(['slug','season']).reset_index()
cols = list(df.columns[2:])
ratings = {}
for row in df.itertuples():
ratings[(row[1],row[2])] = list(row[3:])
data['bios']['abdulka01']
ratings[('jordami01',1985)]
# only use recent-ish players
from collections import defaultdict
player_year_rate = defaultdict(dict)
for i,r in ratings.items():
if data['bios'][i[0]]['bornYear'] < 1956:
continue
if i[1] > 2019:
continue
age= i[1]-data['bios'][i[0]]['bornYear']
player_year_rate[i[0]][age] = np.array(r)
# smooth their ratings
import scipy.ndimage  # explicit submodule import; plain `import scipy` may not expose ndimage
SMOOTHING_STD = 1.5
play = player_year_rate['malonka01'] # greendr01 jamesle01 hardeja01 malonka01
minY = min(play.keys())
maxY = max(play.keys())
res = []
for i in range(minY,maxY+1):
#print(i)
#res.append(play.get(i,[np.nan for j in range(15)]))
res.append(play[i] if i in play else res[-1])
i = 8
plt.plot(range(minY,maxY+1),np.array(res)[:,i],label='orig')
plt.plot(range(minY,maxY+1),scipy.ndimage.gaussian_filter1d(np.array(res).astype(float),SMOOTHING_STD,mode='nearest',axis=0,truncate=10)[:,i],label='new')
plt.legend()
plt.title(cols[i])
play_year_rateSmooth = {}
for key,play in player_year_rate.items():
minY = min(play.keys())
maxY = max(play.keys())
res = []
for i in range(minY,maxY+1):
#res.append(play.get(i,[np.nan for j in range(15)]))
res.append(play[i] if i in play else res[-1])
res = np.array(res).astype(float)
reS = scipy.ndimage.gaussian_filter1d(res,SMOOTHING_STD,mode='nearest',axis=0,truncate=10)
p2 = {}
for idx,age in enumerate(range(minY,maxY+1)):
if age in play:
p2[age] = reS[idx]
play_year_rateSmooth[key] = p2
r1 = []
r2 = []
for play in play_year_rateSmooth.values():
for age,r in play.items():
if age-1 in play:
age2 = age-1
if age2 > 36:
continue
r1.append(np.log(play[age]) - np.log(play[age-1]))
r2.append(age2)
r1 = np.array(r1)
r2 = np.array(r2)
```
## Model development
```
age_res = []
for age in sorted(np.unique(r2)):
age_res.append(r1[r2==age].mean(0))
age_res = np.array(age_res)
for i in range(15):
plt.plot(sorted(np.unique(r2)),age_res[:,i],label=cols[i],c=plt.cm.tab20(i))
#plt.xlim(right=36)
plt.legend()
#plt.ylim(0.75,1.25)
import sklearn.linear_model as linear_model
TIMES_TO_FIT = 1
clf_models = []
for i in range(TIMES_TO_FIT):
clf = linear_model.RidgeCV(np.logspace(-5,5,11),cv=5)#SGDRegressor('epsilon_insensitive',alpha=1e-5,epsilon=0,max_iter=10000,tol=1e-9,eta0=1e-5)
clf.fit(np.repeat(r2,15)[:,None],r1.ravel())
score = clf.score(np.repeat(r2,15)[:,None],r1.ravel())
clf_models.append((score,i,clf))
best_model = sorted(clf_models)[-1]
clf = best_model[2]
print(best_model[0])
main_model = (clf.coef_[0] , clf.intercept_)
plt.plot(np.unique(r2),np.unique(r2)*main_model[0] +main_model[1])
plt.grid(True)
models = []
for i in range(r1.shape[1]):
clf_models = []
for j in range(TIMES_TO_FIT):
clf = linear_model.RidgeCV(np.logspace(-5,5,11),cv=5)#SGDRegressor('epsilon_insensitive',alpha=1e-5,epsilon=0,max_iter=10000,tol=1e-9,eta0=1e-5)
clf.fit(np.array(r2)[:,None],r1[:,i]-(main_model[0]*r2+main_model[1]))
score = clf.score(np.array(r2)[:,None],r1[:,i]-(main_model[0]*r2+main_model[1]))
clf_models.append((score,j,clf))
best_model = sorted(clf_models)[-1]
clf = best_model[2]
print(cols[i],best_model[0])
models.append((clf.coef_[0],clf.intercept_))
plt.style.use('seaborn-white')
for i in range(r1.shape[1]):
plt.plot(np.unique(r2),np.unique(r2)*models[i][0]+models[i][1],label=cols[i],c=plt.cm.tab20(i))
plt.legend()
#plt.xlim(19,34)
#plt.ylim(-4,4)
plt.grid(True)
means_expected = []
for i in range(r1.shape[1]):
means_expected.append((models[i][0]*r2 + models[i][1]) * (main_model[0]*r2+main_model[1]) )
# rank1 approximations of this would be really cool
# but sampling multivariate Gaussians seems... annoying?
removed_means = r1 - np.array(means_expected).T
plt.figure(figsize=(20,20))
i = 1
for age in sorted(np.unique(r2)):
if (r2 == age).sum() < 2:
continue
plt.subplot(4,6,i)
i += 1
covar = np.cov(removed_means[r2 == age],rowvar=False)
plt.imshow(covar)
plt.xticks(np.arange(15),cols,rotation=45)
plt.yticks(np.arange(15),cols)
plt.title('age={} max={:.1f}'.format(age,covar.max()))
plt.tight_layout(pad=0.1,h_pad=0)
plt.gcf().subplots_adjust(hspace=-0.6)
age_w = []
ages = sorted(np.unique(r2))
age_stds = []
for age in ages:
age_w.append((r2==age).sum())
age_stds.append(removed_means[r2==age].std(axis=0))
age_stds = np.array(age_stds)
age_w = np.array(age_w)
age_w = age_w/age_w.mean()
clf = linear_model.RidgeCV()#SGDRegressor(loss='epsilon_insensitive',alpha=0,epsilon=0)
clf.fit(np.repeat(ages,15)[:,None],age_stds.ravel(),sample_weight=np.repeat(age_w,15))
base_model = list(main_model) + [clf.coef_[0],clf.intercept_]
plt.plot(np.unique(r2),np.unique(r2)*clf.coef_[0] +clf.intercept_,lw=3)
std_models = []
for i in range(15):
clf = linear_model.RidgeCV()#SGDRegressor(loss='epsilon_insensitive',alpha=0,epsilon=0)
clf.fit(np.array(ages)[:,None],np.maximum(0,age_stds[:,i]-(np.array(ages)*base_model[2] + base_model[3])),sample_weight = age_w)
std_models.append((clf.coef_[0],clf.intercept_))
plt.style.use('seaborn-white')
for i in range(r1.shape[1]):
plt.plot(np.unique(r2),np.unique(r2)*std_models[i][0] + std_models[i][1],label=cols[i],c=plt.cm.tab20(i),lw=3)
plt.legend()
plt.xlim(19,34)
plt.grid(True)
models
clf.intercept_
dat_print = {cols[i]:tuple(np.round(row,4)) for i,row in enumerate(np.hstack([models,std_models]))}
print('{} {},'.format("base",list(np.round(base_model,4))))
for k,v in dat_print.items():
if k == 'hgt':continue
print('{}: {},'.format(k,list(v)))
np.quantile(means_expected,0.99,axis=0).mean(),np.quantile(means_expected,0.01,axis=0).mean()
np.quantile(r1,0.99,axis=0).mean(),np.quantile(r1,0.01,axis=0).mean()
```
## Model Rookies
```
youth = []
names = []
positions = []
for k,p in data['bios'].items():
if 'bornYear' not in p or p['bornYear'] is None:
continue
yr = p['draftYear']
age = yr-p['bornYear']
if yr<2020 and yr >= 2000 and (k,yr+1) in ratings and age < 23 and p['draftPick'] < 45:
youth.append([age] + ratings[(k,yr+1)])
names.append(k)
positions.append(p['pos'])
youth = np.array(youth)
_ = plt.hist((youth/youth.mean(0)).ravel(),50)
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
clf_pca = PCA(whiten =False)#TSNE(perplexity=55)
emb = clf_pca.fit_transform(youth[:,1:].astype(np.float32))
pos_set = ['PG','G','SG',"GF",'SF','F','PF','FC',"C"]
plt.scatter(emb[:,0],emb[:,1],c=[pos_set.index(_) for _ in positions],cmap='RdBu')
for c,v in zip(cols,np.round(clf_pca.mean_,1)):
print(c,':',v,',')
clf_pca.explained_variance_ratio_
COMP =3
hgt = youth[:,1+cols.index('hgt')]
X_hgt = hgt[:,None]# np.vstack([hgt,hgt**2]).T
pred_res = []
hgt_models = []
for i in range(COMP):
clf = linear_model.RidgeCV(cv=3,alphas=np.logspace(-5,3,9))
clf.fit(X_hgt,emb[:,i])
clf_s = clf.score( X_hgt,emb[:,i])
pred_res.append(clf.predict(X_hgt))
print(clf_s)
hgt_models.append(list(clf.coef_) + [clf.intercept_])
pred_res = np.array(pred_res).T
np.round(hgt_models,2)
ovr_weights = {
'hgt': 0.21 ,
'stre':0.0903,
'spd':0.143,
'jmp':0.0436,
'endu':0.0346,
'ins':0.01,
'dnk':0.0312,
'ft':0.0291,
'tp':0.107,
'oiq':0.0895 ,
'diq':0.0760 ,
'drb':0.105 ,
'pss':0.0785 ,
'fg':0.01,
'reb':0.0448,}
ovr_v = np.array([ovr_weights[cols[i]] for i in range(len(cols))])
clf_pca.components_[:COMP,:]
ADD_VAR = np.array([15.4,17.7,9.3])*np.random.randn(X_hgt.shape[0],COMP)
#ADD_VAR = np.array([12,13,7])*np.random.laplace(size=(X_hgt.shape[0],COMP))
MUL_VAR = np.random.uniform(1-0.23,1+0.23,size=(X_hgt.shape[0],15))
pred_vec = ((ADD_VAR+pred_res) @ clf_pca.components_[:COMP,:]) + clf_pca.mean_
pred_vec *= MUL_VAR
abs(pred_vec - youth[:,1:]).mean(0)
_ = plt.hist((youth[:,1:]*ovr_v).sum(1)-6,20,alpha=0.5,density=True)
_ = plt.hist((ovr_v*pred_vec).sum(1)-6,20,alpha=0.5,density=True)
print(youth[:,1:].mean(1).std(),pred_vec.mean(1).std())
plt.subplot(1,2,1)
plt.imshow(np.cov(youth[:,1:],rowvar=False),vmin=-130,vmax=130,cmap='RdBu')
plt.title('real players')
plt.xticks(np.arange(15),cols,rotation=45)
plt.yticks(np.arange(15),cols)
plt.subplot(1,2,2)
plt.imshow(np.cov(pred_vec,rowvar=False),vmin=-130,vmax=130,cmap='RdBu')
plt.title('generated')
plt.xticks(np.arange(15),cols,rotation=45)
_ = plt.yticks(np.arange(15),cols)
youth[:,1:].mean(0)-pred_vec.mean(0)
def eval_f(params):
#np.random.seed(43)
res = []
for i in range(35):
#np.random.seed(42+i)
#ADD_VAR = np.exp(params[:COMP])*np.random.laplace(size=(X_hgt.shape[0],COMP))
ADD_VAR = np.exp(params[:COMP])*np.random.randn(X_hgt.shape[0],COMP)
MUL_VAR = np.random.uniform(1-np.exp(params[COMP]),1+np.exp(params[COMP+1]),size=(X_hgt.shape[0],15))
pred_vec = ((ADD_VAR+pred_res) @ clf_pca.components_[:COMP,:]) + clf_pca.mean_
pred_vec *= MUL_VAR
N = youth.shape[0]
cov_err = ((np.cov(youth[:,1:],rowvar=False)-np.cov(pred_vec,rowvar=False))**2).mean()
mean_err = ((np.array(sorted((ovr_v*youth[:,1:]).sum(1)))[N//2:]-np.array(sorted((ovr_v*pred_vec).sum(1)))[N//2:])**2).mean()
mean_err2 = ((youth[:,1:].mean(0)-pred_vec.mean(0))**2).mean()
#print(np.sqrt(cov_err),50*mean_err , 10*mean_err2)
res.append( (cov_err)*mean_err + 5*mean_err2)
return np.mean(sorted(res)[-10:])
eval_f(np.log([15.61561797, 17.75709166, 9.43354365, 0.23711597, 0.23264592]))
import cma
es = cma.CMAEvolutionStrategy(np.log([15.61561797, 17.75709166, 9.43354365, 0.23711597, 0.23264592]),0.01,{'popsize':10.5})
es.optimize(eval_f)
np.exp(es.mean)
```
## Mounting and redirecting
```
#Drive mounting
from google.colab import drive
drive.mount('/content/drive/')
#redirecting to the desired path
import os
os.chdir("/content/drive/My Drive/Colab Notebooks")
%cd dataset/
!ls
#importing and concatenating all .csv files
import pandas as pd
import numpy as np
import glob
path = r'/content/drive/My Drive/Colab Notebooks/dataset'
all_files = glob.glob(path + "/*.csv")
df_files = (pd.read_csv(f) for f in all_files)
df = pd.concat(df_files, ignore_index=True)
# visualizing the data
df
# specifying the features
df.drop(columns=["COMMENT_ID","AUTHOR","DATE"],inplace=True)
```
### Data Preprocessing
```
#visualizing 4th row comment
df["CONTENT"][4]
import html
df["CONTENT"]=df["CONTENT"].apply(html.unescape)
df["CONTENT"]=df["CONTENT"].str.replace("\ufeff","")
df["CONTENT"][4]
#trying to resolve spam link issues
df["CONTENT"]=df["CONTENT"].str.replace("(<a.+>)","htmllink")
df[df["CONTENT"].str.contains("<.+>")]["CONTENT"]
df["CONTENT"]=df["CONTENT"].str.replace("<.+>","")
df["CONTENT"]=df["CONTENT"].str.replace("\'","")
df["CONTENT"]=df["CONTENT"].str.lower()
df[df["CONTENT"].str.contains("\.com|watch\?")]
df["CONTENT"][17]
#cleaning spam comments
df["CONTENT"]=df["CONTENT"].str.replace(r"\S*\.com\S*|\S*watch\?\S*","htmllink")
df["CONTENT"]=df["CONTENT"].str.replace("\W"," ")
#visualizing 14th row comment after data cleaning
df["CONTENT"][14]
#checking comment no. 17 to confirm the spam link was replaced
df["CONTENT"][17]
df
```
## Model Creation
```
#value_counts(normalize=True) gives the proportion of each class (spam = 1, ham = 0), not feature normalization
df["CLASS"].value_counts(normalize=True)
vocab=[]
for comment in df["CONTENT"]:
for word in comment.split():
vocab.append(word)
#no. of different words in the dataset
vocabulary=list(set(vocab))
len(vocabulary)
# Create a column for each unique word in our vocabulary in order to count word occurrences
for word in vocabulary:
df[word]=0
df.head()
# looping through data frame and counting words
for index,value in enumerate(df["CONTENT"]):
for l in value.split():
df[l][index]+=1
df.sample(10)
#Total number of words in each class
df.groupby("CLASS").sum().sum(axis=1)
# Assign variables to all values required in calculation
p_ham = 0.47604    # class proportions from value_counts above
p_spam = 0.52396
n_spam=df[df["CLASS"]==1].drop(columns=["CONTENT","CLASS"]).sum().sum()
n_ham=df[df["CLASS"]==0].drop(columns=["CONTENT","CLASS"]).sum().sum()
n_vocabulary=len(vocabulary)
# Slicing dataframe for each class
df_sspam=df[df["CLASS"]==1]
df_hham=df[df["CLASS"]==0]
parameters_spam = {unique_word:0 for unique_word in vocabulary}
parameters_ham = {unique_word:0 for unique_word in vocabulary}
for word in vocabulary:
    n_word_given_spam = df_sspam[word].sum()
    p_word_given_spam = (n_word_given_spam + 1) / (n_spam + 1*n_vocabulary)
    parameters_spam[word] = p_word_given_spam
    n_word_given_ham = df_hham[word].sum()
    p_word_given_ham = (n_word_given_ham + 1) / (n_ham + 1*n_vocabulary)
    parameters_ham[word] = p_word_given_ham
```
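The two loops above implement multinomial Naive Bayes with Laplace (add-one) smoothing; written out, the per-word estimates are:

```latex
P(w \mid \text{Spam}) = \frac{N_{w \mid \text{Spam}} + 1}{N_{\text{Spam}} + N_{\text{Vocabulary}}},
\qquad
P(w \mid \text{Ham}) = \frac{N_{w \mid \text{Ham}} + 1}{N_{\text{Ham}} + N_{\text{Vocabulary}}}
```

and a message $m$ is classified as spam when $P(\text{Spam}) \prod_{w \in m} P(w \mid \text{Spam}) > P(\text{Ham}) \prod_{w \in m} P(w \mid \text{Ham})$, which is exactly the comparison the classifier below performs.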
## Model Testing
```
# Creating the model classifier
import re

def classifier(string):
    # apply the same cleaning steps used on the training data above
    # (the original chained every step on `string`, discarding earlier cleaning)
    message = html.unescape(string)
    message = message.replace("\ufeff", "")
    message = re.sub(r"<a.+>", "htmllink", message)
    message = re.sub(r"<.+>", "", message)
    message = message.replace("'", "").lower()
    message = re.sub(r"\S*\.com\S*|\S*watch\?\S*", "htmllink", message)
    message = re.sub(r"\W", " ", message)
p_string_s=1
p_string_h=1
for word in message.split():
if word in parameters_spam:
p_string_s*=parameters_spam[word]
p_string_h*=parameters_ham[word]
if (p_string_s*p_spam)>(p_string_h*p_ham):
return(1)
elif (p_string_s*p_spam)<(p_string_h*p_ham):
return(0)
else:
return(-1)
# Reading the dataframe for testing model
df_artist=pd.read_csv("Youtube02-KatyPerry.csv")
df_artist.sample(4)
df_artist["Pred_Class"]=df_artist["CONTENT"].apply(classifier)
# Checking model accuracy
correct_predictions=0
total_rows=0
for row in df_artist.iterrows():
row=row[1]
total_rows+=1
if row["CLASS"]==row["Pred_Class"]:
correct_predictions+=1
accuracy=correct_predictions/total_rows
print("accuracy=",accuracy)
```
## Conclusion
0 = Not Harmful <br>
1 = Harmful
```
# Checking result1
classifier("This song gives me goosebumps!!")
# Checking result2
classifier("Please subscribe to my channel as I'm approaching 1M subscribers")
# Checking result3
classifier("If you want to be a mastercoder, consider buying my course for 50% off at www.buymycourse.com")
# Checking result4
classifier("she is sings so nice AF")
# Checking result5
classifier("click on this ID and set a chance to 1 lakh INR")
```
# Report - Part 1
This script trains eight machine learning models and selects the best hyperparameters for each via grid search
## Import Libraries
```
import os
import pickle
import warnings
import numpy as np
import pandas as pd
import xgboost as xgb
# data preprocessing
import spacy
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from spacy.lang.en.stop_words import STOP_WORDS
# feature extraction
from keras.utils import np_utils
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# machine learning
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, confusion_matrix, f1_score, log_loss
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
learning_curve, train_test_split)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
warnings.filterwarnings("ignore")
```
## Import Data
```
data = pd.read_csv("../data/dataset/training_variants.zip")
print("Number of data points : ", data.shape[0])
print("Number of features : ", data.shape[1])
print("Features : ", data.columns.values)
data_text = pd.read_csv(
"../data/dataset/training_text.zip",
sep="\|\|",
engine="python",
names=["ID", "TEXT"],
skiprows=1,
)
print("Number of data points : ", data_text.shape[0])
print("Number of features : ", data_text.shape[1])
print("Features : ", data_text.columns.values)
```
## Data Preprocessing
```
tokenizer = RegexpTokenizer("\w+'?\w+|\w+")
stop_words = stopwords.words("english")
stop_words = set(stop_words).union(STOP_WORDS)
nlp = spacy.load("en", disable=["parser", "tagger", "ner"])
def make_token(x):
""" Tokenize the text (remove punctuations and spaces)"""
return tokenizer.tokenize(str(x))
def remove_stopwords(x):
    return [token for token in x if token not in stop_words]
def lemmatization(x):
lemma_result = []
for words in x:
doc = nlp(words)
for token in doc:
lemma_result.append(token.lemma_)
return lemma_result
def pipeline(total_text, index, column):
""" A pipeline to process text data """
if type(total_text) is str:
total_text = total_text.lower()
total_text = make_token(total_text)
total_text = remove_stopwords(total_text)
total_text = lemmatization(total_text)
string = " ".join(total_text)
data_text[column][index] = string
for index, row in data_text.iterrows():
if type(row["TEXT"]) is str:
pipeline(row["TEXT"], index, "TEXT")
else:
print("there is no text description for id:", index)
### merge genes, variations and text data by ID
result = pd.merge(data, data_text, on="ID", how="left")
result.loc[result["TEXT"].isnull(), "TEXT"] = result["Gene"] + " " + result["Variation"]
result.Gene = result.Gene.str.replace("\s+", "_")
result.Variation = result.Variation.str.replace("\s+", "_")
## write to pickle
pd.to_pickle(result, "result_non_split.pkl")
```
## Feature Extraction
```
maxFeats = 10000
tfidf = TfidfVectorizer(
min_df=5,
max_features=maxFeats,
ngram_range=(1, 2),
analyzer="word",
stop_words="english",
token_pattern=r"\w+",
)
tfidf.fit(result["TEXT"])
cvec = CountVectorizer(
min_df=5,
ngram_range=(1, 2),
max_features=maxFeats,
analyzer="word",
stop_words="english",
token_pattern=r"\w+",
)
cvec.fit(result["TEXT"])
# try n_components between 360-390
svdT = TruncatedSVD(n_components=390, n_iter=5)
svdTFit = svdT.fit_transform(tfidf.transform(result["TEXT"]))
def buildFeatures(df):
"""This is a function to extract features, df argument should be
a pandas dataframe with only Gene, Variation, and TEXT columns"""
temp = df.copy()
print("Encoding...")
temp = pd.get_dummies(temp, columns=["Gene", "Variation"], drop_first=True)
print("TFIDF...")
temp_tfidf = tfidf.transform(temp["TEXT"])
print("Count Vecs...")
temp_cvec = cvec.transform(temp["TEXT"])
print("Latent Semantic Analysis Cols...")
del temp["TEXT"]
tempc = list(temp.columns)
temp_lsa_tfidf = svdT.transform(temp_tfidf)
temp_lsa_cvec = svdT.transform(temp_cvec)
for i in range(np.shape(temp_lsa_tfidf)[1]):
tempc.append("lsa_t" + str(i + 1))
for i in range(np.shape(temp_lsa_cvec)[1]):
tempc.append("lsa_c" + str(i + 1))
temp = pd.concat(
[
temp,
pd.DataFrame(temp_lsa_tfidf, index=temp.index),
pd.DataFrame(temp_lsa_cvec, index=temp.index),
],
axis=1,
)
return temp, tempc
trainDf, traincol = buildFeatures(result[["Gene", "Variation", "TEXT"]])
trainDf.columns = traincol
pd.to_pickle(trainDf, "trainDf.pkl")
```
## Training Classifiers and Tuning Hyperparameters
```
# result = pd.read_pickle("result_non_split.pkl")
labels = result.Class - 1
# trainDf = pd.read_pickle("trainDf.pkl")
# for cross Validation
kfold = StratifiedKFold(n_splits=5)
# split data into training data and testing data
X_train, X_test, y_train, y_test = train_test_split(
trainDf, labels, test_size=0.2, random_state=5, stratify=labels
)
# encode labels
le = LabelEncoder()
y_train = le.fit_transform(y_train)
y_test = le.transform(y_test)
encoded_test_y = np_utils.to_categorical((le.inverse_transform(y_test)))
### 1. Support Vector Machines
svr = SVC(probability=True)
# Hyperparameter tuning - Grid search cross validation
svr_CV = GridSearchCV(
svr,
param_grid={
"C": [0.1, 1, 10, 100],
"gamma": [1, 0.1, 0.01, 0.001],
"kernel": ["poly", "rbf", "sigmoid", "linear"],
"tol": [1e-4],
},
cv=kfold,
verbose=False,
n_jobs=-1,
)
svr_CV.fit(X_train, y_train)
svr_CV_best = svr_CV.best_estimator_
print("Best score: %0.3f" % svr_CV.best_score_)
print("Best parameters set:", svr_CV.best_params_)
# save the model
filename = "svr_CV_best.sav"
pickle.dump(svr_CV_best, open(filename, "wb"))
## load the model
# with (open(filename, "rb")) as openfile:
# svr_CV_best = pickle.load(openfile)
### 2. Logistic Regression
logreg = LogisticRegression(multi_class="multinomial")
# Hyperparameter tuning - Grid search cross validation
logreg_CV = GridSearchCV(
estimator=logreg,
param_grid={"C": np.logspace(-3, 3, 7), "penalty": ["l1", "l2"]},
cv=kfold,
verbose=False,
)
logreg_CV.fit(X_train, y_train)
logreg_CV_best = logreg_CV.best_estimator_
print("Best score: %0.3f" % logreg_CV.best_score_)
print("Best parameters set:", logreg_CV.best_params_)
# save the model
filename = "logreg_CV_best.sav"
pickle.dump(logreg_CV_best, open(filename, "wb"))
### 3. k-Nearest Neighbors
knn = KNeighborsClassifier()
# Hyperparameter tuning - Grid search cross validation
param_grid = {"n_neighbors": range(2, 10)}
knn_CV = GridSearchCV(
estimator=knn, param_grid=param_grid, cv=kfold, verbose=False
).fit(X_train, y_train)
knn_CV_best = knn_CV.best_estimator_
print("Best score: %0.3f" % knn_CV.best_score_)
print("Best parameters set:", knn_CV.best_params_)
# save the model
filename = "knn_CV_best.sav"
pickle.dump(knn_CV_best, open(filename, "wb"))
### 4. Random Forest
random_forest = RandomForestClassifier()
param_grid = {
"bootstrap": [True, False],
"max_depth": [5, 8, 10, 20, 40, 50, 60, 80, 100],
"max_features": ["auto", "sqrt"],
"min_samples_leaf": [1, 2, 4, 10, 20, 30, 40],
"min_samples_split": [2, 5, 10],
"n_estimators": [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000],
}
random_forest_CV = GridSearchCV(
estimator=random_forest, param_grid=param_grid, cv=kfold, verbose=False, n_jobs=-1
)
random_forest_CV.fit(X_train, y_train)
random_forest_CV_best = random_forest_CV.best_estimator_
print("Best score: %0.3f" % random_forest_CV.best_score_)
print("Best parameters set:", random_forest_CV.best_params_)
# save the model
filename = "random_forest_CV_best.sav"
pickle.dump(random_forest_CV_best, open(filename, "wb"))
### 5. Adaboost
param_grid = {
"base_estimator__max_depth": [5, 10, 20, 50, 100, 150, 200],
"n_estimators": [100, 500, 1000, 1500, 2000],
"learning_rate": [0.0001, 0.001, 0.01, 0.1, 1.0],
"algorithm": ["SAMME", "SAMME.R"],
}
Ada_Boost = AdaBoostClassifier(DecisionTreeClassifier())
Ada_Boost_CV = GridSearchCV(
estimator=Ada_Boost, param_grid=param_grid, cv=kfold, verbose=False, n_jobs=10
)
Ada_Boost_CV.fit(X_train, y_train)
Ada_Boost_CV_best = Ada_Boost_CV.best_estimator_
print("Best score: %0.3f" % Ada_Boost_CV.best_score_)
print("Best parameters set:", Ada_Boost_CV.best_params_)
# save the model
filename = "Ada_Boost_CV_best.sav"
pickle.dump(Ada_Boost_CV_best, open(filename, "wb"))
### 6. XGBoost
xgb_clf = xgb.XGBClassifier(objective="multi:softprob")
parameters = {
"n_estimators": [200, 300, 400],
"learning_rate": [0.001, 0.003, 0.005, 0.006, 0.01],
"max_depth": [4, 5, 6],
}
xgb_clf_cv = GridSearchCV(
estimator=xgb_clf, param_grid=parameters, n_jobs=-1, cv=kfold
).fit(X_train, y_train)
xgb_clf_cv_best = xgb_clf_cv.best_estimator_
print("Best score: %0.3f" % xgb_clf_cv.best_score_)
print("Best parameters set:", xgb_clf_cv.best_params_)
# Save the model
filename = "xgb_clf_cv_best.sav"
pickle.dump(xgb_clf_cv_best, open(filename, "wb"))
### 7. MLPClassifier
mlp = MLPClassifier()
param_grid = {
"hidden_layer_sizes": [i for i in range(5, 25, 5)],
"solver": ["sgd", "adam", "lbfgs"],
"learning_rate": ["constant", "adaptive"],
"max_iter": [500, 1000, 1200, 1400, 1600, 1800, 2000],
"alpha": [10.0 ** (-i) for i in range(-3, 6)],
"activation": ["tanh", "relu"],
}
mlp_GS = GridSearchCV(mlp, param_grid=param_grid, n_jobs=-1, cv=kfold, verbose=False)
mlp_GS.fit(X_train, y_train)
mlp_GS_best = mlp_GS.best_estimator_
print("Best score: %0.3f" % mlp_GS.best_score_)
print("Best parameters set:", mlp_GS.best_params_)
# save the model
filename = "mlp_GS_best.sav"
pickle.dump(mlp_GS_best, open(filename, "wb"))
### 8. Voting Classifier
Voting_ens = VotingClassifier(
estimators=[
("log", logreg_CV_best),
("rf", random_forest_CV_best),
("knn", knn_CV_best),
("svm", svr_CV_best),
],
n_jobs=-1,
voting="soft",
)
Voting_ens.fit(X_train, y_train)
# save the model
filename = "Voting_ens.sav"
pickle.dump(Voting_ens, open(filename, "wb"))
```
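The import cell brings in `balanced_accuracy_score`, `confusion_matrix`, `f1_score`, and `log_loss`, but this excerpt never applies them. A self-contained illustration of how any of the saved models could be scored on the held-out split (synthetic data here; with the real pipeline the same calls would take `y_test` and `model.predict(X_test)`):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, confusion_matrix, f1_score, log_loss
from sklearn.model_selection import train_test_split

# Small synthetic multi-class problem standing in for the gene/variation data.
X, y = make_classification(n_samples=400, n_classes=3, n_informative=6, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))
print("log loss:", log_loss(y_te, model.predict_proba(X_te)))
print(confusion_matrix(y_te, pred))
```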
# Diagonal permutation
Our goal in this notebook is
1. to create a distance matrix that reflects the distances between particular quarters of the Billboard Hot 100, 1960-2010, and measure Foote novelty on that matrix
2. to create a null model that will allow us to assess the significance of those Foote novelty measurements.
The underlying dataset is borrowed from Mauch et al., "The Evolution of Popular Music," but the methods we develop here can, we hope, be adapted to other domains.
This notebook was written up by Ted Underwood, in response to an insight about the appropriate null model suggested by Yuancheng Zhu.
We begin by reading in the data, which consists of principal components of a topic model that identifies "harmonic and timbral topics" in the music. The appropriateness of that dimension-reduction is not our central concern here; we're interested in what happens _after_ you've got points on a timeline characterized in some kind of dimension space.
```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import csv, random
import numpy as np
from scipy import spatial
pcafields = ['PC' + str(x) for x in range(1,15)]
# Here we just create a list of strings that will
# correspond to field names in the data provided
# by Mauch et al.
pca = list()
with open('nb/quarterlypca.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
newpcarow = []
for field in pcafields:
newpcarow.append(float(row[field]))
pca.append(newpcarow)
pca = np.array(pca)
print(pca.shape)
```
Now we have an array of 200 observations, each of which is characterized by 14 variables. Let's define a function to create a distance matrix by comparing each observation against all the others.
```
def distance_matrix(pca):
observations, dimensions = pca.shape
distmat = np.zeros((observations, observations))
for i in range(observations):
for j in range(observations):
dist = spatial.distance.cosine(pca[i], pca[j])
distmat[i, j] = dist
return distmat
d = distance_matrix(pca)
plt.rcParams["figure.figsize"] = [9.0, 6.0]
plt.matshow(d, origin = 'lower', cmap = plt.cm.YlOrRd, extent = [1960, 2010, 1960, 2010])
plt.show()
```
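The double loop above makes 40,000 pairwise calls on the 200×14 array. SciPy can build the same matrix in one vectorized call; an equivalent sketch:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_matrix_fast(pca):
    # pdist computes the condensed (upper-triangle) cosine distances;
    # squareform expands them into the full symmetric matrix with a zero diagonal.
    return squareform(pdist(pca, metric='cosine'))
```

For a symmetric metric like cosine distance this produces the same matrix as `distance_matrix`, just much faster.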
So far so good; that closely resembles the distance matrix seen in Mauch et al. Now let's calculate Foote novelties. There are two parts to this process. Calculating Foote novelties is done by sliding a smaller matrix along the diagonal of the distance matrix and then multiplying elementwise.
So first we have to create the smaller matrix, using the function make_foote. Then we pass that as a parameter to the function foote_novelty. Passing matrices of different size will calculate different windows of similarity. Below we define these two functions, and then calculate Foote novelties for a window with a five-year half-width.
```
def make_foote(quart):
tophalf = [-1] * quart + [1] * quart
bottomhalf = [1] * quart + [-1] * quart
foote = list()
for i in range(quart):
foote.append(tophalf)
for i in range(quart):
foote.append(bottomhalf)
foote = np.array(foote)
return foote
foote5 = make_foote(20)
# This gives us a Foote matrix with a five-year half-width.
# 5 becomes 20 because the underlying dataset has four
# "quarters" of data in each year.
def foote_novelty(distmat, foote):
axis1, axis2 = distmat.shape
assert axis1 == axis2
distsize = axis1
axis1, axis2 = foote.shape
assert axis1 == axis2
halfwidth = axis1 / 2
novelties = []
for i in range(distsize):
start = int(i - halfwidth)
end = int(i + halfwidth)
if start < 0 or end > (distsize - 1):
novelties.append(0)
else:
novelties.append(np.sum(foote * distmat[start: end, start: end]))
return novelties
def getyears():
years = []
for i in range(200):
years.append(1960 + i*0.25)
return years
years = getyears()
novelties = foote_novelty(d, foote5)
plt.plot(years, novelties)
plt.show()
print("Max novelty for a five-year half-width: " + str(np.max(novelties)))
```
## Testing significance
Okay, now we have functions that can test Foote novelty in a distance matrix. But how do we know whether the apparent peaks and troughs in the plot above represent statistically significant variation?
We need a "null model": a way of producing distance matrices that represent a random version of our data. But we need the _right kind_ of randomness. Foote novelty is sensitive to the distribution of values relative to the central diagonal timeline. So if we produce a "random model" where those values are evenly distributed, for instance by randomizing the underlying data and then calculating a distance matrix on it ...
```
randomized = np.array(pca)
np.random.shuffle(randomized)
randdist = distance_matrix(randomized)
plt.matshow(randdist, origin = 'lower', cmap = plt.cm.YlOrRd, extent = [1960, 2010, 1960, 2010])
plt.show()
```
That is far from an apples-to-apples null model. The problem is that the original data was _sequential,_ so distances between nearby points were usually smaller than distances between remote ones. That created the central "yellow path" running from lower left to upper right, following the diagonal timeline of quarters compared-to-themselves.
We need a better null model. The one below relies on a suggestion from Yuancheng Zhu, which was to permute values of the original distance matrix, and do it only _within_ diagonals. That way comparisons across a distance of (say) two quarters are permuted only with other two-quarter comparisons. I've added a small twist, which is to try to preserve the same underlying permutation for every diagonal (as far as possible), keying it to the x or y value for each point. That way vertically and horizontally-adjacent "pixels" of the matrix retain the same kind of "cross-hatched" correlation with each other that we saw in the original matrix. It's not perfect, but it's a reasonable approximation of a dataset where change is sequential, but randomly distributed.
```
def diagonal_permute(d):
newmat = np.zeros((200, 200))
# We create one randomly-permuted list of integers called "translate"
# that is going to be used for the whole matrix.
translate = [i for i in range(200)]
random.shuffle(translate)
# Because distances matrices are symmetrical, we're going to be doing
# two diagonals at once each time. We only need one set of values
# (because symmetrical) but we need two sets of indices in the original
# matrix so we know where to put the values back when we're done permuting
# them.
for i in range(0, 200):
indices1 = []
indices2 = []
values = []
for x in range(200):
y1 = x + i
y2 = x - i
if y1 >= 0 and y1 < 200:
values.append(d[x, y1])
indices1.append((x, y1))
if y2 >= 0 and y2 < 200:
indices2.append((x, y2))
# Okay, for each diagonal, we permute the values.
# We'll store the permuted values in newvalues.
# We also check to see how many values we have,
# so we can randomly select values if needed.
newvalues = []
lenvals = len(values)
vallist = [i for i in range(lenvals)]
for indexes, value in zip(indices1, values):
x, y = indexes
xposition = translate[x]
yposition = translate[y]
# We're going to key the randomization to the x, y
# values for each point, insofar as that's possible.
# Doing this will ensure that specific horizontal and
# vertical lines preserve the dependence relations in
# the original matrix.
# But the way we're doing this is to use the permuted
# x (or y) values to select an index in our list of
# values in the present diagonal, and that's only possible
# if the list is long enough to permit it. So we check:
if xposition < 0 and yposition < 0:
position = random.choice(vallist)
elif xposition >= lenvals and yposition >= lenvals:
position = random.choice(vallist)
elif xposition < 0:
position = yposition
elif yposition < 0:
position = xposition
elif xposition >= lenvals:
position = yposition
elif yposition >= lenvals:
position = xposition
else:
position = random.choice([xposition, yposition])
# If either x or y could be used as an index, we
# select randomly.
# Whatever index was chosen, we use it to select a value
# from our diagonal.
newvalues.append(values[position])
values = newvalues
# Now we lay down (both versions of) the diagonal in the
# new matrix.
for idxtuple1, idxtuple2, value in zip(indices1, indices2, values):
x, y = idxtuple1
newmat[x, y] = value
x, y = idxtuple2
newmat[x, y] = value
return newmat
newmat = diagonal_permute(d)
plt.matshow(newmat, origin = 'lower', cmap = plt.cm.YlOrRd, extent = [1960, 2010, 1960, 2010])
plt.show()
```
What happens if we now assess Foote novelties on this randomized matrix? What maximum and minimum values do we get?
```
novelties = foote_novelty(newmat, foote5)
years = getyears()
plt.plot(years, novelties)
plt.show()
print("Max novelty for five-year half-width:" + str(np.max(novelties)))
def zeroless(sequence):
newseq = []
for element in sequence:
if element > 0.01:
newseq.append(element)
return newseq
print("Min novelty for five-year half-width:" + str(np.min(zeroless(novelties))))
```
By repeatedly running that test, we can assess the likely range of random variation. It turns out that there are only two "peaks" in the dataset that are clearly and consistently p < 0.05: one in the early eighties, and one in the early nineties. The _slowing_ of change at the end of the nineties is also statistically significant.
```
def permute_test(distmatrix, yrwidth):
footematrix = make_foote(4 * yrwidth)
actual_novelties = foote_novelty(distmatrix, footematrix)
permuted_peaks = []
permuted_troughs = []
for i in range(100):
randdist = diagonal_permute(distmatrix)
nov = foote_novelty(randdist, footematrix)
nov = zeroless(nov)
permuted_peaks.append(np.max(nov))
permuted_troughs.append(np.min(nov))
permuted_peaks.sort(reverse = True)
permuted_troughs.sort(reverse = True)
threshold05 = permuted_peaks[4]
threshold01 = permuted_peaks[0]
threshold95 = permuted_troughs[94]
threshold99 = permuted_troughs[99]
print(threshold01)
print(threshold99)
significance = np.ones(len(actual_novelties))
for idx, novelty in enumerate(actual_novelties):
if novelty > threshold05 or novelty < threshold95:
significance[idx] = 0.049
if novelty > threshold01 or novelty < threshold99:
significance[idx] = 0.009
return actual_novelties, significance, threshold01, threshold05, threshold95, threshold99
def colored_segments(novelties, significance):
x = []
y = []
t = []
idx = 0
for nov, sig in zip(novelties, significance):
if nov > 1:
x.append(idx/4 + 1960)
y.append(nov)
t.append(sig)
idx += 1
x = np.array(x)
y = np.array(y)
t = np.array(t)
points = np.array([x,y]).transpose().reshape(-1,1,2)
segs = np.concatenate([points[:-1],points[1:]],axis=1)
lc = LineCollection(segs, cmap=plt.get_cmap('jet'))
lc.set_array(t)
return lc, x, y
novelties, significance, threshold01, threshold05, threshold95, threshold99 = permute_test(d, 5)
years = []
for i in range(200):
years.append(1960 + i*0.25)
plt.plot(years, novelties)
startpoint = years[0]
endpoint = years[199]
plt.hlines(threshold05, startpoint, endpoint, 'r', linewidth = 3)
plt.hlines(threshold95, startpoint, endpoint, 'r', linewidth = 3)
plt.show()
lc, x, y = colored_segments(novelties, significance)
plt.gca().add_collection(lc) # add the collection to the plot
plt.xlim(1960, 2010) # line collections don't auto-scale the plot
plt.ylim(y.min(), y.max())
plt.show()
```
## Visualization
Neither of the methods used above is terribly good as a visualization, so let's come up with a slightly better version: getting rid of the misleading "edges" and overplotting points to indicate the number of significant observations in particular periods.
```
def zeroless_seq(thefilter, filtereda, filteredb):
thefilter = np.array(thefilter)
filtereda = np.array(filtereda)
filteredb = np.array(filteredb)
filtereda = filtereda[thefilter > 0]
filteredb = filteredb[thefilter > 0]
thefilter = thefilter[thefilter > 0]
return thefilter, filtereda, filteredb
plt.clf()
plt.axis([1960, 2010, 45, 325])
novelties, significance, threshold01, threshold05, threshold95, threshold99 = permute_test(d, 5)
novelties, years, significance = zeroless_seq(novelties, getyears(), significance)
yplot = novelties[significance < 0.05]
xplot = years[significance < 0.05]
plt.scatter(xplot, yplot, c = 'red')
plt.plot(years, novelties)
years = getyears()
startpoint = years[0]
endpoint = years[199]
plt.hlines(threshold05, startpoint, endpoint, 'r', linewidth = 3)
plt.hlines(threshold95, startpoint, endpoint, 'r', linewidth = 3)
plt.show()
```
## Effect size
What about the effect size? Foote novelty is not really a direct measurement of the pace of change.
One way to measure it is to accept the periods defined by the visualization above, and compare change across each of those periods.
So, for instance, the significant points in the second peak range from 1990 to 1994, and the lowest trough is roughly 2001 to 2005. We can divide each of those periods in half, and compare the first half to the second half. It looks like Mauch et al. are roughly right about effect size: it's a sixfold difference.
```
def pacechange(startdate, enddate, pca):
years = getyears()
startidx = years.index(startdate)
endidx = years.index(enddate)
midpoint = int((startidx + endidx)/2)
firsthalf = np.zeros(14)
for i in range(startidx,midpoint):
firsthalf = firsthalf + pca[i]
secondhalf = np.zeros(14)
for i in range(midpoint, endidx):
secondhalf = secondhalf + pca[i]
return spatial.distance.cosine(firsthalf, secondhalf)
print(pacechange(1990, 1994, pca))
print(pacechange(2001, 2005, pca))
```
We can also get a mean value for the whole run.
```
thesum = 0
theobservations = 0
for i in range(1960, 2006):
theobservations += 1
thesum += pacechange(i, i+4, pca)
print(thesum / theobservations)
```
## Comparing multiple scales at once
If we wanted to, we could also overplot multiple scales of comparison with different half-widths. Doing this reveals one of the nice things about the "Foote novelty" method, which is that it remains relatively stable as you vary scales of comparison. The same cannot be said, for instance, of changepoint analysis!
In the cell below we've overplotted three-year, four-year, and five-year Foote novelties, highlighting in each case the specific quarters that have two-tailed p values lower than 0.05.
```
plt.axis([1960, 2010, 0, y.max() + 10])
def add_scatter(d, width):
novelties, significance, threshold01, threshold05, threshold95, threshold99 = permute_test(d, width)
novelties, years, significance = zeroless_seq(novelties, getyears(), significance)
yplot = novelties[significance < 0.05]
xplot = years[significance < 0.05]
plt.scatter(xplot, yplot, c = 'red')
plt.plot(years, novelties)
add_scatter(d, 3)
add_scatter(d, 4)
add_scatter(d, 5)
plt.ylabel('Foote novelty')
plt.show()
```
## Read SEG-Y with `obspy`
Before going any further, you might like to know, [What is SEG-Y?](http://www.agilegeoscience.com/blog/2014/3/26/what-is-seg-y.html). See also the articles in [SubSurfWiki](http://www.subsurfwiki.org/wiki/SEG_Y) and [Wikipedia](https://en.wikipedia.org/wiki/SEG_Y).
We'll use the [obspy](https://github.com/obspy/obspy) seismology library to read and write SEGY data.
Technical SEG-Y documentation:
* [SEG-Y Rev 1](http://seg.org/Portals/0/SEG/News%20and%20Resources/Technical%20Standards/seg_y_rev1.pdf)
* [SEG-Y Rev 2 proposal](https://www.dropbox.com/s/txrqsfuwo59fjea/SEG-Y%20Rev%202.0%20Draft%20August%202015.pdf?dl=0) and [draft repo](http://community.seg.org/web/technical-standards-committee/documents/-/document_library/view/6062543)
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
ls -l ../data/*.sgy
```
## 2D data
```
filename = '../data/HUN00-ALT-01_STK.sgy'
from obspy.io.segy.segy import _read_segy
section = _read_segy(filename)
# OPTIONS
# headonly=True — only reads the header info, then you can index in on-the-fly.
# unpack_headers=True — slows you down here and isn't really required.
data = np.vstack([t.data for t in section.traces])
plt.figure(figsize=(16,8))
plt.imshow(data.T, cmap="Greys")
plt.colorbar(shrink=0.5)
plt.show()
section.traces[0]
section.textual_file_header
```
Aargh...
OK, fine, we'll reformat this.
```
def chunk(string, width=80):
try:
# Make sure we don't have a ``bytes`` object.
string = string.decode()
    except AttributeError:
# String is already a string, carry on.
pass
lines = int(np.ceil(len(string) / width))
result = ''
for i in range(lines):
line = string[i*width:i*width+width]
result += line + (width-len(line))*' ' + '\n'
return result
s = section.textual_file_header.decode()
print(chunk(s))
section.traces[0]
t = section.traces[0]
t.npts
t.header
```
## 3D data
Either use the small volume, or **[get the large dataset from Agile's S3 bucket](https://s3.amazonaws.com/agilegeo/Penobscot_0-1000ms.sgy.gz)**
```
#filename = '../data/F3_very_small.sgy'
filename = '../data/Penobscot_0-1000ms.sgy'
from obspy.io.segy.segy import _read_segy
raw = _read_segy(filename)
data = np.vstack([t.data for t in raw.traces])
```
I happen to know that the shape of this dataset is 601 × 481.
```
_, t = data.shape
seismic = data.reshape((601, 481, t))
```
Note that we don't actually need to know the last dimension, if we already have two of the three dimensions. `np.reshape()` can compute it for us on the fly:
```
seismic = data.reshape((601, 481, -1))
```
Plot the result...
```
clip = np.percentile(seismic, 99)
fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(111)
plt.imshow(seismic[100,:,:].T, cmap="Greys", vmin=-clip, vmax=clip)
plt.colorbar(label="Amplitude", shrink=0.8)
ax.set_xlabel("Trace number")
ax.set_ylabel("Time sample")
plt.show()
```
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2016</p>
</div>
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Creating and Manipulating Tensors
**Learning Objectives:**
* Initialize and assign TensorFlow `Variable`s
* Create and manipulate tensors
* Refresh your knowledge of addition and multiplication in linear algebra (if these topics are new to you, see an introduction to matrix [addition](https://en.wikipedia.org/wiki/Matrix_addition) and [multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication))
* Familiarize yourself with basic TensorFlow math and array operations
```
from __future__ import print_function
import tensorflow as tf
```
## Vector Addition
You can perform many typical mathematical operations on tensors ([TF API](https://www.tensorflow.org/api_guides/python/math_ops)). The following code creates
and manipulates two vectors (1-D tensors), each with exactly six elements.
```
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
```
### Tensor Shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as a `list`, with the `i`th element representing the size along dimension `i`. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the [TensorFlow documentation](https://www.tensorflow.org/programmers_guide/tensors#shape).
A few basic examples:
```
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
```
### Broadcasting
In mathematics, you can only perform element-wise operations (e.g. *add* and *equals*) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports **broadcasting** (a concept borrowed from NumPy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
* If an operand requires a size `[6]` tensor, a size `[1]` or a size `[]` tensor can serve as an operand.
* If an operation requires a size `[4, 6]` tensor, any of the following sizes can serve as an operand:
* `[1, 6]`
* `[6]`
* `[]`
* If an operation requires a size `[3, 5, 6]` tensor, any of the following sizes can serve as an operand:
* `[1, 5, 6]`
* `[3, 1, 6]`
* `[3, 5, 1]`
* `[1, 1, 1]`
* `[5, 6]`
* `[1, 6]`
* `[6]`
* `[1]`
* `[]`
**NOTE:** When a tensor is broadcast, its entries are conceptually **copied**. (They are not actually copied for performance reasons; broadcasting was invented as a performance optimization.)
The full broadcasting ruleset is well described in the easy-to-read [NumPy broadcasting documentation](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html).
The following code performs the same tensor addition as before, but using broadcasting:
```
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
```
## Matrix Multiplication
In linear algebra, when multiplying two matrices, the number of *columns* of the first
matrix must equal the number of *rows* in the second matrix.
- It is **_valid_** to multiply a `3x4` matrix by a `4x2` matrix; this yields a `3x2` matrix.
- It is **_invalid_** to multiply a `4x2` matrix by a `3x4` matrix.
```
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
```
## Tensor Reshaping
With tensor addition and matrix multiplication each imposing constraints
on operands, TensorFlow programmers must frequently reshape tensors.
You can use the `tf.reshape` method to reshape a tensor.
For example, you can reshape an 8x2 tensor into a 2x8 tensor or a 4x4 tensor.
```
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
```
You can also use `tf.reshape` to change the number of dimensions (the "rank") of a tensor.
For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
```
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
```
### Exercise #1: Reshape two tensors in order to multiply them.
The following two vectors are incompatible for matrix multiplication:
* `a = tf.constant([5, 3, 2, 7, 1, 4])`
* `b = tf.constant([4, 6, 3])`
Reshape these vectors into compatible operands for matrix multiplication.
Then, invoke a matrix multiplication operation on the reshaped tensors.
```
# Write your code for Task 1 here.
```
### Solution
Click below for a solution.
```
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
    # can matrix multiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
```
## Variables, Initialization and Assignment
So far, all the operations we performed were on static values (`tf.constant`); calling `eval()` always returned the same result. TensorFlow allows you to define `Variable` objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):
```
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
```
One peculiarity of TensorFlow is that **variable initialization is not automatic**. For example, the following block will cause an error:
```
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
```
The easiest way to initialize a variable is to call `global_variables_initializer`. Note the use of `Session.run()`, which is roughly equivalent to `eval()`.
```
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
```
Once initialized, variables will maintain their value within the same session. However, when starting a new session, you will need to re-initialize them:
```
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
```
To change the value of a variable, use the `assign` op. Note that simply creating the `assign` op will not have any effect. As with initialization, you have to `run` the assignment op to update the variable value:
```
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
```
There are many more topics about variables that we did not cover here, such as loading and saving. To learn more, see the [TensorFlow docs](https://www.tensorflow.org/programmers_guide/variables).
### Exercise #2: Simulate 10 rolls of two dice.
Create a dice simulation, which generates a `10x3` 2-D tensor in which:
* Columns `1` and `2` each hold one roll of one die.
* Column `3` holds the sum of Columns `1` and `2` on the same row.
For example, the first row might hold the following values:
* Column `1` holds `4`
* Column `2` holds `3`
* Column `3` holds `7`
You'll need to explore the [TensorFlow documentation](https://www.tensorflow.org/api_guides/python/array_ops) to solve this task.
```
# Write your code for Task 2 here.
```
### Solution
Click below for a solution.
```
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
```
```
import os
import sys
pwd = os.getcwd()
sys.path.append(os.path.join(pwd, '..', '..'))
from server.utils import load_config
from server import db
from sqlalchemy.orm import sessionmaker
import json
import matplotlib.pyplot as plt
conf_dir = os.path.abspath(os.path.join(pwd, '..', '..', 'config', 'base.yaml'))
config = load_config(conf_dir)
engine = db.sync_engine(config['postgres'])
Session = sessionmaker(bind=engine)
session = Session()
from sqlalchemy.orm.exc import NoResultFound
import sqlalchemy as sa
from server.trade.VolumeStrategy import VolumeStrategy
from server.trade.player import START_TIME, END_TIME
rate = 1
table_name = 'demo_order' # 'order', 'demo_order'
table = getattr(db, table_name)
all_orders_flag = 1
start_date_filter = START_TIME #START_TIME - 2017-06-16 00:00
end_date_filter = END_TIME #END_TIME - 2017-06-18 23:59
pair = 'eth_usd' #VolumeStrategy.PAIR
buy_dir, sell_dir = pair.split("_")
cursor = session.query(table).filter(
(table.c.extra[VolumeStrategy.ORDER_CLASS.FLAG_NAME].astext == '1')
& (table.c.pair == pair)
& (table.c.pub_date > start_date_filter)
& (table.c.pub_date < end_date_filter)
).order_by(table.c.pub_date)
if all_orders_flag:
all_orders = session.query(table).filter(
(table.c.pair == pair)
& (table.c.pub_date > start_date_filter)
& (table.c.pub_date < end_date_filter)
).order_by(table.c.pub_date)
if rate:
rate = session.query(db.history).filter(
(db.history.c.pub_date > start_date_filter)
& (db.history.c.pub_date < end_date_filter)
& (db.history.c.pair == pair)
& (sa.sql.func.random() < 0.007)
).order_by(db.history.c.pub_date)
def as_dict(row):
item = row._asdict().copy()
return item
def get_fee(delta):
return (1 + (delta*(0.2 + 0)/100))
def calc_money(order):
if order['is_sell']:
buy_delta = -1
sell_delta = 1
fee_delta = -1
else:
buy_delta = 1
sell_delta = -1
fee_delta = 1
return {
sell_dir: sell_delta * order['amount'] * order['price'] * get_fee(fee_delta),
buy_dir: buy_delta * order['amount']
}
order_dict = {
'sell': [],
'buy': []
}
if all_orders_flag:
for order in all_orders:
order_dict['sell' if order.is_sell else 'buy'].append({
'date': order.pub_date,
'price': order.price
})
print('Orders - {}'.format(len(order_dict['sell']) + len(order_dict['buy'])))
if rate:
rate_dict = {
'date': [],
'sell_price': [],
'buy_price': []
}
for rate_info in rate:
resp = json.loads(rate_info.resp)
rate_dict['date'].append(rate_info.pub_date)
rate_dict['buy_price'].append(resp['asks'][0][0])
rate_dict['sell_price'].append(resp['bids'][0][0])
print('Prices - {}'.format(len(rate_dict['date'])))
plt.cla()
plt.clf()
plt.close('all')
fig_price = plt.figure(figsize=(20,10))
ay = fig_price.add_subplot(1,1,1)
ay.grid(True)
for name, vals in order_dict.items():
ay.plot(
list(map(lambda i: i['date'], vals)),
list(map(lambda i: i['price'], vals)),
color = 'red' if name == 'sell' else 'blue',
marker= '.'
)
if rate:
ay.plot(rate_dict['date'], rate_dict['buy_price'], alpha=0.2, color='blue')
ay.plot(rate_dict['date'], rate_dict['sell_price'], alpha=0.2, color='red')
fig_price
change_info = []
for index, order in enumerate(all_orders):
order = as_dict(order)
money_change = calc_money(order)
if not index:
change_info.append({
buy_dir: money_change[buy_dir],
sell_dir: money_change[sell_dir],
'price': order['price'],
'date': order['pub_date'],
'sum': money_change[sell_dir] + (money_change[buy_dir] * order['price'])
})
last = change_info[index]
else:
last = change_info[index-1]
change_info.append({
buy_dir: last[buy_dir] + money_change[buy_dir],
sell_dir: last[sell_dir] + money_change[sell_dir],
'price': order['price'],
'date': order['pub_date'],
'sum': money_change[sell_dir] + (money_change[buy_dir] * order['price'])
})
'''
print('{}, id {}, parent {} sum {}'.format(
money_change,
order['id'],
order['extra'].get('parent'),
last['']
))
'''
last = change_info[len(change_info)-1]
print('Total: {} {} {} {} sum {}'.format(
last[sell_dir], sell_dir, last[buy_dir], buy_dir, last[sell_dir]+(last[buy_dir]*last['price']))
)
index
plt.cla()
plt.clf()
fig = plt.figure(figsize=(20,10))
ay = fig.add_subplot(1,1,1)
ay.grid(True)
ay.plot(
list(map(lambda i: i['date'], change_info)),
list(map(lambda i: i[sell_dir]+(i[buy_dir]*i['price']), change_info)),
'r-', linewidth=1
)
fig
level = set()
def get_parent(item):
parent = session.query(table).filter(
(table.c.id == str(item['extra']['parent'])
#| (table.c.extra['merged'].astext == str(item.id))
)
& (table.c.pub_date > start_date_filter)
& (table.c.pub_date < end_date_filter)).one()
return as_dict(parent)
def iter_order(item, tail):
if item['extra'].get('parent'):
parent = get_parent(item)
iter_order(parent, tail)
tail.append([parent, item])
else:
tail.append([None, item])
pairs_list = []
for index, order in enumerate(cursor):
order = as_dict(order)
iter_order(order, pairs_list)
len(pairs_list)
new_sum = 0
for parent, child in pairs_list:
if not parent:
continue
parent_dir = 'Sell' if parent['is_sell'] else 'Buy'
child_dir = 'Sell' if child['is_sell'] else 'Buy'
print('{} before price {} amount {} now {} price {} amount {}'.format(
parent_dir,
parent['price'],
parent['amount'],
child_dir,
child['price'],
child['amount']
))
if parent['is_sell']:
price_diff = parent['price'] - child['price']
else:
price_diff = child['price'] - parent['price']
new_sum += price_diff * child['amount']
new_sum
```
# Classification, and scaling up our networks
In this guide we introduce neural networks for classification, and using Keras, we will begin to try solving problems and datasets that are much bigger than the toy datasets like Iris we have used so far. In this notebook, we will define classification and introduce several innovations we have to make in order to do it correctly, and we will also use two large standard datasets which have been used by machine learning scientists for many years: MNIST and CIFAR-10.
### Classification
Classification is a task in which all data points are assigned some discrete category. For regression of a single output variable, we have a single output neuron, as we have seen in previous notebooks. For doing multi-class classification, we instead have an output neuron for each of the possible classes, and say that the predicted output is the one corresponding to the neuron which has the highest output value. For example, given the task of classifying images of handwritten digits (which we will introduce later), we might build a neural network like the following, having 10 output neurons for each of the 10 digits.

Before trying out neural networks for classification, we have to introduce two new concepts: the softmax function, and cross-entropy loss.
### Softmax activation
So far, we have learned about one activation function: the sigmoid. We saw that we typically use it in all the layers of a neural network except the output layer, which is usually left linear (without an activation function) for the task of regression. In classification, however, it is very typical to pass the final layer's outputs through the [softmax activation function](https://en.wikipedia.org/wiki/Softmax_function). Given a final-layer output vector $z$, the softmax function is defined as follows:
$$\sigma(\mathbf{z})_{i} = \frac{e^{z_{i}}}{\sum_{j} e^{z_{j}}}$$
where the denominator $\sum_{j} e^{z_{j}}$ is a sum over all the classes. Softmax squashes the output $z$, which is unbounded, to values between 0 and 1, and dividing by the sum over all the classes means that the outputs sum to 1. This means we can interpret the output as class probabilities.
We will use the softmax activation for the classification output layer from here on out. A short example follows:
```
import matplotlib.pyplot as plt
import numpy as np
def softmax(Z):
    Z = np.exp(Z)         # exponentiate each element
    return Z / np.sum(Z)  # normalize so the outputs sum to 1
Z = [0.0, 2.3, 1.0, 0, 5.3, 0.0]
y = softmax(Z)
print("Z =", Z)
print("y =", y)
```
We were given a length-6 vector $Z$ containing the values $[0.0, 2.3, 1.0, 0, 5.3, 0.0]$. We ran it through the softmax function, and we plot it below:
```
plt.bar(range(len(y)), y)
```
Notice that because of the exponential nature of $e$, the 5th value in $Z$, 5.3, has an over 90% probability. Softmax tends to exaggerate the differences in the original output.
### Categorical cross-entropy loss
We introduced loss functions in the last guide, and we used the simple mean-squared error (MSE) function to evaluate the performance of our network. While MSE works nicely for regression, and can work for classification as well, it is generally not preferred for classification: class labels are not naturally continuous, so a continuous error like MSE is not an especially relevant or "natural" measure of classification performance. Instead, what scientists generally prefer for classification is [categorical cross-entropy loss](https://en.wikipedia.org/wiki/Cross_entropy).
A discussion or derivation of cross-entropy loss is beyond the scope of this class but a good introduction to it can be [found here](https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/). A discussion of what makes it superior to MSE for classification can be found [here](https://jamesmccaffrey.wordpress.com/2013/11/05/why-you-should-use-cross-entropy-error-instead-of-classification-error-or-mean-squared-error-for-neural-network-classifier-training/). We will just focus on its properties instead.
Letting $y_i$ denote the ground truth value of class $i$, and $\hat{y}_i$ be our prediction of class $i$, the cross-entropy loss is defined as:
$$ H(y, \hat{y}) = -\sum_{i} y_i \log \hat{y}_i $$
If the number of classes is 2, we can expand this:
$$ H(y, \hat{y}) = -{(y\log(\hat{y}) + (1 - y)\log(1 - \hat{y}))}\ $$
Notice that as our predicted probability for the correct class approaches 1, the cross-entropy approaches 0. For example, if $y=1$, then as $\hat{y}\rightarrow 1$, $H(y, \hat{y}) \rightarrow 0$. If our probability for the correct class approaches 0 (the exact wrong prediction), e.g. if $y=1$ and $\hat{y} \rightarrow 0$, then $H(y, \hat{y}) \rightarrow \infty$.
This is true in the more general $M$-class cross-entropy loss as well, $ H(y, \hat{y}) = -\sum_{i} y_i \log \hat{y}_i $, where if our prediction is very close to the true label, then the entropy loss is close to 0, whereas the more dissimilar the prediction is to the true class, the higher it is.
Minor note: in practice, a very small $\epsilon$ is added to the log, e.g. $\log(\hat{y}+\epsilon)$ to avoid $\log 0$ which is undefined.
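To make these properties concrete, here is a small NumPy sketch (my addition, not part of the original notebook) that evaluates the cross-entropy for a confident correct prediction and a confident wrong one:

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    # add a small epsilon inside the log to avoid log(0)
    return -np.sum(y * np.log(y_hat + eps))

y = np.array([0, 0, 1])              # one-hot: true class is index 2
good = np.array([0.05, 0.05, 0.9])   # confident and correct
bad = np.array([0.9, 0.05, 0.05])    # confident and wrong

print(cross_entropy(y, good))  # about 0.105 -> close to 0
print(cross_entropy(y, bad))   # about 3.0  -> large
```

As expected, the loss is near zero for the confident correct prediction and much larger for the confident wrong one.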
### MNIST: the "hello world" of classification
In the last guide, we introduced [Keras](https://www.keras.io). We will now use it to solve a classification problem, that of MNIST. First, let's import Keras and the other python libraries we will need.
```
import os
import matplotlib.pyplot as plt
import numpy as np
import random
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.layers import Activation
```
We are also now going to scale up our setup by using a much more complicated dataset than Iris: [MNIST](http://yann.lecun.com/exdb/mnist/), a dataset of 70,000 28x28-pixel grayscale images of handwritten digits, manually labeled with the 10 digit classes and split into a canonical training set and test set. We can load MNIST with the following code:
```
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_classes = 10
```
Let's see what the data is packaged like:
```
print('%d train samples, %d test samples'% (x_train.shape[0], x_test.shape[0]))
print("training data shape: ", x_train.shape, y_train.shape)
print("test data shape: ", x_test.shape, y_test.shape)
```
Let's look at some samples of the images.
```
samples = np.concatenate([np.concatenate([x_train[i] for i in [int(random.random() * len(x_train)) for i in range(16)]], axis=1) for i in range(4)], axis=0)
plt.figure(figsize=(16,4))
plt.imshow(samples, cmap='gray')
```
As before, we need to pre-process the data for Keras. To do so, we will reshape the image arrays from $n$x28x28 to $n$x784, so each row of the data is the full "unrolled" list of pixels, and we will ensure they are float32 for precision. We then normalize the pixel values (which are naturally between 0 and 255) so that they are all between 0 and 1 instead.
```
# reshape to input vectors
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
# make float32
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# normalize to (0-1)
x_train /= 255
x_test /= 255
```
In classification, we will eventually structure our neural networks so that they have $n$ output neurons, 1 for each class. The idea is whichever output neuron has the highest value at the end is the predicted class. For this, we must structure our labels as "one-hot" vectors, which are vectors of length $n$ where $n$ is the number of classes, and the elements are all 0 except for the correct label, which is 1. For example, an image of the number 3 would be:
$[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]$
And for the number 7 it would be:
$[0, 0, 0, 0, 0, 0, 0, 1, 0, 0]$
Notice we are zero-indexed again, so the first element is for the digit 0.
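As a sketch of what Keras' `to_categorical` will do for us in the next cell, one-hot encoding can be written in a couple of lines of NumPy (this helper is illustrative only, not part of Keras):

```python
import numpy as np

def one_hot(label, num_classes):
    v = np.zeros(num_classes, dtype=int)
    v[label] = 1  # set only the position of the true class
    return v

print(one_hot(3, 10))  # [0 0 0 1 0 0 0 0 0 0]
print(one_hot(7, 10))  # [0 0 0 0 0 0 0 1 0 0]
```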
```
print("first sample of y_train before one-hot vectorization", y_train[0])
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print("first sample of y_train after one-hot vectorization", y_train[0])
```
Now let's make a neural network for MNIST. We'll give it two layers of 100 neurons each, with sigmoid activations. Then we will make the output layer go through a softmax activation, the standard for classification.
```
model = Sequential()
model.add(Dense(100, activation='sigmoid', input_dim=784))
model.add(Dense(100, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
```
Thus, the network has 784 * 100 = 78,400 weights in the first layer, 100 * 100 = 10,000 weights in the second layer, and 100 * 10 = 1,000 weights in the output layer, plus 100 + 100 + 10 = 210 biases, giving us a total of 78,400 + 10,000 + 1,000 + 210 = 89,610 parameters. We can see this in the summary.
```
model.summary()
```
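The count above can also be double-checked by hand; this short snippet (my addition) reproduces the arithmetic:

```python
# (inputs, outputs) per Dense layer of the network above
layers = [(784, 100), (100, 100), (100, 10)]
# each layer has inputs*outputs weights plus one bias per output neuron
total = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(total)  # 89610
```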
We will compile our model to optimize for the categorical cross-entropy loss as described earlier, and we will use SGD as our optimizer again. We will also include the optional argument `metrics` to keep track of the accuracy during training, in addition to just the loss. The accuracy is the % of samples classified correctly.
```
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
```
We now train our network for 20 epochs and use a batch size of 100. We will talk in more detail later on how to choose these hyper-parameters. We use our validation set to evaluate our performance.
```
model.fit(x_train, y_train,
batch_size=100,
epochs=20,
verbose=1,
validation_data=(x_test, y_test))
```
Evaluate the performance of the network.
```
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
After 20 epochs, we have an accuracy of around 87%. Perhaps we can train it for a bit longer to get better performance? Let's run fit again for another 20 epochs. Notice that as long as we don't recompile the model, we can keep running fit to try to improve the model. So we know that we don't necessarily have to decide ahead of time how long to train for, we can keep training as we see fit.
```
model.fit(x_train, y_train,
batch_size=100,
epochs=20,
verbose=1,
validation_data=(x_test, y_test))
```
At this point our accuracy is at 90%. This seems not too bad! Random guesses would only get us 10% accuracy, so we must be doing something right. But 90% is not acceptable for MNIST. The current record for MNIST has 99.8% accuracy, which means our model makes roughly 50 times as many errors as the best network (a 10% error rate versus 0.2%).
So how can we improve it? What if we make the network bigger? And train for longer? Let's give it two layers of 256 neurons each, and then train for 60 epochs.
```
model = Sequential()
model.add(Dense(256, activation='sigmoid', input_dim=784))
model.add(Dense(256, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
model.summary()
```
The network now has 269,322 parameters, which is more than 3 times as many as the last network. Now train it.
```
model.fit(x_train, y_train,
batch_size=100,
epochs=60,
verbose=1,
validation_data=(x_test, y_test))
```
Surprisingly, this new network only achieves 91.6% accuracy, which is only a bit better than the last one.
So maybe bigger is not better! The problem is that just making the network bigger has diminishing improvements for us. We are going to need to make more improvements to get good results. We will introduce some improvements in the next notebook.
Before we do that, let's try what we have so far with [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html). CIFAR-10 is a dataset which contains 60,000 32x32 RGB color images (three channels) of airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
The next cell will import that dataset, and tell us about its shape.
```
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
num_classes = 10
print('%d train samples, %d test samples'%(x_train.shape[0], x_test.shape[0]))
print("training data shape: ", x_train.shape, y_train.shape)
print("test data shape: ", x_test.shape, y_test.shape)
```
Let's look at a random sample of images from CIFAR-10.
```
samples = np.concatenate([np.concatenate([x_train[i] for i in [int(random.random() * len(x_train)) for i in range(16)]], axis=1) for i in range(6)], axis=0)
plt.figure(figsize=(16,6))
plt.imshow(samples)
```
As with MNIST, we need to pre-process the data by converting to float32 precision, reshaping so each row is a single input vector, and normalizing between 0 and 1.
```
# reshape to input vectors
x_train = x_train.reshape(50000, 32*32*3)
x_test = x_test.reshape(10000, 32*32*3)
# make float32
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# normalize to (0-1)
x_train /= 255
x_test /= 255
```
Convert labels to one-hot vectors.
```
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
```
Let's try the same kind of fully-connected architecture on CIFAR-10, using two hidden layers of 100 neurons each, and see how it does. Note that the `input_dim` of the first layer is no longer 784 as it was for MNIST, but now it is 32x32x3=3072. This means we will have more parameters in this network than the MNIST network of the same architecture.
```
model = Sequential()
model.add(Dense(100, activation='sigmoid', input_dim=3072))
model.add(Dense(100, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
This network now has 318,410 parameters, compared to the 89,610 of the MNIST network with the same two 100-neuron hidden layers. Let's compile it to learn with SGD and the same categorical cross-entropy loss function.
```
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
```
Now train for 60 epochs, same batch size.
```
model.fit(x_train, y_train,
batch_size=100,
epochs=60,
verbose=1,
validation_data=(x_test, y_test))
```
After 60 epochs, our network only has an accuracy of 40%. Still better than random guesses (10%) but 40% is terrible. The current record for CIFAR-10 accuracy is 97%. So we have a long way to go!
In the next notebook, we will introduce convolutional neural networks, which will greatly improve our performance.
```
import sympy
from sympy import I, conjugate, pi, oo
from sympy.functions import exp
from sympy.matrices import Identity, MatrixSymbol
import numpy as np
```
# \#3
## Fourier matrix with $\omega$
```
ω = sympy.Symbol("omega")
def omegaPow(idx):
    return ω**idx
def fourierGenerator(N):
    return sympy.Matrix(N, N, lambda m, n: omegaPow(m * n))
assert fourierGenerator(2) == sympy.Matrix([[1, 1], [1, ω]])
fourierGenerator(4)
```
## Complex conjugate F
```
conjugate(fourierGenerator(4))
```
## Substitute $\omega=e^{\frac{2\pi i}{N}}$, so $\omega^{mn}=e^{\frac{2\pi i\,mn}{N}}$
```
def omegaSubstirution(idx, N):
    return sympy.exp((I * 2 * pi * idx) / N)
assert omegaSubstirution(1, 4) == I
assert omegaSubstirution(8, 8) == 1
assert omegaSubstirution(3, 6) == -1
```
## Generate Fourier matrix with values
```
def fourierGeneratorWithExp(N):
    return sympy.Matrix(N, N, lambda m, n: omegaSubstirution(m * n, N))
assert fourierGeneratorWithExp(2) == sympy.Matrix([[1, 1], [1, -1]])
F4 = fourierGeneratorWithExp(4)
F4
```
## Matrix conjugate
```
F4Conj = conjugate(F4)
assert Identity(4).as_explicit() == (1 / 4) * F4 * F4Conj
F4Conj
```
## Conjugate generator with $\omega$
```
def fourierConjGenerator(N):
    return sympy.Matrix(N, N, lambda m, n: omegaPow(0 if m == 0 else (N - m) * n))
fourierConjGenerator(4)
```
## Conjugate generator with values
```
def fourierConjGeneratorWithExp(N):
    return sympy.Matrix(
        N, N, lambda m, n: omegaSubstirution(0 if m == 0 else (N - m) * n, N))
F4ConjWithExp = fourierConjGeneratorWithExp(4)
assert F4Conj == F4ConjWithExp
F4ConjWithExp
```
## Permutation Generator
```
def generatePermutationMatrix(N):
    return np.vstack((np.hstack((np.array([1]), np.zeros(
        N - 1, dtype=int))), np.zeros((N - 1, N), dtype=int))) + np.fliplr(
            np.diagflat(np.ones(N - 1, dtype=int), -1))
assert np.all(
generatePermutationMatrix(4) == np.array([[1, 0, 0, 0], [0, 0, 0, 1],
[0, 0, 1, 0], [0, 1, 0, 0]]))
generatePermutationMatrix(4)
```
## $$F=P{\cdot}\overline{F}$$
```
P4 = generatePermutationMatrix(4)
assert F4 == P4 * F4Conj
P4 * F4Conj
```
## $$P^2 = I$$
```
assert np.all(np.linalg.matrix_power(P4, 2) == np.identity(4, dtype=int))
```
## $$\frac{1}{N}{\cdot}F^2 = P$$
```
assert np.all(((1 / 4) * np.linalg.matrix_power(F4, 2)).astype(int) == P4)
```
# \#4
```
ID = sympy.Matrix(
np.hstack((np.vstack((np.identity(2, dtype=int), np.identity(2,
dtype=int))),
np.vstack((np.identity(2, dtype=int) * np.diag(F4)[0:2],
-np.identity(2, dtype=int) * np.diag(F4)[0:2])))))
ID
ID.inv()
F2 = fourierGeneratorWithExp(2)
Zeros2 = np.zeros((2, 2), dtype=int)
F22 = sympy.Matrix(
np.array(np.vstack((np.hstack((F2, Zeros2)), np.hstack((Zeros2, F2))))))
F22
F22.inv()
EvenOddP = sympy.Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0],
[0, 0, 0, 1]])
EvenOddP
assert ID * F22 * EvenOddP == F4
ID * F22 * EvenOddP
F4
```
# \#5
$$
F^{T}_{N} =
\left(
\begin{bmatrix}
I_{N/2} & D_{N/2} \\[0.3em]
I_{N/2} & -D_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
F_{N/2} & \\[0.3em]
& F_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
even_{N/2} & even_{N/2} \\[0.3em]
odd_{N/2} & odd_{N/2} \\[0.3em]
\end{bmatrix}
\right)^T
=
\begin{bmatrix}
even_{N/2} & odd_{N/2} \\[0.3em]
even_{N/2} & odd_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
F_{N/2} & \\[0.3em]
& F_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
I_{N/2} & I_{N/2} \\[0.3em]
D_{N/2} & -D_{N/2} \\[0.3em]
\end{bmatrix}
$$
```
EvenOddP * F22 * sympy.Matrix.transpose(ID)
EvenOddP * F22 * sympy.Matrix.transpose(ID) == F4
```
# \#6
### Based on Permutation Matrix for the Conjugate Matrix - even / odd Permutation
```
N = 6
P = generatePermutationMatrix(N)
EvenOddP = sympy.Matrix(
np.vstack((
P[0],
P[np.arange(N-2, 1, -2)],
P[np.arange(N-1, 0, -2)]
))
)
EvenOddP
def generatePermutationFourierMatrix(N):
    P = generatePermutationMatrix(N)
    return sympy.Matrix(
        np.vstack((
            P[0],
            P[np.arange(N-2, 1, -2)],
            P[np.arange(N-1, 0, -2)]
        ))
    )
assert np.all(generatePermutationFourierMatrix(4) == sympy.Matrix([
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]
]))
```
### Shape of the Permutation Matrix
$$
\begin{bmatrix}
even_{N/2} & even_{N/2} \\[0.3em]
odd_{N/2} & odd_{N/2} \\[0.3em]
\end{bmatrix}
$$
### N = 8 Fourier Permutation Matrix
```
generatePermutationFourierMatrix(8)
generatePermutationFourierMatrix(6)
```
## Generate Fourier Matrices
### Generate Fourier Blocks for basic Matrix:
#### Half Eye: $I_{N/2}$
#### Half Fourier Diagonal: $D_{N/2}$
#### Half Fourier: $F_{N/2}$
```
def generateFourierBlocks(N, fourierGenerator):
    half = int(N/2)
    quarterEye = sympy.Matrix.eye(half, half)
    quarterZeroes = sympy.Matrix.zeros(half, half)
    halfDiagFourier = sympy.Matrix.diag(
        np.diagonal(fourierGenerator(N))
    )[:half, :half]
    halfFourier = fourierGenerator(half)
    return (quarterEye, quarterZeroes, halfDiagFourier, halfFourier)
Blocks4 = generateFourierBlocks(4, fourierGenerator)
assert Blocks4 == (
sympy.Matrix([
[1, 0],
[0, 1]
]),
sympy.Matrix([
[0, 0],
[0, 0]
]),
sympy.Matrix([
[1, 0],
[0, ω]
]),
sympy.Matrix([
[1, 1],
[1, ω]
])
)
Blocks4Complex = generateFourierBlocks(4, fourierGeneratorWithExp)
assert Blocks4Complex == (
sympy.Matrix([
[1, 0],
[0, 1]
]),
sympy.Matrix([
[0, 0],
[0, 0]
]),
sympy.Matrix([
[1, 0],
[0, I]
]),
sympy.Matrix([
[1, 1],
[1, -1]
])
)
def composeFourierMatricesFromBlocks(N, quarterEye, quarterZeroes, halfDiagFourier, halfFourier):
    return (
        sympy.Matrix.vstack(
            sympy.Matrix.hstack(quarterEye, halfDiagFourier),
            sympy.Matrix.hstack(quarterEye, -halfDiagFourier)
        ),
        sympy.Matrix.vstack(
            sympy.Matrix.hstack(halfFourier, quarterZeroes),
            sympy.Matrix.hstack(quarterZeroes, halfFourier)
        ),
        generatePermutationFourierMatrix(N)
    )
def createFourierMatrices(N, fourierGenerator):
    (quarterEye, quarterZeroes, halfDiagFourier, halfFourier) = generateFourierBlocks(N, fourierGenerator)
    return composeFourierMatricesFromBlocks(N, quarterEye, quarterZeroes, halfDiagFourier, halfFourier)
```
## Generate Fourier Matrices
### IdentityAndDiagonal: $
\begin{bmatrix}
I_{N/2} & D_{N/2} \\[0.3em]
I_{N/2} & -D_{N/2} \\[0.3em]
\end{bmatrix}
$
### FourierHalfNSize: $
\begin{bmatrix}
F_{N/2} & \\[0.3em]
& F_{N/2} \\[0.3em]
\end{bmatrix}
$
### EvenOddPermutation: $
\begin{bmatrix}
even_{N/2} & even_{N/2} \\[0.3em]
odd_{N/2} & odd_{N/2} \\[0.3em]
\end{bmatrix}
$
### Full picture
$$
F_{N}=\begin{bmatrix}
I_{N/2} & D_{N/2} \\[0.3em]
I_{N/2} & -D_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
F_{N/2} & \\[0.3em]
& F_{N/2} \\[0.3em]
\end{bmatrix}
\cdot
\begin{bmatrix}
even_{N/2} & even_{N/2} \\[0.3em]
odd_{N/2} & odd_{N/2} \\[0.3em]
\end{bmatrix}
$$
```
def generateFourierMatricesWithOmega(N):
    return createFourierMatrices(N, fourierGenerator)
IdentityAndDiagonal, FHalfNSize, EvenOddPermutation = generateFourierMatricesWithOmega(4)
assert sympy.Matrix([
[1, 0, 1, 0],
[0, 1, 0, ω],
[1, 0, -1, 0],
[0, 1, 0, -ω],
]) == IdentityAndDiagonal
IdentityAndDiagonal
assert sympy.Matrix([
[1, 1, 0, 0],
[1, ω, 0, 0],
[0, 0, 1, 1],
[0, 0, 1, ω]
]) == FHalfNSize
FHalfNSize
assert sympy.Matrix([
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]
]) == EvenOddPermutation
EvenOddPermutation
def generateFourierMatricesWithExp(N):
    return createFourierMatrices(N, fourierGeneratorWithExp)
IdentityAndDiagonal, FourierHalfNSize, EvenOddPermutation = generateFourierMatricesWithExp(4)
assert sympy.Matrix([
[1, 0, 1, 0],
[0, 1, 0, I],
[1, 0, -1, 0],
[0, 1, 0, -I]
]) == IdentityAndDiagonal
IdentityAndDiagonal
assert sympy.Matrix([
[1, 1, 0, 0],
[1, -1, 0, 0],
[0, 0, 1, 1],
[0, 0, 1, -1]
]) == FourierHalfNSize
FourierHalfNSize
assert sympy.Matrix([
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]
]) == EvenOddPermutation
EvenOddPermutation
```
## Solution for \#6
```
IdentityAndDiagonal = sympy.Matrix([
[1, 0, 0, 1, 0, 0],
[0, 1, 0, 0, ω, 0],
[0, 0, 1, 0, 0, ω**2],
[1, 0, 0, -1, 0, 0],
[0, 1, 0, 0, -ω, 0],
[0, 0, 1, 0, 0, -ω**2],
])
IdentityAndDiagonal
FourierHalfNSize = sympy.Matrix([
[1, 1, 1, 0, 0, 0],
[1, ω**2, ω**4, 0, 0, 0],
[1, ω**4, ω**2, 0, 0, 0],
[0, 0, 0, 1, 1, 1],
[0, 0, 0, 1, ω**2, ω**4],
[0, 0, 0, 1, ω**4, ω**2],
])
FourierHalfNSize
EvenOddPermutation
FourierBasedOnMatrices = IdentityAndDiagonal * FourierHalfNSize * EvenOddPermutation
FourierBasedOnMatrices
# assert FourierBasedOnMatrices == fourierGeneratorWithExp(N)
fourierGenerator(N)
N = 4
sympy.Matrix(
sympy.fft(sympy.Matrix.eye(2, 2), 1)
)
N = 16
half = int(N/2)
sympy.fft(sympy.Matrix.eye(2), 1)
sympy.Matrix.eye(half)
```
```
def model_hyperparam_search(layers, activation_functions=['tanh', 'softmax', 'relu']):
    iterations = len(activation_functions)**layers
    af_combs = make_pairwise_list(max_depth=layers, options=activation_functions)
    print(f'{layers}\t{activation_functions}\t{iterations} iterations required')
    for iteration in range(iterations):
        print(f"running iteration {iteration}")
        print("create input layer")
        for layer in range(layers):
            print(f"create hidden layer {layer} of type {af_combs[iteration][layer]}")
        print("create output layer")
        print("")
def layer_search(hidden_layers=[1, 3, 5], activation_functions=None):
    for layer_count in hidden_layers:
        if not activation_functions:
            model_hyperparam_search(layer_count)
        else:
            model_hyperparam_search(layer_count, activation_functions)
layer_search()
def make_pairwise_list(max_depth=2, options=['tanh', 'softmax', 'relu']):
    combinations = []
    for i in range(len(options)**max_depth):
        state = []
        for depth in range(max_depth):
            if depth == 0:
                state.append(options[i // len(options)**(max_depth-1)])
            elif depth == max_depth - 1:
                state.append(options[i % len(options)])
            else:
                state.append(options[i // len(options)**depth % len(options)])
        combinations.append(state)
    return combinations
options = make_combo()
combinations = make_pairwise_list(max_depth=5,options=options)
combinations[:100]
A = [[11,12],[21,22],[31,32]]
len(A[1])
for a in A:
    print(a)
def make_combo(option1=['tanh', 'relu', 'linear'], option2=['0.1', '0.01', '0.001']):
    parameter_combo = []
    for i in option1:
        for j in option2:
            parameter_combo.append([i, j])
    return parameter_combo
make_combo()
3**5
l = []
l.append(3)
l.append('foo')
l[1]  # the last valid index is 1; l[3] would raise IndexError
for i in range(8):
    print(i)
def do_not_use_example(max_depth=2, options=['tanh', 'softmax', 'relu']):
    state = []
    for i in range(max_depth):
        state.append(0)  # first entry in options
    for i in range(len(options)**max_depth):
        print(f"{i}")
        depth = 1
        print(f" {i // len(options)**(max_depth-1)}")
        print(f" {i // len(options)**depth % len(options)}")
        print(f" {i % len(options)}")
def make_pairwise_list_old(max_depth=2, options=['tanh', 'softmax', 'relu']):
    state = []
    for i in range(max_depth):
        state.append(0)  # first entry in options
    for i in range(len(options)**max_depth):
        for depth in range(max_depth):
            if depth == 0:
                print(f"{i:4}: {i // len(options)**(max_depth-1)}", end=' ')
            elif depth == max_depth - 1:
                print(f"{i % len(options)}", end=' ')
            else:
                print(f"{i // len(options)**depth % len(options)}", end=' ')
        print("")
```
<a href="https://colab.research.google.com/github/Dmitri9149/Transformer_From_Scratch/blob/main/Final_Working_Transformer_MXNet_v6_76800_128_10_24_10_20.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install -U mxnet-cu101==1.7.0
!pip install d2l==0.14.4
### !pip install ipython-autotime
### %load_ext autotime
import collections
import math
import os
import time
from d2l import mxnet as d2l
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
npx.set_np()
```
The code for a Transformer from scratch is collected here. The code is mostly from http://d2l.ai/chapter_attention-mechanisms/transformer.html . I added many comments at the most difficult points, and I hope the additional code and comments will help in better understanding the Transformer.
This is the original article for the Transformer :
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems (pp. 5998–6008).
Future work:
1. Train the Transformer on a big dataset.
2. Translation from (to) English to Finnish.
3. Modify the architecture of the Transformer.
4. Better tokenization and preprocessing.
### Attention Mechanism
#### Masked softmax
This is an important auxiliary function.
""" The masked softmax takes a 3-dimensional input and enables us to filter out some elements by specifying a valid length for the last dimension.... As a result, any value outside the valid length will be masked as 0.""" (citation from d2l.ai).
The notion of valid length comes from the need to add a special <pad> token when a sentence is shorter than the fixed length we use for all sentences in a batch. The <pad> tokens should not participate in prediction.
My comments start with ### ;
comments with a single # are from the original d2l.ai code.
Some functions for plotting and for downloading specific files are still taken from the d2l.ai library on GitHub: https://github.com/d2l-ai/d2l-en/blob/master/d2l/mxnet.py . But the biggest part of the code is collected here (and commented).
```
### from d2l.ai
def masked_softmax(X, valid_len):
    """Perform softmax by filtering out some elements."""
    # X: 3-D tensor, valid_len: 1-D or 2-D tensor
    ### why a 3-D tensor?
    ### first dimension: the number of samples in the batch
    ###
    ### second dimension: the number of queries
    ### (we may have several queries)
    ###
    ### we may set the valid lengths to be the same for every sample in the
    ### batch, i.e. a 1-D valid_len of size (batch_size,);
    ### "the same" here means: independent of the queries.
    ### On the contrary: we may set valid lengths individually for every
    ### sample in the batch and for every query; in that case it is a
    ### 2-D valid_len of size (batch_size, number of queries)
    ###
    ### the third dimension corresponds to the number of key/value pairs
    ###
    ### We need valid_len when: 1. we <pad> the end of a sentence that is
    ### too short, i.e. shorter than num_steps; 2. we use valid_len in the
    ### decoder during training, where every word of the target sentence is
    ### used as a query: the query may see all the words to its left, but
    ### not to the right (see the encoder/decoder code below). We use
    ### valid_len to handle that case too.
    ###
    if valid_len is None:
        return npx.softmax(X)
    else:
        shape = X.shape
        if valid_len.ndim == 1:
            valid_len = valid_len.repeat(shape[1], axis=0)
        else:
            valid_len = valid_len.reshape(-1)
        # Fill masked elements with a large negative, whose exp is 0
        X = npx.sequence_mask(X.reshape(-1, shape[-1]), valid_len, True,
                              axis=1, value=-1e6)
        return npx.softmax(X).reshape(shape)
### from d2l.ai
masked_softmax(np.random.uniform(size=(2, 2, 4)), np.array([2, 3]))
### 2 - number of samples in the batch
### 2 - we deal with 2 queries
### 4 - four key/value pairs
### for the first sample in our batch, of the 4 pairs we take into
### account only the results from the first 2 pairs; the rest are multiplied
### by 0, because those pairs correspond to <pad> tokens
### for the second sample (4 key/value pairs) we take into account
### only the results of the first 3 key/value pairs (the rest are masked
### with 0, because they correspond to <pad> tokens)
### this is the meaning of np.array([2, 3]) as the valid length
### the valid length does not depend on the queries in this case
### from d2l.ai
npx.batch_dot(np.ones((2, 1, 3)), np.ones((2, 3, 2)))
### one more example with a 1-D valid length
valid_length = np.array([2, 3])
### the shape is (2,): a one-dimensional valid length
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform(size=(2, 3, 5)), valid_length)
### if we declare a 2-D valid_length
valid_length = np.array([[3, 5, 4], [2, 4, 1], [1, 4, 3], [1, 2, 3]])
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform(size=(4, 3, 5)), valid_length)
### Let us consider the first sample in our batch:
### [[0.21225105, 0.31475353, 0.4729953 , 0.        , 0.        ,
###   0.        ],
###  [0.19417836, 0.20596693, 0.16711308, 0.15453914, 0.27820238,
###   0.        ],
###  [0.2753876 , 0.21671425, 0.30811197, 0.19978616, 0.        ,
###   0.        ]],
### the third dimension in np.random.uniform(size=(4, 3, 5)) corresponds to
### the key/value pairs (that is why each line has that many entries)
### the second dimension means the results are obtained from
### 3 queries, which is why there are 3 lines per sample
###
### Below there are 4 groups, because the first dimension, the
### number of samples (batch size), is 4
###
### np.array([[3, 5, 4], [2, 4, 1], [1, 4, 3], [1, 2, 3]])
### is a 2-D array (of size 4 * 3 in our case):
### 4 is the batch size, 3 is the number of queries; we have 4 groups with
### 3 lines each. The [3, 5, 4] subarray corresponds to the first sample in
### the batch: in the first group, the first line has its first 3 elements
### non-zero, the second line its first 5, and the third line its first 4.
### Dot product attention
#### Why we need it, how it is calculated
We have a query of dimension `d`.
We have #kv_pairs key/value pairs, where every key is a vector of dimension `d` and every value a vector of dimension `dim_v`. We pass the query through the 'grid' of #kv_pairs keys and get #kv_pairs scores: we take the dot product of the query with every key, and normalize the scores by dividing by $\sqrt{d}$.
If we have a batch of size batch_size and #queries queries, we get a tensor of scores of size (batch_size, #queries, #kv_pairs).
After the (masked) softmax, this is the attention_weights tensor.
We also have a tensor 'value' of values of size (batch_size, #kv_pairs, dim_v).
Finally, npx.batch_dot(attention_weights, value) gives a tensor of size (batch_size, #queries, dim_v), which corresponds to passing our queries through the 'grid' of key/value pairs: for every query and every sample in the batch we get a transformed vector of size dim_v.
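Before the MXNet implementation, here is a plain-NumPy sketch (my addition, without masking or dropout) that traces the shapes just described; the sizes are arbitrary examples:

```python
import numpy as np

batch_size, n_queries, n_kv, d, dim_v = 2, 3, 4, 5, 7
query = np.random.rand(batch_size, n_queries, d)
key = np.random.rand(batch_size, n_kv, d)
value = np.random.rand(batch_size, n_kv, dim_v)

# dot every query with every key and scale by sqrt(d)
scores = query @ key.transpose(0, 2, 1) / np.sqrt(d)   # (batch, #queries, #kv_pairs)
# softmax over the key/value axis gives the attention weights
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ value                               # (batch, #queries, dim_v)
print(output.shape)  # (2, 3, 7)
```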
```
### from d2l.ai book
class DotProductAttention(nn.Block):
    def __init__(self, dropout, **kwargs):
        super(DotProductAttention, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)

    # `query`: (`batch_size`, #queries, `d`)
    # `key`: (`batch_size`, #kv_pairs, `d`)
    # `value`: (`batch_size`, #kv_pairs, `dim_v`)
    # `valid_len`: either (`batch_size`, ) or (`batch_size`, xx)
    def forward(self, query, key, value, valid_len=None):
        d = query.shape[-1]
        # Set transpose_b=True to swap the last two dimensions of key
        scores = npx.batch_dot(query, key, transpose_b=True) / math.sqrt(d)
        attention_weights = self.dropout(masked_softmax(scores, valid_len))
        return npx.batch_dot(attention_weights, value)
if False:
### the code from d2l.ai
atten = DotProductAttention(dropout=0.5)
atten.initialize()
### batch size of 2, #kv_pairs = 10, every key is vector of size 2 with
### ones : (1.,1.)
keys = np.ones((2, 10, 2))
### we start with vector which keep float numbers from 0 to 39;
### reshape it to tensor which model one sample batch with 10 key/value pairs and
### dimension of values dim_v = 4; finally we repeat the construction to get 2
### similar samples (batch with 2 samples).
values = np.arange(40).reshape(1, 10, 4).repeat(2, axis=0)
atten(np.ones((2, 1, 2)), keys, values, np.array([2, 6]))
if False:
atten = DotProductAttention(dropout=0.5)
atten.initialize()
keys = np.ones((3,10,5)) # keys in batch of size 3; for every line in batch we have
### 10 key/value pairs, where every key is a 5-dimensional vector (and every value will be a 7-dimensional vector);
### each key is forming pair with value, there are 10 such pairs
values = np.arange(70).reshape(1,10,7).repeat(3, axis =0) # values in batch of
### size 3; 10 values, each a 7-dimensional vector;
### in our batch the 3 samples are identical by construction
queries = np.ones((3,4,5)) # queries in batch of size 3; there are 4 queries,
### where every query is a vector of size 5 (same size as a key)
atten(queries, keys, values, np.array([3, 8, 6])) # batch of size 3;
### 4 queries per sample in the batch, where every query is a vector of size 5
### the valid_len is 1-D
### for the 3 samples the valid lengths are 3, 8 and 6:
### 3 for the first sample, 8 for the second, 6 for the last
### the outputs are:
### for every entry in the batch (for every of the 3 samples)
### for every of 4 queries
### total : 3*4 = 12 final values: vectors of size 7
### the values are different for different samples in the batch,
### because we used different valid lengths,
### but within every sample group in the batch (same sample, different queries),
### all 4 final values are the same:
### even though we use 4 queries, all the queries are equal in our case
```
### Multihead Attention
""" The *multi-head attention* layer consists of $h$ parallel self-attention layers, each one is called a *head*. For each head, before feeding into the attention layer, we project the queries, keys, and values with three dense layers with hidden sizes $p_q$, $p_k$, and $p_v$, respectively. The outputs of these $h$ attention heads are concatenated and then processed by a final dense layer.

Assume that the dimension for a query, a key, and a value are $d_q$, $d_k$, and $d_v$, respectively. Then, for each head $i=1,\ldots, h$, we can train learnable parameters
$\mathbf W_q^{(i)}\in\mathbb R^{p_q\times d_q}$,
$\mathbf W_k^{(i)}\in\mathbb R^{p_k\times d_k}$,
and $\mathbf W_v^{(i)}\in\mathbb R^{p_v\times d_v}$. Therefore, the output for each head is
$$\mathbf o^{(i)} = \mathrm{attention}(\mathbf W_q^{(i)}\mathbf q, \mathbf W_k^{(i)}\mathbf k,\mathbf W_v^{(i)}\mathbf v),$$
where $\textrm{attention}$ can be any attention layer, such as the `DotProductAttention` and `MLPAttention` as we introduced in :numref:`sec_attention`.
After that, the output with length $p_v$ from each of the $h$ attention heads are concatenated to be an output of length $h p_v$, which is then passed the final dense layer with $d_o$ hidden units. The weights of this dense layer can be denoted by $\mathbf W_o\in\mathbb R^{d_o\times h p_v}$. As a result, the multi-head attention output will be
$$\mathbf o = \mathbf W_o \begin{bmatrix}\mathbf o^{(1)}\\\vdots\\\mathbf o^{(h)}\end{bmatrix}.$$
Now we can implement the multi-head attention. Assume that the multi-head attention contain the number heads `num_heads` $=h$, the hidden size `num_hiddens` $=p_q=p_k=p_v$ are the same for the query, key, and value dense layers. In addition, since the multi-head attention keeps the same dimensionality between its input and its output, we have the output feature size $d_o =$ `num_hiddens` as well. """ (citation from d2l.ai book).
There is an inconsistency in the d2l.ai text, which states:
$p_q$ = $p_k$ = $p_v$ = num_hiddens,
and
$d_o =$ `num_hiddens` as well.
So $W_o$ would be a transformation from an input of size (num_heads * num_hiddens) to an output of size (num_hiddens). If h > 1, the input and output sizes cannot be equal. But in the d2l.ai PyTorch code we have:
self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)
with equal input and output sizes. This is hidden in the d2l.ai
MXNet code, self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False), because a
Gluon Dense layer states only the output dimension (num_hiddens in this case); the input dimension is inferred.
The code below (from the d2l.ai book) also assumes that num_hiddens is a multiple of num_heads. The main text of the book makes no such assumption, but the code relies on it.
The only interpretation of the code below I can give now:
$p_v$ * num_heads = num_hiddens (and likewise for $p_q$ and $p_k$),
but not $p_v$ = num_hiddens.
I will interpret the code under this assumption.
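Under this assumption the sizes do work out; a tiny arithmetic check, with hypothetical numbers that mirror the test cell further below:

```python
# hypothetical sizes, chosen to mirror the d2l.ai test cell below
num_hiddens, num_heads = 100, 10
p_v = num_hiddens // num_heads        # per-head projection size: 10
# each of the num_heads heads emits a vector of size p_v;
# the concatenation feeds W_o, whose input size must be num_hiddens
concat_size = num_heads * p_v
# this only matches because p_v = num_hiddens / num_heads;
# with the text's literal reading p_v = num_hiddens, the
# concatenation would have size num_heads * num_hiddens instead
```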
```
### from d2l.ai
class MultiHeadAttention(nn.Block):
def __init__(self, num_hiddens, num_heads, dropout, use_bias=False, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.num_heads = num_heads
self.attention = d2l.DotProductAttention(dropout)
### here, as I understand, num_hiddens = num_heads * p_v
### where p_v (see the text above) is the dimension of the vector
### to which a query is transformed by a single head,
### so p_v = (num_hiddens / num_heads);
### this explains what the code below does
self.W_q = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_k = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_v = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### if every head transforms a query of size `dim` = num_hiddens to
### p_v = p_q = p_k = (num_hiddens / num_heads), then when we
### concatenate num_heads of such outputs, we get a
### vector of size num_hiddens again;
### this explains the input / output dimensions for W_o:
### input and output have the same dimension = num_hiddens
self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### every query generates num_heads outputs, which we concatenate into
### one vector of dimension num_hiddens: so the combined output of every
### query has size num_hiddens;
### to apply attention we de-concatenate the combined output
### into num_heads separate outputs per query,
### each of size (num_hiddens / num_heads), and
### simultaneously recombine them into a single batch,
### which increases the total batch size to (batch_size * num_heads)
### We have to adjust valid_len to take into account that
### the num_heads query transformations are now combined in a single batch.
### After applying attention, we make the reverse operation:
### locate the batch samples which correspond to the outputs of the same query
### in different heads, and concatenate them again into one combined output.
### The batch size decreases and the output length increases by the
### same factor num_heads.
### These are the roles of the transpose_qkv and transpose_output functions below:
def forward(self, query, key, value, valid_len):
# For self-attention, `query`, `key`, and `value` shape:
# (`batch_size`, `seq_len`, `dim`), where `seq_len` is the length of
# input sequence. `valid_len` shape is either (`batch_size`, ) or
# (`batch_size`, `seq_len`).
# Project and transpose `query`, `key`, and `value` from
# (`batch_size`, `seq_len`, `num_hiddens`) to
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
query = transpose_qkv(self.W_q(query), self.num_heads)
key = transpose_qkv(self.W_k(key), self.num_heads)
value = transpose_qkv(self.W_v(value), self.num_heads)
if valid_len is not None:
# Copy `valid_len` by `num_heads` times
if valid_len.ndim == 1:
valid_len = np.tile(valid_len, self.num_heads)
else:
valid_len = np.tile(valid_len, (self.num_heads, 1))
# For self-attention, `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = self.attention(query, key, value, valid_len)
# `output_concat` shape: (`batch_size`, `seq_len`, `num_hiddens`)
output_concat = transpose_output(output, self.num_heads)
return self.W_o(output_concat)
### from d2l.ai
def transpose_qkv(X, num_heads):
# Input `X` shape: (`batch_size`, `seq_len`, `num_hiddens`).
# Output `X` shape:
# (`batch_size`, `seq_len`, `num_heads`, `num_hiddens` / `num_heads`)
X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
# `X` shape:
# (`batch_size`, `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
X = X.transpose(0, 2, 1, 3)
# `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = X.reshape(-1, X.shape[2], X.shape[3])
return output
### from d2l.ai
def transpose_output(X, num_heads):
# A reversed version of `transpose_qkv`
X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
X = X.transpose(0, 2, 1, 3)
return X.reshape(X.shape[0], X.shape[1], -1)
if False:
### from d2l.ai
### num_hiddens = 100, num_heads=10
cell = MultiHeadAttention(100, 10, 0.5)
cell.initialize()
X = np.ones((2, 4, 5))
valid_len = np.array([2, 3])
cell(X, X, X, valid_len).shape
if False:
### it correspond to scenario size of embedding is 512 ; num_heads = 8 ;
### num_hiddens = 512
cell = MultiHeadAttention(512, 8, 0.5)
cell.initialize()
# num of batches is 3 ; seq_len is 20 ; size of embedding is 512
X = np.ones((3, 20, 512))
valid_len = np.array([15,17,12])
cell(X, X, X, valid_len).shape
```
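The split/merge performed by `transpose_qkv` and `transpose_output` can be checked to be exact inverses with plain NumPy (the reshapes and transposes are the same as in the Gluon code above; the input values are arbitrary):

```python
import numpy as np

def transpose_qkv(X, num_heads):
    # (batch, seq_len, num_hiddens) ->
    # (batch * num_heads, seq_len, num_hiddens / num_heads)
    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
    X = X.transpose(0, 2, 1, 3)
    return X.reshape(-1, X.shape[2], X.shape[3])

def transpose_output(X, num_heads):
    # the reverse of transpose_qkv
    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
    X = X.transpose(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)

X = np.arange(2 * 4 * 8).reshape(2, 4, 8)  # batch=2, seq_len=4, num_hiddens=8
split = transpose_qkv(X, num_heads=2)      # -> (4, 4, 4)
merged = transpose_output(split, num_heads=2)
```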
### Position-wise encoding
This is equivalent to applying two 1 * 1 convolutional layers: it extracts
position-independent features of the word representations (in the same way convolution layers are applied in image-recognition networks).
""" Similar to the multi-head attention, the position-wise feed-forward network will only change the last dimension size of the input—the feature dimension. In addition, if two items in the input sequence are identical, the according outputs will be identical as well. """ (citation from d2l.ai)
```
### from d2l.ai
class PositionWiseFFN(nn.Block):
def __init__(self, ffn_num_hiddens, pw_num_outputs, **kwargs):
super(PositionWiseFFN, self).__init__(**kwargs)
self.dense1 = nn.Dense(ffn_num_hiddens, flatten=False,
activation='relu')
self.dense2 = nn.Dense(pw_num_outputs, flatten=False)
def forward(self, X):
return self.dense2(self.dense1(X))
if False:
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
ffn(np.ones((2, 3, 4)))[0]
```
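The property quoted above (identical positions in the input give identical outputs, and only the last dimension changes) can be checked with a plain NumPy sketch of the two dense layers; the weights here are random stand-ins, not the trained parameters:

```python
import numpy as np
rng = np.random.default_rng(0)

# hypothetical weights for the two position-wise dense layers
W1 = rng.normal(size=(4, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 4)); b2 = rng.normal(size=4)

def position_wise_ffn(X):
    # applied independently at every position: only the last axis changes
    H = np.maximum(X @ W1 + b1, 0.0)   # dense + relu
    return H @ W2 + b2                 # second dense layer

X = rng.normal(size=(2, 3, 4))
X[0, 2] = X[0, 0]                      # make two positions identical
Y = position_wise_ffn(X)
```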
### Add and Norm
""" we add a layer that contains a residual structure and a layer normalization after both the multi-head attention layer and the position-wise FFN network. Layer normalization is similar to batch normalization ........ One difference is that the mean and variances for the layer normalization are calculated along the last dimension, e.g X.mean(axis=-1) instead of the first batch dimension, e.g., X.mean(axis=0). Layer normalization prevents the range of values in the layers from changing too much, which allows faster training and better generalization ability. """ (citation from d2l.ai)
```
if False:
### from d2l.ai
layer = nn.LayerNorm()
layer.initialize()
batch = nn.BatchNorm()
batch.initialize()
X = np.array([[1, 2], [2, 3]])
# Compute mean and variance from `X` in the training mode
with autograd.record():
print('layer norm:', layer(X), '\nbatch norm:', batch(X))
```
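The axis difference quoted above can be reproduced directly without Gluon; a minimal sketch of both normalizations on the same 2x2 matrix:

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 3.0]])

def layer_norm(X, eps=1e-5):
    # statistics along the last (feature) axis, i.e. per sample
    mu = X.mean(axis=-1, keepdims=True)
    var = X.var(axis=-1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def batch_norm(X, eps=1e-5):
    # statistics along the first (batch) axis, i.e. per feature
    mu = X.mean(axis=0, keepdims=True)
    var = X.var(axis=0, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

ln, bn = layer_norm(X), batch_norm(X)
```

Layer norm normalizes each row, batch norm each column, so the two results differ in where the +1/-1 pattern lands.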
"""AddNorm accepts two inputs X and Y. We can deem X as the original input in the residual network, and Y as the outputs from either the multi-head attention layer or the position-wise FFN network. In addition, we apply dropout on Y for regularization.""" citation from d2l.ai
```
### from d2l.ai
class AddNorm(nn.Block):
def __init__(self, dropout, **kwargs):
super(AddNorm, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
self.ln = nn.LayerNorm()
def forward(self, X, Y):
return self.ln(self.dropout(Y) + X)
if False:
### d2l.ai
add_norm = AddNorm(0.5)
add_norm.initialize()
add_norm(np.ones((2, 3, 4)), np.ones((2, 3, 4))).shape
```
### Positional Encoding
```
### I used this code as an alternative to the original positional encoding;
### it just encodes the position of words (tokens) in the sentence;
### it changes the results, but they are still quite good.
if False:
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=100):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
### max_len correspond to sequence length ;
### num_hiddens correspond to embedding size
###
self.P = np.zeros((1, max_len, num_hiddens))
### X = np.arange(0, max_len).reshape(-1, 1) / np.power(
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
### self.P[:, :, 0::2] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
###################### my code, be careful !!!!!
X = np.arange(0, max_len).reshape(-1, 1) / max_len
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::1] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
################################
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=1000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
### max_len correspond to sequence length ;
### num_hiddens correspond to embedding size
self.P = np.zeros((1, max_len, num_hiddens))
X = np.arange(0, max_len).reshape(-1, 1) / np.power(
10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::2] = np.sin(X)
self.P[:, :, 1::2] = np.cos(X)
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
if False:
### from d2l.ai
### num_hiddens = 20 , dropout = 0
pe = PositionalEncoding(20, 0)
pe.initialize()
### we assume batch_size = 1; max_length = 100 corresponds to the number of tokens (here, words) in our line;
### num_hiddens = 20 (embedding size)
###
Y = pe(np.zeros((1, 100, 20)))
### dim corresponds to a coordinate in the embedding vector of our tokens (words)
d2l.plot(np.arange(100), Y[0, :, 4:8].T, figsize=(6, 2.5),
legend=["dim %d" % p for p in [4, 5, 6, 7]])
```
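The sinusoidal table built by the original `PositionalEncoding` above can be written in plain NumPy; the point is that every position gets a distinct vector:

```python
import numpy as np

def positional_table(max_len, num_hiddens):
    # same construction as in the d2l.ai class, minus the batch axis
    P = np.zeros((max_len, num_hiddens))
    X = np.arange(max_len).reshape(-1, 1) / np.power(
        10000, np.arange(0, num_hiddens, 2) / num_hiddens)
    P[:, 0::2] = np.sin(X)   # even embedding dimensions
    P[:, 1::2] = np.cos(X)   # odd embedding dimensions
    return P

P = positional_table(100, 20)   # 100 positions, embedding size 20
```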
### Encoder
"""Armed with all the essential components of Transformer, let us first build a Transformer encoder block. This encoder contains a multi-head attention layer, a position-wise feed-forward network, and two “add and norm” connection blocks. As shown in the code, for both of the attention model and the positional FFN model in the EncoderBlock, their outputs’ dimension are equal to the num_hiddens. This is due to the nature of the residual block, as we need to add these outputs back to the original value during “add and norm”. """ (citation from d2l.ai)
```
### from d2l.ai
### this block will not change the input shape
class EncoderBlock(nn.Block):
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias=False, **kwargs):
super(EncoderBlock, self).__init__(**kwargs)
self.attention = MultiHeadAttention(num_hiddens, num_heads, dropout,
use_bias)
self.addnorm1 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm2 = AddNorm(dropout)
def forward(self, X, valid_len):
### we add the original input of the attention block to the block's
### output and normalize the result using AddNorm
Y = self.addnorm1(X, self.attention(X, X, X, valid_len))
return self.addnorm2(Y, self.ffn(Y))
```
""" Now it comes to the implementation of the entire Transformer encoder. With the Transformer encoder, $n$ blocks of `EncoderBlock` stack up one after another. Because of the residual connection, the embedding layer size $d$ is same as the Transformer block output size. Also note that we multiply the embedding output by $\sqrt{d}$ to prevent its values from being too small. """ (citation from d2l.ai)
```
### from d2l.ai
class Encoder(nn.Block):
"""The base encoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Encoder, self).__init__(**kwargs)
def forward(self, X, *args):
raise NotImplementedError
### from d2l.ai
class TransformerEncoder(Encoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, use_bias=False, **kwargs):
super(TransformerEncoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
self.blks = nn.Sequential()
for _ in range(num_layers):
self.blks.add(
EncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias))
### the order of steps:
### first we apply positional encoding to the initial word vectors
### FROM HERE: then, several times:
### apply multi-head attention
### apply Add Norm
### apply the position-wise transformation
### apply Add Norm
### and again... go to FROM HERE
def forward(self, X, valid_len, *args):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X = blk(X, valid_len)
return X
```
### Decoder
""" During training, the output for the $t$-query could observe all the previous key-value pairs. It results in an different behavior from prediction. Thus, during prediction we can eliminate the unnecessary information by specifying the valid length to be $t$ for the $t^\textrm{th}$ query. """
(citation from d2l.ai)
```
### from d2l.ai
class DecoderBlock(nn.Block):
# `i` means it is the i-th block in the decoder
### the i will be initialized from the TransformerDecoder block
### the block will be used in TransformerDecoder in a stack:
### several blocks will be arranged in sequence, and the output from
### one block will be the input to the next block
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads,
dropout, i, **kwargs):
super(DecoderBlock, self).__init__(**kwargs)
self.i = i
### in the block we will apply (MultiHeadAttention + AddNorm),
### then again (MultiHeadAttention + AddNorm);
### then we will apply PositionWiseFFN
self.attention1 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm1 = AddNorm(dropout)
self.attention2 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm2 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm3 = AddNorm(dropout)
def forward(self, X, state):
### we use state[0] and state[1] to keep the output from TransformerEncoder:
### enc_outputs and enc_valid_len,
### which correspond to the sentences we are translating (sentences in the
### language FROM which we translate);
### state[0] and state[1] are received from the enclosing
### TransformerDecoder block as shared state;
enc_outputs, enc_valid_len = state[0], state[1]
# `state[2][i]` contains the past queries for this block
### on the first call of this block, at this place in the code,
### the stored queries are None, see the code in TransformerDecoder:
###
### def init_state(self, enc_outputs, enc_valid_len, *args):
### return [enc_outputs, enc_valid_len, [None]*self.num_layers]
###
### the state is initialized from EncoderDecoder
### using the 'init_state' function (see above); as
### we can see, the third element of the list is None per layer;
### 'init_state' determines the 'state' passed to TransformerDecoder,
### and in the code above we use state[0] and state[1] to determine
### 'enc_outputs' and 'enc_valid_len' in this block
if state[2][self.i] is None:
key_values = X
else:
### the past inputs are concatenated with X and used as the new
### 'grid' of key/value pairs
key_values = np.concatenate((state[2][self.i], X), axis=1)
state[2][self.i] = key_values
if autograd.is_training():
### here we are in training mode;
### below, in the 'attention' call, we will use X as the queries;
### X corresponds to all words in the target sentence during training;
### seq_len corresponds to the length of the whole target sentence;
### we will use seq_len queries for every sample in the batch;
### what matters for us is the following:
### the first query from the sentence has to be constrained
### to the first key/value pair; the second to the first two key/value pairs,
### etc...
### that is why valid_len is generated this way:
batch_size, seq_len, _ = X.shape
# Shape: (batch_size, seq_len), the values in the j-th column
# are j+1
### while training, for each query in the target sentence we only take
### into account the result of passing it through the 'grid' of
### key/value pairs to the left of the query;
### every query in the target sequence has its own valid_len, and
### the valid_len corresponds to the position of the query in the
### sentence
valid_len = np.tile(np.arange(1, seq_len + 1, ctx=X.ctx),
(batch_size, 1))
else:
valid_len = None
### the attention mechanism is used on key_values corresponding
### to the target sentence key_values (then AddNorm is applied)
X2 = self.attention1(X, key_values, key_values, valid_len)
Y = self.addnorm1(X, X2)
### the attention mechanism is used on TransformerEncoder outputs
### key_values as the 'grid' (then AddNorm is applied);
### the key/values are the learned pairs
### which are originated from the source sentence
Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_len)
Z = self.addnorm2(Y, Y2)
return self.addnorm3(Z, self.ffn(Z)), state
### from d2l.ai
class Decoder(nn.Block):
"""The base decoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Decoder, self).__init__(**kwargs)
def init_state(self, enc_outputs, *args):
raise NotImplementedError
def forward(self, X, state):
raise NotImplementedError
### from d2l.ai
class TransformerDecoder(Decoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, **kwargs):
super(TransformerDecoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.num_layers = num_layers
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
### sequential application of several DecoderBlock's
self.blks = nn.Sequential()
for i in range(num_layers):
self.blks.add(
DecoderBlock(num_hiddens, ffn_num_hiddens, num_heads,
dropout, i))
self.dense = nn.Dense(vocab_size, flatten=False)
def init_state(self, enc_outputs, enc_valid_len, *args):
return [enc_outputs, enc_valid_len, [None]*self.num_layers]
def forward(self, X, state):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X, state = blk(X, state)
return self.dense(X), state
### from d2l.ai
### this block couples together TransformerEncoder and TransformerDecoder
###
class EncoderDecoder(nn.Block):
"""The base class for the encoder-decoder architecture."""
def __init__(self, encoder, decoder, **kwargs):
super(EncoderDecoder, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
def forward(self, enc_X, dec_X, *args):
### the enc_outputs are moved to decoder from encoder;
### the coupling happens in this point of code
enc_outputs = self.encoder(enc_X, *args)
### initial decoder state: dec_state is calculated using enc_outputs
### and used as 'state' in TransformerDecoder
dec_state = self.decoder.init_state(enc_outputs, *args)
### use initial state + input dec_X to the decoder to calculate
### the decoder output
return self.decoder(dec_X, dec_state)
```
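The `valid_len = 1..seq_len` trick used in `DecoderBlock` during training is exactly a causal mask; a framework-free NumPy sketch of how it constrains each query (all-zero scores, so the visible weights come out uniform):

```python
import numpy as np

def causal_masked_softmax(scores):
    # scores: (seq_len, seq_len); query t may only see keys 0..t,
    # i.e. valid_len for row t is t+1 (the j-th column rule above)
    seq_len = scores.shape[0]
    mask = np.arange(seq_len)[None, :] <= np.arange(seq_len)[:, None]
    scores = np.where(mask, scores, -1e9)      # hide future positions
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

w = causal_masked_softmax(np.zeros((4, 4)))
```

Row 0 attends only to position 0, row 1 splits its weight over positions 0 and 1, and so on.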
### Training
```
### from d2l.ai
### because of the padding (and valid_length) we have to filter out some entries
class MaskedSoftmaxCELoss(gluon.loss.SoftmaxCELoss):
# `pred` shape: (`batch_size`, `seq_len`, `vocab_size`)
# `label` shape: (`batch_size`, `seq_len`)
# `valid_len` shape: (`batch_size`, )
def forward(self, pred, label, valid_len):
# weights shape: (batch_size, seq_len, 1)
weights = np.expand_dims(np.ones_like(label), axis=-1)
weights = npx.sequence_mask(weights, valid_len, True, axis=1)
return super(MaskedSoftmaxCELoss, self).forward(pred, label, weights)
if False:
### from d2l.ai
loss = MaskedSoftmaxCELoss()
loss(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0]))
### from d2l.ai
### prevents exploding gradients
def grad_clipping(model, theta):
"""Clip the gradient."""
if isinstance(model, gluon.Block):
params = [p.data() for p in model.collect_params().values()]
else:
params = model.params
norm = math.sqrt(sum((p.grad ** 2).sum() for p in params))
if norm > theta:
for param in params:
param.grad[:] *= theta / norm
### from d2l.ai
### accumulate results in one array, auxiliary function
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
### from d2l.ai
def train_s2s_ch9(model, data_iter, lr, num_epochs, device):
model.initialize(init.Xavier(), force_reinit=True, ctx=device)
trainer = gluon.Trainer(model.collect_params(),
'adam', {'learning_rate': lr})
loss = MaskedSoftmaxCELoss()
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], ylim=[0, 1.00])
for epoch in range(1, num_epochs + 1):
timer = d2l.Timer()
metric = d2l.Accumulator(2) # loss_sum, num_tokens
### use data_iter from load_data_nmt to get X and Y, which hold
### the source and target
### sentence representations, plus X_vlen and Y_vlen: the valid lengths of
### the sentences
for batch in data_iter:
X, X_vlen, Y, Y_vlen = [x.as_in_ctx(device) for x in batch]
Y_input, Y_label, Y_vlen = Y[:, :-1], Y[:, 1:], Y_vlen-1
with autograd.record():
Y_hat, _ = model(X, Y_input, X_vlen, Y_vlen)
l = loss(Y_hat, Y_label, Y_vlen)
l.backward()
grad_clipping(model, 1)
num_tokens = Y_vlen.sum()
trainer.step(num_tokens)
metric.add(l.sum(), num_tokens)
if epoch % 10 == 0:
animator.add(epoch, (metric[0]/metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
f'tokens/sec on {str(device)}')
```
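The rescaling done by `grad_clipping` above can be sketched without any framework: compute the global L2 norm over all parameters and scale everything down when it exceeds the threshold (the gradient values here are made up):

```python
import math

def clip_by_global_norm(grads, theta):
    # grads: list of per-parameter gradient lists;
    # rescale all of them if the global L2 norm exceeds theta
    norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    if norm > theta:
        scale = theta / norm
        grads = [[g * scale for g in grad] for grad in grads]
    return grads, norm

clipped, norm = clip_by_global_norm([[3.0, 4.0], [0.0]], theta=1.0)
```

A (3, 4) gradient has global norm 5, so with theta=1 it is scaled to (0.6, 0.8).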
### Reading and Processing the Text
```
### from d2l.ai
def download_extract(name, folder=None):
"""Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == '.zip':
fp = zipfile.ZipFile(fname, 'r')
elif ext in ('.tar', '.gz'):
fp = tarfile.open(fname, 'r')
else:
assert False, 'Only zip/tar files can be extracted.'
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
```
""" ... a dataset that contains a set of English sentences with the corresponding French translations. As can be seen that each line contains an English sentence with its French translation, which are separated by a TAB.""" (citation from d2l.ai)
```
### d2l.ai
### the data for the translation are prepared by the d2l.ai project (book)
d2l.DATA_HUB['fra-eng'] = (d2l.DATA_URL + 'fra-eng.zip',
'94646ad1522d915e7b0f9296181140edcf86a4f5')
def read_data_nmt():
data_dir = d2l.download_extract('fra-eng')
with open(os.path.join(data_dir, 'fra.txt'), 'r') as f:
return f.read()
raw_text = read_data_nmt()
print(raw_text[0:106])
### from d2l.ai
def preprocess_nmt(text):
def no_space(char, prev_char):
return char in set(',.!') and prev_char != ' '
text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
out = [' ' + char if i > 0 and no_space(char, text[i-1]) else char
for i, char in enumerate(text)]
return ''.join(out)
### from d2l.ai
text = preprocess_nmt(raw_text)
print(text[0:95])
### from d2l.ai
def tokenize_nmt(text, num_examples=None):
source, target = [], []
for i, line in enumerate(text.split('\n')):
if num_examples and i > num_examples:
break
parts = line.split('\t')
if len(parts) == 2:
source.append(parts[0].split(' '))
target.append(parts[1].split(' '))
return source, target
### from d2l.ai
source, target = tokenize_nmt(text)
source[0:3], target[0:3]
```
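The two preprocessing steps above (insert a space before sentence punctuation, then split each line on the tab and on spaces) can be exercised on a made-up line; this sketch omits the non-breaking-space replacements of the original:

```python
def preprocess(text):
    # lowercase and put a space before , . ! so they become tokens
    def no_space(char, prev_char):
        return char in set(',.!') and prev_char != ' '
    text = text.lower()
    return ''.join(' ' + c if i > 0 and no_space(c, text[i - 1]) else c
                   for i, c in enumerate(text))

line = preprocess('Go.\tVa !')          # made-up English/French pair
eng, fra = line.split('\t')             # TAB separates the two languages
eng_tokens, fra_tokens = eng.split(' '), fra.split(' ')
```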
#### Histogram of the number of tokens per sentence
Five-token sentences are the most common; the number of tokens is
usually below 10-15.
```
### from d2l.ai
d2l.set_figsize()
d2l.plt.hist([[len(l) for l in source], [len(l) for l in target]],
label=['source', 'target'])
d2l.plt.legend(loc='upper right');
```
### Vocabulary
```
### from d2l.ai
def count_corpus(tokens):
"""Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
### from d2l.ai
class Vocab:
"""Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[0])
self.token_freqs.sort(key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ['<unk>'] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs
if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
### from d2l.ai
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
len(src_vocab)
```
### Loading the dataset
```
### from d2l.ai
def truncate_pad(line, num_steps, padding_token):
if len(line) > num_steps:
return line[:num_steps] # Trim
return line + [padding_token] * (num_steps - len(line)) # Pad
### the <pad> token is represented by the number 1 in the Vocab
### from d2l.ai
truncate_pad(src_vocab[source[0]], 10, src_vocab['<pad>'])
### from d2l.ai
def build_array(lines, vocab, num_steps, is_source):
lines = [vocab[l] for l in lines]
if not is_source:
lines = [[vocab['<bos>']] + l + [vocab['<eos>']] for l in lines]
array = np.array([truncate_pad(
l, num_steps, vocab['<pad>']) for l in lines])
valid_len = (array != vocab['<pad>']).sum(axis=1)
return array, valid_len
### from d2l.ai
def load_array(data_arrays, batch_size, is_train=True):
"""Construct a Gluon data iterator."""
dataset = gluon.data.ArrayDataset(*data_arrays)
return gluon.data.DataLoader(dataset, batch_size, shuffle=is_train)
### from d2l.ai
### quite important function: constructs the dataset for training (data_iter)
### from the original data
def load_data_nmt(batch_size, num_steps, num_examples=76800):
text = preprocess_nmt(read_data_nmt())
source, target = tokenize_nmt(text, num_examples)
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
tgt_vocab = Vocab(target, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
src_array, src_valid_len = build_array(
source, src_vocab, num_steps, True)
tgt_array, tgt_valid_len = build_array(
target, tgt_vocab, num_steps, False)
data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)
data_iter = load_array(data_arrays, batch_size)
return src_vocab, tgt_vocab, data_iter
### from d2l.ai
def try_gpu(i=0):
"""Return gpu(i) if exists, otherwise return cpu()."""
return npx.gpu(i) if npx.num_gpus() >= i + 1 else npx.cpu()
```
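`truncate_pad` and the valid-length computation in `build_array` can be checked standalone in pure Python; the token ids here are made up, with 1 standing in for `<pad>` as noted above:

```python
def truncate_pad(line, num_steps, padding_token):
    # force every sequence to exactly num_steps tokens
    if len(line) > num_steps:
        return line[:num_steps]                               # trim
    return line + [padding_token] * (num_steps - len(line))   # pad

PAD = 1                                   # assumed id of <pad>
padded = truncate_pad([7, 8, 9], num_steps=5, padding_token=PAD)
# valid length = number of non-padding tokens, as in build_array
valid_len = sum(tok != PAD for tok in padded)
```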
### Model: training and prediction
```
### the code from d2l.ai
### estimate the execution time for the cell in seconds
start = time.time()
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.0, 128, 10
lr, num_epochs, device = 0.001, 500, try_gpu()
ffn_num_hiddens, num_heads = 64, 4 ### num_hiddens is to be a multiple of num_heads !!
src_vocab, tgt_vocab, train_iter = load_data_nmt(batch_size, num_steps,76800)
encoder = TransformerEncoder(
len(src_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
dropout)
decoder = TransformerDecoder(
len(tgt_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
dropout)
model = EncoderDecoder(encoder, decoder)
train_s2s_ch9(model, train_iter, lr, num_epochs, device)
### estimate the execution time for the cell
end = time.time()
print(end - start)
### from d2l.ai
def predict_s2s_ch9(model, src_sentence, src_vocab, tgt_vocab, num_steps,
device):
src_tokens = src_vocab[src_sentence.lower().split(' ')]
enc_valid_len = np.array([len(src_tokens)], ctx=device)
src_tokens = truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
enc_X = np.array(src_tokens, ctx=device)
# Add the batch size dimension
enc_outputs = model.encoder(np.expand_dims(enc_X, axis=0),
enc_valid_len)
dec_state = model.decoder.init_state(enc_outputs, enc_valid_len)
dec_X = np.expand_dims(np.array([tgt_vocab['<bos>']], ctx=device), axis=0)
predict_tokens = []
for _ in range(num_steps):
Y, dec_state = model.decoder(dec_X, dec_state)
# The token with highest score is used as the next time step input
dec_X = Y.argmax(axis=2)
py = dec_X.squeeze(axis=0).astype('int32').item()
if py == tgt_vocab['<eos>']:
break
predict_tokens.append(py)
return ' '.join(tgt_vocab.to_tokens(predict_tokens))
for sentence in ['Go .', 'Wow !', "I'm OK .", 'I won !',
'Let it be !', 'How are you ?', 'How old are you ?',
'Cats are cats, dogs are dogs .', 'My friend lives in US .',
'He is fifty nine years old .', 'I like music and science .',
'I love you .', 'The dog is chasing the cat .',
'Somewhere on the earth .', 'Do not worry !',
'Sit down, please !', 'Not at all !', 'It is very very strange .',
'Take it into account .', 'The dark side of the moon .',
'Come on !', 'We are the champions, my friends .']:
print(sentence + ' => ' + predict_s2s_ch9(
model, sentence, src_vocab, tgt_vocab, num_steps, device))
```
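The `predict_s2s_ch9` loop above is greedy decoding: at every step the highest-scoring token is fed back in as the next input, until `<eos>` appears or the step budget runs out. The core of that loop can be sketched framework-free; here `step` is a toy stand-in for the decoder call, not d2l.ai code:

```python
import numpy as np

def greedy_decode(step, bos_id, eos_id, max_steps):
    """Feed the argmax token back in until <eos> or max_steps is hit."""
    token = bos_id
    out = []
    for _ in range(max_steps):
        logits = step(token)           # shape: (vocab_size,)
        token = int(np.argmax(logits))
        if token == eos_id:
            break
        out.append(token)
    return out

# Toy "decoder" over a 5-token vocab: always scores token (t + 1) % 5 highest
step = lambda t: np.eye(5)[(t + 1) % 5]
print(greedy_decode(step, bos_id=0, eos_id=3, max_steps=10))  # [1, 2]
```

Greedy decoding is cheap but can miss globally better sequences; beam search keeps several candidate prefixes per step at extra cost.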
| github_jupyter |
# Scraping Comic Book Covers
**Goal**: Scrape comic covers so we can use them as visual touchstones for users in the app.
### Libraries
```
import psycopg2 as psql # PostgreSQL DBs
from sqlalchemy import create_engine # SQL helper
import pandas as pd
import requests
import random
import time
import os
import sys
# Selenium
from selenium.webdriver import Firefox
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = False
# Data storage
sys.path.append("..")
!pip install boto3
# Custom
import data_fcns as dfc
import keys as keys # Custom keys lib
import comic_scraper as cs
```
### Initialize Browser
```
driver_exe_path = os.path.join(
os.getcwd(), 'drivers', 'geckodriver-windows.exe')
```
```
driver_exe_path = os.path.join(
    os.getcwd(), 'drivers', 'geckodriver')
```
```
driver_exe_path
```
```
ls drivers/
```
```
browser = Firefox(options=options, executable_path=driver_exe_path)
url = "http://www.comicbookdb.com/"
browser.get(url)
```
### Make list of Titles!
Get the list of titles whose covers we will scrape.
```
# Define path to secret
secret_path_aws = os.path.join(os.environ['HOME'], '.secret',
'aws_ps_flatiron.json')
secret_path_aws
aws_keys = keys.get_keys(secret_path_aws)
user = aws_keys['user']
ps = aws_keys['password']
host = aws_keys['host']
db = aws_keys['db_name']
aws_ps_engine = ('postgresql://' + user + ':' + ps + '@' + host + '/' + db)
# Setup PSQL connection
conn = psql.connect(
database=db,
user=user,
password=ps,
host=host,
port='5432'
)
# Instantiate cursor
cur = conn.cursor()
# Pull all records from comic_trans.
query = """
SELECT * from comic_trans;
"""
# Execute the query
cur.execute(query)
# Check results
temp_df = pd.DataFrame(cur.fetchall())
temp_df.columns = [col.name for col in cur.description]
temp_df.head(3)
temp_df['title'] = (temp_df['title_and_num'].apply(dfc.cut_issue_num))
temp_df.head()
temp_df['title'] = (temp_df['title'].apply(lambda x: x.replace('&', 'and'))
.apply(lambda x: x.replace('?', ''))
.apply(lambda x: x.replace('/', ' '))
)
```
### We need to track the titles that need scraping.
```
titles = list(temp_df['title'].unique())
ctr = ( 77 + 45 + 1318 + 1705 + 3 + 284 + 372 + 104 + 89 + 646 + 101 + 39 +
33 + 78 + 352 + 400 + 649
)
ctr
titles_test = titles[ctr:]
titles_test
cs.scrape_series_covers(browser, titles_test)
titles_test
test_title = 'Vampironica'
search_title(browser, test_title)
click_first_link(browser, test_title, True)
go_cover_gallery(browser)
click_first_image(browser)
click_cover_image(browser)
"""
Find the cover image and click it!"""
cover_img_path = ('/html/body/table/tbody/tr[2]/td[3]/table/tbody/tr/' +
'td/table[1]/tbody/tr[1]/td[1]/a[1]/img')
cover_img = browser.find_element_by_xpath(cover_img_path)
cover_img.click()
# cover_img.click()
url = cover_img.get_attribute('src')
cover_img.get_attribute
print(url)
cover_box_path = '/html/body/table/tbody/tr[2]/td[3]/table/tbody/tr/td/table[1]/tbody/tr[1]/td[1]/a[1]'
cover_box = browser.find_element_by_xpath(cover_box_path)
url = cover_box.get_attribute('href')
save_large_image(browser, test_title)
```
### Update the code to scrape the large images.
```
def scrape_series_covers(browser, titles):
"""Use Selenium to scrape images for comic book titles"""
start_time = time.time()
for idx, title in enumerate(titles):
# Search for the title
search_title(browser, title)
if not no_results_found(browser):
# Once on search results, just select first issue of results
click_first_link(browser, title, True)
# Go to the cover gallery of issue page
go_cover_gallery(browser)
# Once in cover gallery, just scrape the first image
try:
# get_first_image(browser, title)
click_first_image(browser)
click_cover_image(browser)
save_large_image(browser, title)
print("Scraped {}.{}!".format(idx, title))
except NoSuchElementException:
print("{}.{} was skipped. No covers were found."
.format(idx, title))
# Go back to homepage so can do it again!
# go_back_home_comicbookdb(browser)
else:
print("{}.{} was skipped. No title matched.".format(idx, title))
# Wait random time
time.sleep(2 + random.random()*5)
print('Total Runtime: {:.2f} seconds'.format(time.time() - start_time))
# print("All done!")
def no_results_found(browser):
    """Return True if the search results page says 'No results found.'"""
xpath = '/html/body/table/tbody/tr[2]/td[3]'
result = browser.find_element_by_xpath(xpath)
return result.text == 'No results found.'
def search_title(browser, title):
"""
    Given a Selenium browser object and a comic title,
    enter the title into the search box and submit the search.
"""
# Find search box and enter search text
text_area = browser.find_element_by_id('form_search')
text_area.send_keys(Keys.CONTROL, "a")
text_area.send_keys(title)
# Find Search type dropdown and make sure it says 'Title'
search_type = Select(browser.find_element_by_name('form_searchtype'))
search_type.select_by_value('Title')
# Push the search button!
sb_xpath = ('/html/body/table/tbody/tr[2]/td[1]' +
'/table/tbody/tr[4]/td/form/input[2]')
search_button = browser.find_element_by_xpath(sb_xpath)
search_button.click()
def search_site(browser, title):
"""
    Given a Selenium browser object and a comic title, enter the title
    into the search box and submit the search (without touching the
    search-type dropdown).
"""
# Find search box and enter search text
text_area = browser.find_element_by_id('form_search')
text_area.send_keys(Keys.CONTROL, "a")
text_area.send_keys(title)
# Push the search button!
sb_xpath = ('/html/body/table/tbody/tr[2]/td[1]' +
'/table/tbody/tr[4]/td/form/input[2]')
search_button = browser.find_element_by_xpath(sb_xpath)
search_button.click()
def click_first_link(browser, title, title_search_flag):
"""
Find first issue link and click it
"""
# Find first issue link in search results
if title_search_flag:
x_path = '/html/body/table/tbody/tr[2]/td[3]/a[1]'
else:
x_path = '/html/body/table/tbody/tr[2]/td[3]/table/tbody/tr/td/a[1]'
first_issue_link = browser.find_element_by_xpath(x_path)
# Click
first_issue_link.click()
def go_cover_gallery(browser):
"""
Click on Cover Gallery button
"""
    # Match the gallery button via its image; this relative XPath is more
    # robust than an absolute path
    gb_xpath = '//a[img/@src="graphics/button_title_covergallery.gif"]'
gallery_btn = browser.find_element_by_xpath(gb_xpath)
gallery_btn.click()
def click_first_image(browser):
"""
Find first image in cover gallery and click it!
"""
# Find first image
first_img_path = ('/html/body/table/tbody/tr[2]/td[3]/' +
'table/tbody/tr[1]/td[1]/a/img')
first_img = browser.find_element_by_xpath(first_img_path)
first_img.click()
def click_cover_image(browser):
    """
    Find the cover image and click it!
    """
cover_img_path = ('/html/body/table/tbody/tr[2]/td[3]/table/tbody/tr/' +
'td/table[1]/tbody/tr[1]/td[1]/a[1]/img')
cover_img = browser.find_element_by_xpath(cover_img_path)
cover_img.click()
# url = cover_img.get
def save_large_image(browser, title):
"""
Assuming you are on page with large cover image, scrape it
"""
# cover_img_path = ('/html/body/img')
# cover_img = browser.find_element_by_xpath(cover_img_path)
cover_box_path = ('/html/body/table/tbody/tr[2]/td[3]/table/tbody/tr/' +
'td/table[1]/tbody/tr[1]/td[1]/a[1]')
cover_box = browser.find_element_by_xpath(cover_box_path)
url = cover_box.get_attribute('href')
# Construct path and file name
filename = ('./raw_data/covers_large/' + title.replace(' ', '_').lower()
+ '.jpg'
)
# Save the file in the file/path
scrape_image_url(url, filename)
def scrape_image_url(url, filename):
"""Save an image element as filename"""
response = requests.get(url)
img_data = response.content
with open(filename, 'wb') as f:
f.write(img_data)
def get_first_image(browser, title):
"""
Find first image in cover gallery and scrape it!
"""
# Find first image
first_img_path = ('/html/body/table/tbody/tr[2]/td[3]/' +
'table/tbody/tr[1]/td[1]/a/img')
first_img = browser.find_element_by_xpath(first_img_path)
# Construct path and file name
filename = ('./raw_data/covers/' + title.replace(' ', '_').lower()
+ '.jpg'
)
# Save the file in the file/path
scrape_image(first_img, filename)
return
def scrape_image(img, filename):
"""Save an image element as filename"""
response = requests.get(img.get_attribute('src'))
img_data = response.content
with open(filename, 'wb') as f:
f.write(img_data)
def go_back_home_comicbookdb(browser):
"""Go directly back to comicbookdb.com home via logolink"""
# Find image link to go back home
home_pg_xpath = ('/html/body/table/tbody/tr[1]/td/table/tbody' +
'/tr[1]/td/table/tbody/tr/td[1]/a/img')
logo_btn = browser.find_element_by_xpath(home_pg_xpath)
# Click!
logo_btn.click()
sample_titles = titles[:300]
sample_titles
```
Get the title list, sorted by quantity sold
```
qtys = temp_df.groupby(['title'], as_index=False).qty_sold.sum(
).sort_values(by=['qty_sold'], ascending=False)
qtys.head()
```
#### ...And scraping periodically fails, so the 'stopping' point has been tracked manually.
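Since the stopping point is tracked by hand (summing counts into a manual offset), one alternative is persisting it to a small checkpoint file so a restart resumes automatically. A minimal sketch, with a hypothetical filename and helper names (these are not part of `comic_scraper`):

```python
import os

CHECKPOINT = 'scrape_progress.txt'  # hypothetical location

def load_progress(path=CHECKPOINT):
    """Return the index of the next title to scrape (0 if no checkpoint)."""
    if os.path.exists(path):
        with open(path) as f:
            return int(f.read().strip() or 0)
    return 0

def save_progress(idx, path=CHECKPOINT):
    """Record that titles[:idx] are done."""
    with open(path, 'w') as f:
        f.write(str(idx))

# Usage inside a scraping loop:
# start = load_progress()
# for i, title in enumerate(titles[start:], start=start):
#     scrape_one(title)      # whatever does the actual work
#     save_progress(i + 1)
```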
```
done_titles = titles[:300]
titles_needed_df = qtys.loc[~qtys['title'].isin(done_titles)]
titles_needed_df.shape
titles_need_list = list(titles_needed_df.title.unique())
# 367+246+151
827+151+376+524+5+47+1662+3+162+155+15+295+927+143+60
new_start = 5352 # 1932
titles_searching = titles_need_list[new_start:]
titles_searching
```
## It's the Scraping.
```
# for title in sample_titles:
# # print(title)
cs.scrape_series_covers(browser, titles_searching)
```
Script for plotting the density of pixel intensities
```
#import packages
import pandas as pd
import struct
import xml.etree.ElementTree as ET
from copy import copy
from pathlib import Path
import numpy as np
import os
from glob import glob
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import chain
## display options
pd.set_option('display.max_columns', 500)
pd.set_option("display.max_colwidth",50)
```
### Load data
```
#set path for input and output
path = '.../from_ilastik/' #input
outpath = '.../Halo_intensity/merged_df_3d_background.csv' #output
# select search terms to identify the samples, here: 19 = 'dpy-21 (e428)' and halo = 'wild-type'
cnds = [ '19', 'halo']
# find csvs that match the search term
csv_paths = [glob(os.path.join(path,f'2*{c}*hist2.csv')) for c in cnds]
#overview of found data
len(csv_paths), len(csv_paths[0]), len(csv_paths[1])
#one example file
pd.read_csv(csv_paths[0][0]).tail(5)
```
### Get all pixel values in all csvs
(per condition) into one list
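The `chain` comprehension below rebuilds the raw pixel values from each histogram's (value, count) rows. The same expansion can be written in one vectorized call with `numpy.repeat`. A sketch, assuming the csv columns are named `index` and `count` as in the example file shown earlier:

```python
import numpy as np
import pandas as pd

def expand_histogram(df):
    """Expand an (index=value, count) histogram into raw pixel values."""
    return np.repeat(df["index"].to_numpy(), df["count"].to_numpy())

# e.g. pixel value 3 occurred twice, value 7 once:
hist = pd.DataFrame({"index": [3, 7], "count": [2, 1]})
print(expand_histogram(hist).tolist())  # [3, 3, 7]
```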
```
dfs_temp = [pd.concat([pd.read_csv(p) for p in c_paths]) for c_paths in csv_paths]
len(dfs_temp), dfs_temp[0].shape, dfs_temp[1].shape
dfs_temp[0]
vals_all_per_cnd = [list(chain(*[[v]*df["count"].tolist()[i]
for i,v in enumerate(df["index"].tolist())])) for df in dfs_temp ]
len(vals_all_per_cnd)
vals_all_per_cnd
df_pix_vals = pd.DataFrame({f'{cnds[i]}_n{len(csv_paths[i])}':pd.Series(vals_all_per_cnd[i]) for i in range(len(cnds))})
df_pix_vals.head(3)
df_pix_vals.tail(10)
max_pix_vals = df_pix_vals.max()
max_pix_vals
val_count_per_series = []
for c in df_pix_vals.columns:
    ## Get the count after dividing by the number of images per condition.
    ## The result is a list of Series, one per condition:
    ## in each Series the pixel value is the index and the final count it
    ## should have is the value.
val_count_per_series.append(df_pix_vals[c].value_counts().divide(int(c.split("_n")[1])))
val_count_per_series
## Make new dataframe with the mean count of pixel values:
df_mean_count_pixval = pd.DataFrame({})
for i,c in enumerate(df_pix_vals.columns):
series = val_count_per_series[i].astype(int).copy()
series = series[series>0]
#print(list(chain(*[[idx]*count for idx,count in series.items()])))
df_mean_count_pixval[c] = pd.Series(list(chain(*[[idx]*count for idx,count in series.items()])))
df_mean_count_pixval.head(10)
#select the colors for plotting
color_dict = {
'19_n31': "#52079C",
'halo_n17': "#8AB17D",
}
#save to outpath
df_mean_count_pixval.to_csv(outpath, index=False)
# plot the pixel intensities
fig, ax = plt.subplots(figsize=(10, 6))
#removing top and right borders
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
for col in df_mean_count_pixval.columns:
ax = sns.distplot(df_mean_count_pixval[col],
hist=False,
label= col,
color=color_dict[col],
kde_kws = {'alpha':0.6, 'shade': True} )
plt.legend(prop={'size': 25}, frameon=False, labels=['dpy-21 (e428)', 'wild-type'])
plt.xticks(fontsize=25)
plt.yticks(fontsize=25)
#plt.yscale('log')
plt.title('Density plot of pixel intensities', fontsize=15)
plt.xlabel('Binned pixel intensity', fontsize=25)
plt.ylabel('Density', fontsize=25)
#plt.xlim(-10,60)
plt.tight_layout()
plt.savefig('Pixel_intensity_values_for_DPY-27-halo.svg', dpi=100)
plt.show()
```
# Loading Data
In this workbook we will go over how you can load the data for our exercises.
We will be using the data from the PHUSE Open Data Repository.
Follow the instructions in [README.md](../README.md) to get set up.
## Data processing in Python
There are a couple of key libraries we will use:
* [pandas](https://pandas.pydata.org/) - for processing data
* [matplotlib](https://matplotlib.org/) - for creating visual representations of the data
* [lxml](https://lxml.de) - processing the define.xml (or any other XML)
You will find that in the majority of cases someone will have written a module to do what you want to do; all you need to do is be able to find it, and if necessary validate it. Python Packages are published into the Python Package Index [PyPI](https://pypi.org) so you can search for a module using keywords, for example:
* [Bayesian Analysis](https://pypi.org/search/?q=bayesian)
* [Linear Regression](https://pypi.org/search/?q=linear+regression)
* [ODM](https://pypi.org/search/?q=cdisc+odm)
You can also create your own package repository or build packages from a git repository; this is a good way for a company to facilitate the building out of standard libraries for internal use or building out a validated Python module repository.
```
# import the libraries we are going to use
# Pandas data handling library
import pandas as pd
from pandas import DataFrame
# Typing allows you to be typesafe with Python
from typing import Optional
# urllib is the built-in library for working with the web
import urllib.request
# requests is a module for making HTTP requests
import requests
# lxml is a library for processing XML documents
from lxml import etree
from lxml.etree import _ElementTree
# define a prefix for where the files can be found
PREFIX = "https://github.com/phuse-org/phuse-scripts/raw/master/data/sdtm/cdiscpilot01/"
def check_link(url: str) -> bool:
"""
ensure that the URL exists
:param url: The target URL we will attempt to load
"""
# this will attempt to open the URL, and extract the response status code
# - status codes are a HTTP convention for responding to requests
# 200 - OK
    # 403 - Forbidden (not authorized)
# 404 - Not found
status_code = urllib.request.urlopen(url).getcode()
return status_code == 200
def load_cdiscpilot_dataset(domain_prefix: str) -> Optional[DataFrame]:
"""
load a CDISC Pilot Dataset from the GitHub site
:param domain_prefix: The two letter Domain prefix that is used to id the dataset
"""
# define the target for our read_sas directive
target = f"{PREFIX}{domain_prefix.lower()}.xpt"
# make sure that the URL exists first
if check_link(target):
# load in the dataset
dataset = pd.read_sas(target, encoding="utf-8")
# dataset = pd.read_sas(target)
return dataset
return None
def load_cdiscpilot_define() -> _ElementTree:
"""
load the define.xml for the CDISC Pilot project
"""
    # define the target URL for the define.xml
target = f"{PREFIX}define.xml"
# make sure that the URL exists first
if check_link(target):
# load in the file
page = requests.get(target)
tree = etree.fromstring(page.content)
return tree
return None
# Load in a dataset - DM
dm = load_cdiscpilot_dataset('DM')
# Take a look at a table
dm.head()
# Generate a Frequency Table for SEX
pd.crosstab(index=dm["SEX"], columns='count', colnames=["SEX"])
# a two-way frequency table (Age by Sex)
pd.crosstab(index=dm["AGE"], columns=dm["SEX"])
# Distribution of ages for gender
pd.crosstab(index=dm["AGE"], columns=dm["SEX"]).plot.bar()
# Generate age distributions
bins = [50, 55, 60, 65, 70, 75, 80, 85, 90]
labels = ["50-55", "55-60", "60-65", "65-70", "70-75", "75-80", "80-85", "85-90"]
dm["AGEBAND"] = pd.cut(dm['AGE'], bins=bins, labels=labels)
# Plot the data using bands
pd.crosstab(index=dm["AGEBAND"], columns=dm["SEX"]).plot.bar()
# Load the VS dataset
vs = load_cdiscpilot_dataset('VS')
# Details on the VS dataset
vs.shape
print(f"Dataset VS has {vs.shape[0]} records")
# Get the first ten rows
vs.loc[0:10]
# Generate a distribution for the values
tests = vs.groupby("VSTESTCD")["VSORRES"].sum()
print(tests)
# Weird right? We need to check the type of the column
vs.dtypes
# ok, that makes sense - an object is not numeric....
tests = vs.groupby("VSTESTCD")["VSSTRESN"].mean().reset_index()
print(tests)
# Lets join the DM dataset
labelled = vs.merge(dm, on="USUBJID")
labelled.head()
labelled_tests = labelled.groupby(["VSTESTCD","SEX", "AGEBAND"])["VSSTRESN"].mean().reset_index()
print(labelled_tests)
# now, let's look at the define
# the way we do this is to load the content from the URL, and then pass it off to the XML parsing library
odm = load_cdiscpilot_define()
# XML documents can be treated as a tree,
# * root item (root)
# * elements (branches)
# * attributes (leaves)
# In this case we have a root item that is a CDISC Operational Data Model (ODM)
# `tag` is the way of working out what type of element we have
print(odm.tag)
# we can look at the attributes using the .get method
print(odm.get("FileOID"))
print(odm.get("CreationDateTime"))
```
# Namespaces
XML documents use a schema document to define what elements/attributes are permissible (or required/expected). It is possible to extend a schema to incorporate extra elements/attributes; these attributes exist alongside the existing elements by having them under different namespaces
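As a small self-contained illustration (using the standard library's `xml.etree.ElementTree` rather than lxml, and a made-up two-element document), namespaced attributes are addressed with the `{uri}localname` form while unprefixed attributes stay bare:

```python
import xml.etree.ElementTree as ET

XML = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.2"
            xmlns:def="http://www.cdisc.org/ns/def/v1.0">
  <Study OID="ST.001">
    <MetaDataVersion OID="MDV.1" def:DefineVersion="1.0.0"/>
  </Study>
</ODM>"""

# prefix -> namespace URI map used by find/findall
ns = {"ODM": "http://www.cdisc.org/ns/odm/v1.2",
      "def": "http://www.cdisc.org/ns/def/v1.0"}

root = ET.fromstring(XML)
mdv = root.find(".//ODM:MetaDataVersion", ns)
print(mdv.get("OID"))                                              # MDV.1
print(mdv.get("{http://www.cdisc.org/ns/def/v1.0}DefineVersion"))  # 1.0.0
```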
```
# look at the namespaces
print(odm.nsmap)
# in this document the default namespace is ODM 1.2, with the define.xml extensions under the def namespace
# let's get the MetadataVersion element
nsmap = odm.nsmap
nsmap["ODM"] = odm.nsmap.get(None)
mdv = odm.find(".//ODM:MetaDataVersion", nsmap)
# let's take a look at the define attributes
define_ns = nsmap.get('def')
print(define_ns)
# get the define version
for attribute in ("DefineVersion", "StandardName", "StandardVersion"):
# attributes should be prefixed with the namespace
attr = f"{{{define_ns}}}{attribute}"
if mdv.get(attr):
print(f"{attribute} -> {mdv.get(attr)}")
# Remember the Standard Version here! We will come back to it
# you can scan over the different child elements using the findall method
for itemdef in mdv.findall("./ODM:ItemDef", namespaces=nsmap):
if itemdef.find("./ODM:CodeListRef", namespaces=nsmap) is not None:
codelistref = itemdef.find("./ODM:CodeListRef", namespaces=nsmap)
print(f"Item {itemdef.get('OID')} has CodeList: {codelistref.get('CodeListOID')}")
else:
print(f"Item: {itemdef.get('OID')}")
```
Loading XML is a very useful technique; this example is a simple load-and-navigate of the define data structure. I recommend checking out [odmlib](https://pypi.org/project/odmlib/), a library that makes processing and manipulating CDISC ODM documents much more straightforward (written by the venerable [Sam Hume](https://github.com/swhume))
# Summary
In this set we've gone over some elementary activities around accessing and loading data; we made brief expeditions into how data can be manipulated and visualised using pandas, and did some simple navigation of an XML document.
Next we're going to take a look at how accessing data over the web works.
```
import json
import os
import re
def open_file():
p = os.path.expanduser('~/cltk_data/originals/tlg/AUTHTAB.DIR')
with open(p, 'rb') as fo:
return fo.read()
file_bytes = open_file()
# From Diogenes; useful?
# my $regexp = qr!$prefix(\w\w\w\d)\s+([\x01-\x7f]*[a-zA-Z][^\x83\xff]*)!;
c1 = re.compile(b'\x83g')
body = c1.split(file_bytes)[1]
c2 = re.compile(b'\xff')
id_authors = [x for x in c2.split(body) if x]
def make_id_author_pairs():
comp = re.compile(b'\s')
for id_author_raw in id_authors:
id_author_split = comp.split(id_author_raw, maxsplit=1)
        if len(id_author_split) == 2:
_id, author = id_author_split[0], id_author_split[1]
# cleanup author name
comp2 = re.compile(b'&1|&')
author = id_author_split[1]
author = comp2.sub(b'', author)
comp3 = re.compile(b'\[2')
comp4 = re.compile(b'\]2')
author = comp3.sub(b'[', author)
author = comp4.sub(b']', author)
# normalize whitespaces
#comp5 = re.compile('\s+')
#author = comp5.sub(' ', author)
# cleanup odd bytecodes
comp7 = re.compile(b'\x80')
if comp7.findall(author):
author = comp7.sub(b', ', author)
# cleanup odd bytecodes
comp8 = re.compile(b'\x83e')
if comp8.findall(author):
author = comp8.sub(b'', author)
# transliterate beta code in author fields
# it's way easier to manually do these three
# Note that the converted bytes will now be str
comp6 = re.compile(b'\$1')
if comp6.findall(author):
if author == b'Dialexeis ($1*DISSOI\\ LO/GOI)':
author = 'Dialexeis (Δισσοὶ λόγοι)'
elif author == b'Dionysius $1*METAQE/MENOS Phil.':
author = 'Dionysius Μεταθέμενος Phil.'
elif author == b'Lexicon $1AI(MWDEI=N':
author = 'Lexicon αἱμωδεῖν'
# convert to str for final stuff
_id = _id.decode('utf_8')
if type(author) is bytes:
author = author.decode('utf_8')
if '+' in author:
author = author.replace('e+', 'ë')
author = author.replace('i+', 'ï')
yield (_id, author)
id_author_dict = {}
for k, v in make_id_author_pairs():
id_author_dict[k] = v
write_dir = os.path.expanduser('~/cltk/cltk/corpus/greek/tlg')
write_path = os.path.join(write_dir, 'id_author.json')
with open(write_path, 'w') as file_open:
json.dump(id_author_dict, file_open, sort_keys=True, indent=4, separators=(',', ': '))
```