**Reinforcement Learning with Q Learning**
* This notebook shows how to apply the classic Reinforcement Learning (RL) idea of Q learning
* In TD learning we estimated state values: V(s). In Q learning we estimate Q values: Q(s,a). Here we'll go over Q learning in the simple tabular case.
* A key concept in RL is exploration. We'll use epsilon greedy exploration, which is often used with Q learning.
Outline:
1. Define the GridWorld environment
2. Discuss Epsilon-Greedy Exploration
3. Estimate each Q value in the environment using Q learning
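The Q learning update rule that drives step 3 can be sketched in a few lines of numpy (a minimal sketch; `alpha` is the learning rate, `gamma` the discount factor, and the function name is ours, not part of the notebook):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.05, gamma=1.0):
    """One tabular Q learning step: move Q[s, a] toward the TD target."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy example: 2 states, 2 actions, one step with reward -1
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=-1.0, s_next=1, alpha=0.5)
print(Q[0, 1])  # -0.5
```

Note that the target uses the max over next-state actions, which is what makes Q learning off-policy.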
**GridWorld**
The GridWorld environment is a four-by-four grid. The agent starts at a random cell and can move up, left, right, or down. The episode ends when the agent reaches the upper-left or lower-right corner. Every action the agent takes receives a reward of -1 until it reaches one of those terminal corners.
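To make the state indexing in the environment code below concrete: states are numbered row by row, so moving down adds the row width (4), moving right adds 1, and the two terminal states are 0 and 15. A quick sketch:

```python
import numpy as np

grid = np.arange(16).reshape((4, 4))  # state numbering, row by row
s = 5                                 # row 1, column 1
print(s + 4)                   # 9 -- the state below (DOWN)
print(s + 1)                   # 6 -- the state to the right (RIGHT)
print(grid[0, 0], grid[3, 3])  # 0 15 -- the two terminal states
```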
```
#Environment from: https://github.com/dennybritz/reinforcement-learning/blob/cee9e78652f8ce98d6079282daf20680e5e17c6a/lib/envs/gridworld.py
#define the environment
import io
import numpy as np
import sys
from gym.envs.toy_text import discrete
import pprint
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
class GridworldEnv(discrete.DiscreteEnv):
"""
Grid World environment from Sutton's Reinforcement Learning book chapter 4.
You are an agent on an MxN grid and your goal is to reach the terminal
state at the top left or the bottom right corner.
For example, a 4x4 grid looks as follows:
T o o o
o x o o
o o o o
o o o T
x is your position and T are the two terminal states.
You can take actions in each direction (UP=0, RIGHT=1, DOWN=2, LEFT=3).
Actions going off the edge leave you in your current state.
You receive a reward of -1 at each step until you reach a terminal state.
"""
metadata = {'render.modes': ['human', 'ansi']}
def __init__(self, shape=[4,4]):
if not isinstance(shape, (list, tuple)) or not len(shape) == 2:
raise ValueError('shape argument must be a list/tuple of length 2')
self.shape = shape
nS = np.prod(shape)
nA = 4
MAX_Y = shape[0]
MAX_X = shape[1]
P = {}
grid = np.arange(nS).reshape(shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
# P[s][a] = (prob, next_state, reward, is_done)
P[s] = {a : [] for a in range(nA)}
is_done = lambda s: s == 0 or s == (nS - 1)
reward = 0.0 if is_done(s) else -1.0
#reward = 1.0 if is_done(s) else 0.0
# We're stuck in a terminal state
if is_done(s):
P[s][UP] = [(1.0, s, reward, True)]
P[s][RIGHT] = [(1.0, s, reward, True)]
P[s][DOWN] = [(1.0, s, reward, True)]
P[s][LEFT] = [(1.0, s, reward, True)]
# Not a terminal state
else:
ns_up = s if y == 0 else s - MAX_X
ns_right = s if x == (MAX_X - 1) else s + 1
ns_down = s if y == (MAX_Y - 1) else s + MAX_X
ns_left = s if x == 0 else s - 1
P[s][UP] = [(1.0, ns_up, reward, is_done(ns_up))]
P[s][RIGHT] = [(1.0, ns_right, reward, is_done(ns_right))]
P[s][DOWN] = [(1.0, ns_down, reward, is_done(ns_down))]
P[s][LEFT] = [(1.0, ns_left, reward, is_done(ns_left))]
it.iternext()
# Initial state distribution is uniform
isd = np.ones(nS) / nS
# We expose the model of the environment for educational purposes
# This should not be used in any model-free learning algorithm
self.P = P
super(GridworldEnv, self).__init__(nS, nA, P, isd)
def _render(self, mode='human', close=False):
""" Renders the current gridworld layout
For example, a 4x4 grid with the mode="human" looks like:
T o o o
o x o o
o o o o
o o o T
where x is your position and T are the two terminal states.
"""
if close:
return
outfile = io.StringIO() if mode == 'ansi' else sys.stdout
grid = np.arange(self.nS).reshape(self.shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
if self.s == s:
output = " x "
elif s == 0 or s == self.nS - 1:
output = " T "
else:
output = " o "
if x == 0:
output = output.lstrip()
if x == self.shape[1] - 1:
output = output.rstrip()
outfile.write(output)
if x == self.shape[1] - 1:
outfile.write("\n")
it.iternext()
pp = pprint.PrettyPrinter(indent=2)
```
**An Introduction to Exploration: Epsilon-Greedy Exploration**
Exploration is a key concept in RL. In order to find the best policies, an agent needs to explore the environment. By exploring, the agent can experience new states and rewards. In the TD learning notebook, the agent explored GridWorld by taking a random action at every step. While purely random exploration can work in some environments, the downside is that the agent can spend too much time exploring bad states or states that have already been explored fully, and not enough time exploring promising states. A simple--yet surprisingly effective--approach to exploration is Epsilon-Greedy exploration. An epsilon fraction of the time, the agent chooses a random action. The remaining (1-epsilon) fraction of the time, the agent chooses the best estimated action, aka the *greedy action*. Epsilon can be a fixed value between 0 and 1, or it can start at a high value and gradually decay over time (i.e., start at 0.99 and decay to 0.01). In this notebook we will use a fixed epsilon value of 0.1. Below is a simple example of epsilon-greedy exploration.
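A decay schedule like the one mentioned above can be sketched as follows (a sketch only; this notebook keeps epsilon fixed at 0.1, and the function name and decay rate here are our choices):

```python
def decayed_epsilon(step, start=0.99, end=0.01, decay=0.995):
    """Exponentially decay epsilon from `start` toward the floor `end`."""
    return max(end, start * decay ** step)

print(decayed_epsilon(0))     # 0.99 at the first step
print(decayed_epsilon(1000))  # 0.01 -- clamped at the floor
```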
```
#declare the environment
env = GridworldEnv()
#reset the environment and get the agent's current position (observation)
current_state = env.reset()
env._render()
print("")
action_dict = {0:"UP",1:"RIGHT", 2:"DOWN",3:"LEFT"}
greedy_dict = {0:3,1:3,2:3,3:3,
4:0,5:0,6:0,7:0,
8:2,9:2,10:2,11:2,
12:1,13:1,14:1,15:1}
epsilon = 0.1
for i in range(10):
#choose random action epsilon amount of the time
if np.random.rand() < epsilon:
action = env.action_space.sample()
action_type = "random"
else:
#Choose a greedy action. We will learn greedy actions with Q learning in the following cells.
action = greedy_dict[current_state]
action_type = "greedy"
current_state,reward,done,info = env.step(action)
print("Agent took {} action {} and is now in state {} ".format(action_type, action_dict[action], current_state))
env._render()
print("")
if done:
print("Agent reached end of episode, resetting the env")
print(env.reset())
print("")
env._render()
print("")
```
**The RL Training Loop**
In the next cell we define the training loop, and in the cell after that we run it. The goal is to estimate the Q value of each state-action combination using Q learning. q_value_array holds the estimated values. After each step the agent takes in the env, we update q_value_array with the Q learning update formula.
```
def q_learning_q_value_estimate(env,episodes=1000,alpha=0.05,discount_factor=1.0,epsilon=0.1):
state_size = env.nS
action_size = env.nA
    #initialize the estimated Q values to zero
q_value_array = np.zeros((state_size, action_size))
#reset the env
current_state = env.reset()
#env._render()
    #run through each episode taking an epsilon-greedy action at each step
    #update the estimated Q values after each action
current_episode = 0
while current_episode < episodes:
#choose action based on epsilon-greedy policy
if np.random.rand() < epsilon:
eg_action = env.action_space.sample()
else:
#Choose a greedy action.
eg_action = np.argmax(q_value_array[current_state])
#take a step using epsilon-greedy action
next_state, rew, done, info = env.step(eg_action)
#Update Q values using Q learning update method
max_q_value = np.max(q_value_array[next_state])
q_value_array[current_state,eg_action] = q_value_array[current_state,eg_action] + \
alpha * (rew + discount_factor*max_q_value - q_value_array[current_state,eg_action])
        #if the episode is done, reset the env; if not, the next state becomes the current state and the loop repeats
if done:
current_state = env.reset()
current_episode += 1
else:
current_state = next_state
return q_value_array
#run episodes with Q learning and get the Q value estimates
q_values = q_learning_q_value_estimate(env,episodes=10000,alpha=0.01)
print("All Q Value Estimates:")
print(np.round(q_values.reshape((16,4)),1))
print("each row is a state, each column is an action")
print("")
greedy_q_value_estimates = np.max(q_values,axis=1)
print("Greedy Q Value Estimates:")
print(np.round(greedy_q_value_estimates.reshape(env.shape),1))
print("estimate of the optimal State value at each state")
print("")
```
The first output shows the estimated value of each action in each state. For example, row 4, column 4 is the value of taking the LEFT action from the upper-right grid cell. In the second output, we take the best action in each of the 16 states and show the agent's estimate of the state value assuming it always acts greedily.
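A natural follow-up is to read the greedy policy out of the learned table with an argmax over the action axis. A sketch on a toy table (with the real `q_values` array from above you would pass it directly):

```python
import numpy as np

action_dict = {0: "UP", 1: "RIGHT", 2: "DOWN", 3: "LEFT"}

def greedy_policy(q_values):
    """Greedy action index per state (ties resolve to the lowest index)."""
    return np.argmax(q_values, axis=1)

toy_q = np.array([[-3.0, -1.0, -2.0, -4.0],
                  [-1.0, -2.0, -3.0, -4.0]])
print([action_dict[a] for a in greedy_policy(toy_q)])  # ['RIGHT', 'UP']
```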
This notebook looks at the expression patterns of the different secretome expression clusters shown in Figure 4 of the paper. It aims to determine the following.
* How many genes of cluster X are alleles, non-allelic inter-haplome paralogs (non-allelic protein 'orthologs'), and how many are singletons
* How many alleles of cluster X in haplotype Y are expressed in cluster Z in haplotype Y and vice versa
The input data are the secretome expression cluster data provided by Jana Speerschneider and the allele analysis done by Benjamin Schwessinger.
This notebook was designed only for analyzing the Pst-104E genome. No guarantees it works in any other situation. It may contain spelling errors due to the lack of autocorrection.
```
%matplotlib inline
import pandas as pd
import os
import re
from Bio import SeqIO
from Bio import SeqUtils
import pysam
from Bio.SeqRecord import SeqRecord
from pybedtools import BedTool
import numpy as np
import pybedtools
import time
import matplotlib.pyplot as plt
import sys
import subprocess
import shutil
from Bio.Seq import Seq
import pysam
from Bio import SearchIO
import json
import glob
import scipy.stats as stats
import statsmodels as sms
import statsmodels.sandbox.stats.multicomp
import distance
import seaborn as sns
#Define some PATH
BASE_AA_PATH = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_104E_v12'
POST_ALLELE_ANALYSIS_PATH = os.path.join(BASE_AA_PATH, 'post_allele_analysis', \
'proteinortho_graph516_QC_Qcov80_PctID70_evalue01')
OUT_PATH = os.path.join(POST_ALLELE_ANALYSIS_PATH , \
'secretome_expression_clusters')
CLUSTER_PATH_P = os.path.join(BASE_AA_PATH, 'Pst_104E_genome',\
'gene_expression', 'Pst104_p_SecretomeClustering' )
CLUSTER_PATH_H = os.path.join(BASE_AA_PATH, 'Pst_104E_genome',\
'gene_expression', 'Pst104_h_SecretomeClustering' )
#some list to order the output later on
haplotig_cluster_order = ['Cluster9', 'Cluster10', 'Cluster11', 'Cluster12', 'Cluster13', 'Cluster14',\
'Cluster15', 'Cluster16']
primary_cluster_order = ['Cluster1', 'Cluster2', 'Cluster3', 'Cluster4', 'Cluster5', 'Cluster6',\
'Cluster7', 'Cluster8']
#get the different classes of genes e.g. alleles, non-allelic protein 'orthologs', \
#loose_singletons (still including unphased genes), singletons
allele_fn = 'Pst_104E_v12_p_ctg.all.alleles'
loose_singletons_fn = 'Pst_104E_v12_ph_ctg.loose_singletons'
singletons_fn = 'Pst_104E_v12_ph_ctg.singletons'
nap_fn = 'Pst_104E_v12_ph_ctg.no_alleles_orthologs'
alleles_df = pd.read_csv(os.path.join(POST_ALLELE_ANALYSIS_PATH, allele_fn), header=None,\
sep='\t', names=['p_genes', 'h_genes'])
loose_sing_array = pd.read_csv(os.path.join(POST_ALLELE_ANALYSIS_PATH, loose_singletons_fn),
header=None, sep='\t')[0]
sing_array = pd.read_csv(os.path.join(POST_ALLELE_ANALYSIS_PATH, singletons_fn),
header=None, sep='\t')[0]
nap_array = pd.read_csv(os.path.join(POST_ALLELE_ANALYSIS_PATH, nap_fn),
header=None, sep='\t')[0]
#now get the different gene clusters in a df with the following set up
#columns = gene, cluster, allele status, allele_ID
primary_df = pd.DataFrame(columns=['gene', 'cluster_ID', 'allele_state', 'allele_ID'])
haplotig_df = pd.DataFrame(columns=['gene', 'cluster_ID', 'allele_state', 'allele_ID'])
#get the genes and the cluster IDs (from file names) in lists of equal length to be used as the gene and
#cluster_ID columns
_gene_list = []
_cluster_list = []
for file in [x for x in os.listdir(CLUSTER_PATH_P) if x.endswith('_DEs.fasta')]:
for seq in SeqIO.parse(open(os.path.join(CLUSTER_PATH_P,file), 'r'), 'fasta'):
_gene_list.append(seq.id)
_cluster_list.append(file.split('_')[0])
primary_df.gene = _gene_list
primary_df.cluster_ID = _cluster_list
#now populate the allele_state list by setting the value in the allele_state column
#nomenclatures are alleles, nap, loose_singletons (unphased singletons), singletons (True singletons)
primary_df.loc[\
primary_df[primary_df.gene.isin(alleles_df.p_genes)].index,\
'allele_state'] = "allelic"
primary_df.loc[\
primary_df[primary_df.gene.isin(sing_array)].index,\
'allele_state'] = 'singleton'
primary_df.loc[\
primary_df[primary_df.gene.isin(nap_array)].index,\
'allele_state'] = 'nap'
#now do the same thing for the haplotig sequences
#get the genes and the cluster IDs (from file names) in lists of equal length to be used as the gene and
#cluster_ID columns
_gene_list = []
_cluster_list = []
for file in [x for x in os.listdir(CLUSTER_PATH_H) if x.endswith('_DEs.fasta')]:
for seq in SeqIO.parse(open(os.path.join(CLUSTER_PATH_H,file), 'r'), 'fasta'):
_gene_list.append(seq.id)
_cluster_list.append(file.split('_')[0])
haplotig_df.gene = _gene_list
haplotig_df.cluster_ID = _cluster_list
haplotig_df.loc[\
haplotig_df[haplotig_df.gene.isin(alleles_df.h_genes)].index,\
'allele_state'] = "allelic"
haplotig_df.loc[\
haplotig_df[haplotig_df.gene.isin(loose_sing_array)].index,\
'allele_state'] = 'singleton'
haplotig_df.loc[\
haplotig_df[haplotig_df.gene.isin(nap_array)].index,\
'allele_state'] = 'nap'
#now summarize the allele states and write them out to file
#first aggregate on cluster_ID and allele_state + unstack
primary_allele_state_df = primary_df.loc[:,['gene','cluster_ID','allele_state']]\
.pivot_table(columns=['cluster_ID','allele_state'],aggfunc='count').unstack()
#drop the unnecessary gene level from the index and replace na with 0
primary_allele_state_df.index = primary_allele_state_df.index.droplevel()
primary_allele_state_df = primary_allele_state_df.fillna(0)
#add a total number as well
primary_allele_state_df['Total'] = primary_allele_state_df.sum(axis=1)
#save dataframe
out_fn = 'Pst_104E_v12_p_ctg.cluster_status_summary.df'
primary_allele_state_df.fillna(0).T.loc[:,\
['Cluster1', 'Cluster2', 'Cluster3', 'Cluster4', 'Cluster5', 'Cluster6',\
'Cluster7', 'Cluster8']].to_csv(os.path.join(OUT_PATH, out_fn), sep='\t')
#now summarize the allele states and write them out to file
#first aggregate on cluster_ID and allele_state + unstack
haplotig_allele_state_df = haplotig_df.loc[:,['gene','cluster_ID','allele_state']]\
.pivot_table(columns=['cluster_ID','allele_state'],aggfunc='count').unstack()
#drop the unnecessary gene level from the index and replace na with 0
haplotig_allele_state_df.index = haplotig_allele_state_df.index.droplevel()
haplotig_allele_state_df = haplotig_allele_state_df.fillna(0)
#add a total number as well
haplotig_allele_state_df['Total'] = haplotig_allele_state_df.sum(axis=1)
#save dataframe
out_fn = 'Pst_104E_v12_h_ctg.cluster_status_summary.df'
haplotig_allele_state_df.fillna(0).T.loc[:,\
['Cluster9', 'Cluster10', 'Cluster11', 'Cluster12', 'Cluster13', 'Cluster14',\
'Cluster15', 'Cluster16']].to_csv(os.path.join(OUT_PATH, out_fn), sep='\t')
#get the allele for each gene using a dict approach that also takes care of potential multiple
# alleles
allele_single_dict = {}
allele_multiple_dict = {}
#take all the allelic genes and pick the corresponding allele from the allele_df
#if there are multiple possible allele pairings add those as list to a different dictionary
for gene in primary_df[primary_df.allele_state == 'allelic'].gene:
if len(alleles_df[alleles_df.p_genes == gene].h_genes.tolist()) == 1:
allele_single_dict[gene] = alleles_df[alleles_df.p_genes == gene].h_genes.tolist()[0]
elif len(alleles_df[alleles_df.p_genes == gene].h_genes.tolist()) != 1:
print(len(alleles_df[alleles_df.p_genes == gene].h_genes.tolist()))
allele_multiple_dict[gene] = alleles_df[alleles_df.p_genes == gene].h_genes.tolist()
for gene in haplotig_df[haplotig_df.allele_state == 'allelic'].gene:
if len(alleles_df[alleles_df.h_genes == gene].p_genes.tolist()) == 1:
allele_single_dict[gene] = alleles_df[alleles_df.h_genes == gene].p_genes.tolist()[0]
elif len(alleles_df[alleles_df.h_genes == gene].p_genes.tolist()) != 1:
print(len(alleles_df[alleles_df.h_genes == gene].p_genes.tolist()))
allele_multiple_dict[gene] = alleles_df[alleles_df.h_genes == gene].p_genes.tolist()
#first add the single-allele pairings to the dataframes
def add_single_alleles(x, _dict1=allele_single_dict,_dict2=allele_multiple_dict):
if x in _dict1.keys():
return _dict1[x]
elif x in _dict2:
return 'multiples'
primary_df.allele_ID = primary_df.gene.apply(add_single_alleles)
haplotig_df.allele_ID = haplotig_df.gene.apply(add_single_alleles)
#now take care of the genes that have multiple alleles. In our case the biggest possible number
#is two AND all are two so this hack
#make two copies of the df that are multiples
tmp0_df = primary_df[primary_df.allele_ID == 'multiples'].copy()
tmp1_df = primary_df[primary_df.allele_ID == 'multiples'].copy()
drop_index = primary_df[primary_df.allele_ID == 'multiples'].index
#add the allele IDs to each copy, taking the first element for one copy and
#the second element for the other
tmp0_df.allele_ID = tmp0_df.gene.apply(lambda x: allele_multiple_dict[x][0])
tmp1_df.allele_ID = tmp1_df.gene.apply(lambda x: allele_multiple_dict[x][1])
#now concat both tmp dataframes to the original dataframe while not including them in the
#former
primary_wa_df = pd.concat([primary_df.drop(primary_df.index[drop_index]), tmp0_df, tmp1_df], axis = 0)
primary_wa_df.reset_index(drop=True, inplace=True)
#now take care of the genes that have multiple alleles. In our case the biggest possible number
#is two AND all are two so this hack
#make two copies of the df that are multiples
tmp0_df = haplotig_df[haplotig_df.allele_ID == 'multiples'].copy()
tmp1_df = haplotig_df[haplotig_df.allele_ID == 'multiples'].copy()
drop_index = haplotig_df[haplotig_df.allele_ID == 'multiples'].index
#add the allele IDs to each copy, taking the first element for one copy and
#the second element for the other
tmp0_df.allele_ID = tmp0_df.gene.apply(lambda x: allele_multiple_dict[x][0])
tmp1_df.allele_ID = tmp1_df.gene.apply(lambda x: allele_multiple_dict[x][1])
#now concat both tmp dataframes to the original dataframe while not including them in the
#former
haplotig_wa_df = pd.concat([haplotig_df.drop(haplotig_df.index[drop_index]), tmp0_df, tmp1_df], axis = 0)
haplotig_wa_df.reset_index(drop=True, inplace=True)
#now summarize the respective cluster hits for primary contigs
count_list = []
percentage_list = []
for cluster in primary_df.cluster_ID.unique():
c_genes = ''
#subset the dataframe to get the allelic genes in each cluster
c_genes = primary_df[(primary_df.cluster_ID == cluster) \
& (primary_df.allele_state == 'allelic')].gene
#use this list to subset the other dataframe
_tmp_df = haplotig_wa_df[haplotig_wa_df.allele_ID.isin(c_genes)]
_tmp_df.rename(columns={'gene': cluster}, inplace=True)
    #count occurrences and add them to the list to make a dataframe later
count_list.append(_tmp_df.groupby('cluster_ID').count()[cluster])
#now take care of percentage by making a count dataframe
_tmp_count_df = _tmp_df.groupby('cluster_ID').count().copy()
    #and dividing the series by the cluster's total
_tmp_count_df[cluster] = _tmp_count_df[cluster].\
apply(lambda x: x/primary_allele_state_df.loc[cluster, "allelic"]*100)
percentage_list.append(_tmp_count_df[cluster])
#now generate a summary df by concatenating the list and adding a Total row at the end
c_out_fn = 'Pst_104E_v12_p_ctg.relatvie_cluster_allele_status_count_summary.df'
count_df = pd.concat(count_list, axis=1)
count_df.loc['Total',:]= count_df.sum(axis=0)
count_df.fillna(0, inplace=True)
count_df.astype(int).loc[haplotig_cluster_order+["Total"], primary_cluster_order]\
.to_csv(os.path.join(OUT_PATH, c_out_fn), sep='\t')
p_out_fn = 'Pst_104E_v12_p_ctg.relatvie_cluster_allele_status_per_summary.df'
percentage_df = pd.concat(percentage_list, axis=1)
percentage_df.loc['Total',:]= percentage_df.sum(axis=0)
percentage_df.fillna(0, inplace=True)
percentage_df.round(1).loc[haplotig_cluster_order+["Total"], primary_cluster_order]\
.to_csv(os.path.join(OUT_PATH, p_out_fn), sep='\t')
#now summarize the respective cluster hits for haplotigs
count_list = []
percentage_list = []
for cluster in haplotig_df.cluster_ID.unique():
c_genes = ''
#subset the dataframe to get the allelic genes in each cluster
c_genes = haplotig_df[(haplotig_df.cluster_ID == cluster) \
& (haplotig_df.allele_state == 'allelic')].gene
#use this list to subset the other dataframe
_tmp_df = primary_wa_df[primary_wa_df.allele_ID.isin(c_genes)]
_tmp_df.rename(columns={'gene': cluster}, inplace=True)
    #count occurrences and add them to the list to make a dataframe later
count_list.append(_tmp_df.groupby('cluster_ID').count()[cluster])
#now take care of percentage by making a count dataframe
_tmp_count_df = _tmp_df.groupby('cluster_ID').count().copy()
    #and dividing the series by the cluster's total
_tmp_count_df[cluster] = _tmp_count_df[cluster].\
apply(lambda x: x/haplotig_allele_state_df.loc[cluster, "allelic"]*100)
percentage_list.append(_tmp_count_df[cluster])
#now generate a summary df by concatenating the list and adding a Total row at the end
c_out_fn = 'Pst_104E_v12_h_ctg.relatvie_cluster_allele_status_count_summary.df'
count_df = pd.concat(count_list, axis=1)
count_df.loc['Total',:]= count_df.sum(axis=0)
count_df.fillna(0, inplace=True)
count_df.astype(int).loc[primary_cluster_order+["Total"], haplotig_cluster_order]\
.to_csv(os.path.join(OUT_PATH, c_out_fn), sep='\t')
p_out_fn = 'Pst_104E_v12_h_ctg.relatvie_cluster_allele_status_per_summary.df'
percentage_df = pd.concat(percentage_list, axis=1)
percentage_df.loc['Total',:]= percentage_df.sum(axis=0)
percentage_df.fillna(0, inplace=True)
percentage_df.round(1).loc[primary_cluster_order+["Total"], haplotig_cluster_order]\
.to_csv(os.path.join(OUT_PATH, p_out_fn), sep='\t')
#at the end fix up the allele summary dataframe for primary allele state analysis
#at this point we count the non-phased singletons to the alleles as well in the primary
#but leave them out initially for the relative analysis
reset_index = primary_df[(primary_df.allele_state != 'allelic')&(primary_df.allele_state != 'nap')\
&(primary_df.allele_state != 'singleton')].index
primary_df.loc[reset_index, 'allele_state'] = 'allelic'
#save dataframe
#now summarize the allele states and write them out to file
#first aggregate on cluster_ID and allele_state + unstack
primary_allele_state_df = primary_df.loc[:,['gene','cluster_ID','allele_state']]\
.pivot_table(columns=['cluster_ID','allele_state'],aggfunc='count').unstack()
#drop the unnecessary gene level from the index and replace na with 0
primary_allele_state_df.index = primary_allele_state_df.index.droplevel()
primary_allele_state_df = primary_allele_state_df.fillna(0)
#add a total number as well
primary_allele_state_df['Total'] = primary_allele_state_df.sum(axis=1)
out_fn = 'Pst_104E_v12_p_ctg.cluster_status_summary.df'
primary_allele_state_df.fillna(0).T.loc[:,\
['Cluster1', 'Cluster2', 'Cluster3', 'Cluster4', 'Cluster5', 'Cluster6',\
'Cluster7', 'Cluster8']].to_csv(os.path.join(OUT_PATH, out_fn), sep='\t')
```
# <center>Data 515 Homework 1</center>
## Instructions
Using the counts of bicycle crossings of the Fremont Bridge since 2012 (found [here](https://data.seattle.gov/Transportation/Fremont-Bridge-Hourly-Bicycle-Counts-by-Month-Octo/65db-xm6k)) perform the following tasks:
1. Read the CSV file into a pandas dataframe. (1 pt)
2. Add columns to the dataframe containing: (3 pt)
i. The total (East + West) bicycle count
ii. The hour of the day
iii. The year
3. Create a dataframe with the subset of data from the year 2016 (1 pt)
4. Use pandas + matplotlib to plot the counts by hour. (i.e. hour of the day on the x-axis, total daily counts on the y-axis) (1 pt)
5. Use pandas to determine what is (on average) the busiest hour of the day (1 pt)
For more information on the assignment tasks, see assignment sheet [here](https://github.com/UWSEDS/manipulating-data-in-python-hmurph3).
For more information about Seattle's Open Data Program (which makes data generated by the City of Seattle available to the public) see [here](http://www.seattle.gov/tech/initiatives/open-data).
## Import libraries to perform tasks
```
# for Dataframe analysis
import pandas as pd
# for plotting
import matplotlib.pyplot as plt
```
## 1. Read the CSV file into pandas dataframe
```
# read the data into a pandas dataframe
df = pd.read_csv('https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD', sep = ",", header = 0)
# view the first 5 rows of the dataframe
df.head()
```
## 2. Add columns to the dataframe containing:
i. The total (East + West) bicycle count
ii. The hour of the day
iii. The year
```
df['Total'] = df['Fremont Bridge East Sidewalk'] + df['Fremont Bridge West Sidewalk']
df.head()
# verify the format of the entries in the Date column are strings in order to use the datetime function in pandas
type(df.Date[1])
# Convert Date column to datetime format
df['Date'] = pd.to_datetime(df['Date'], format ='%m/%d/%Y %I:%M:%S %p')
# Create a new column for the Hour of the Day from the Date column
df['Hour of Day'] = pd.DatetimeIndex(df['Date']).hour
# Create a new column for the year from the Date column
df['Year'] = pd.DatetimeIndex(df['Date']).year
# Check to see that the additional columns were created correctly
df.head()
```
## 3. Create a dataframe with the subset of data from the year 2016
```
# create a subset of the total dataset that only contains information from 2016
subset = df.loc[df['Year'] == 2016]
# check the output of the subsetting
subset.head()
```
## 4. Use pandas + matplotlib to plot the counts by hour. (i.e. hour of the day on the x-axis, total daily counts on the y-axis)
```
# find the total crossings by hour of the day
hourly_counts = df.groupby('Hour of Day',as_index = False)[['Total']].sum()
# create a bar plot of the total crossing by hour of the day
plt.bar(hourly_counts["Hour of Day"], hourly_counts["Total"], align='center')
plt.ylabel('Total Count of Crossing')
plt.xlabel('Hour of Day')
plt.title('Hourly Crossing on Fremont Bridge')
plt.show()
```
## 5. Use pandas to determine what is (on average) the busiest hour of the day
```
# find the average hourly count by hour of the bridge crossings
avg_hourly_counts = df.groupby('Hour of Day',as_index = False)[['Total']].mean()
# Return the hour that has the maximum average crossing.
avg_hourly_counts['Total'].idxmax()
```
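One subtlety worth noting: with `as_index=False` the grouped frame keeps a default integer index, so `idxmax()` returns a row label that only coincidentally equals the hour here. A sketch of the more explicit lookup, on a toy frame whose hours do not match their row positions:

```python
import pandas as pd

# Toy stand-in for avg_hourly_counts (hours deliberately out of order)
avg = pd.DataFrame({'Hour of Day': [7, 8, 17],
                    'Total': [120.0, 310.5, 290.0]})
# Look up the hour at the row where 'Total' is largest
busiest = avg.loc[avg['Total'].idxmax(), 'Hour of Day']
print(busiest)  # 8
```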
# DC Resistivity Forward Modeling: Sounding Data over 2D Earth
In this notebook, we predict data for a Schlumberger sounding survey over a 2D Earth. We need to account for horizontal variations in electrical resistivity by modeling the physics in 2.5D. By shifting the center location of the sounding survey, we can predict the sounding data at different locations along the geological cross-section shown below.
<img style="float: center; width: 800px" src="https://github.com/simpeg-research/gwb-dc-inversions/blob/master/figures/geologic_cross_section.PNG?raw=true">
## Import Modules
```
from discretize import TensorMesh
from discretize.utils import mkvc, refine_tree_xyz
import pandas as pd
from SimPEG.utils import ModelBuilder, surface2ind_topo
from SimPEG import maps, data
from SimPEG.electromagnetics.static import resistivity as dc
from SimPEG.electromagnetics.static import induced_polarization as ip
from SimPEG.electromagnetics.static.utils import (
generate_dcip_survey_line, plot_pseudoSection, gettopoCC,
source_receiver_midpoints, geometric_factor
)
import os
import numpy as np
from scipy.interpolate import interp1d
import matplotlib as mpl
import matplotlib.pyplot as plt
try:
from pymatsolver import Pardiso as Solver
except ImportError:
from SimPEG import SolverLU as Solver
mpl.rcParams.update({'font.size': 14})
```
## User Defined Parameters for the Notebook
Here, the user defines the parameters required to run the notebook. The parameters are as follows:
**Parameters for the survey**
- **half_AB_separation:** a numpy array containing the AB/2 values for the source electrodes
- **half_MN_separations:** a numpy array containing the MN/2 values for the receiver electrodes
- **center_location:** center location for the sounding survey
**Parameters for layer resistivities**
- **alluvium_resistivity:** resistivity for the alluvial layer (range?)
- **sand_aquifer_resistivity:** resistivity for the near-surface sand aquifer (range?)
- **clay_resistivity:** resistivity for the clay/laterite layer (range?)
- **bedrock_resistivity:** resistivity for the bedrock layer (range?)
- **bedrock_aquifer_resistivity:** resistivity for the fractured bedrock aquifer (range?)
```
# SURVEY PARAMETERS
# Make a numpy array for AB/2 values
half_AB_separation = np.r_[
6,12,18,24,30,
36,42,48,54,60,
66,72,78,84,90,
96,102,108,114,120,
126,132,138,142,146,
150,154,158,162,166
] # AB/2 values
# Make a numpy array for MN/2 values
half_MN_separation = np.r_[
2,4,6,8,10,
12,14,16,18,20,
22,24,26,28,30,
32,34,36,38,40,
42,44,46,48,50,
52,54,56,58,60
] # MN/2 values
# Center location for the sounding survey
center_location = 50
# LAYER RESISTIVITIES (Ohm-meters)
alluvium_resistivity = 150.
sand_aquifer_resistivity = 20.
clay_resistivity = 250
bedrock_resistivity = 500.
bedrock_aquifer_resistivity = 20.
# Output file name
filename = 'sounding_data/Mon_Geology_50_East.csv'
writeFile = False
noise = 1 # percent noise
```
## Define the Survey
This portion of the notebook defines a Schlumberger sounding survey using the AB/2 values, MN/2 values and center location provided.
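For a single measurement, the four electrode x-coordinates follow directly from the center location: A and B sit at center ∓ AB/2, and M and N at center ∓ MN/2. A sketch of the arithmetic the loop below performs (the function name is ours):

```python
def schlumberger_electrodes(center, half_ab, half_mn):
    """Return (A, B, M, N) x-coordinates for one Schlumberger measurement."""
    return (center - half_ab, center + half_ab,
            center - half_mn, center + half_mn)

# First measurement of this notebook: AB/2 = 6 m, MN/2 = 2 m, center at 50 m
print(schlumberger_electrodes(50, 6, 2))  # (44, 56, 48, 52)
```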
```
source_list = [] # create empty array for sources to live
for ii in range(0, len(half_AB_separation)):
# AB electrode locations for source. Each is a (1, 3) numpy array
A_location = np.r_[center_location-half_AB_separation[ii], 0.]
B_location = np.r_[center_location+half_AB_separation[ii], 0.]
# MN electrode locations for receivers. Each is an (N, 3) numpy array
M_location = np.r_[center_location-half_MN_separation[ii], 0.]
N_location = np.r_[center_location+half_MN_separation[ii], 0.]
# Create receivers list. Define as pole or dipole.
receiver_list = dc.receivers.Dipole_ky(
M_location, N_location #, data_type='apparent_resistivity'
)
receiver_list = [receiver_list]
# Define the source properties and associated receivers
source_list.append(
dc.sources.Dipole(receiver_list, A_location, B_location)
)
# Define survey
survey = dc.Survey_ky(source_list)
```
## Define a 2D Mesh
This part of the notebook creates a numerical grid (or mesh) on which we simulate the sounding data.
```
# Define a layered Earth
hx = np.logspace(-1,3,101)
hx = np.r_[hx[::-1], hx]
hy = np.logspace(-0.5, 3, 101)[::-1]
mesh = TensorMesh([hx, hy], 'CN')
mesh.x0 = mesh.x0 + [center_location, 0.0]
```
## Define and Plot Resistivity Model
This part of the notebook defines the resistivity model on the mesh. There are three notable components: a three-layer background (alluvium, clay, and bedrock), a near-surface sand aquifer block, and a fractured bedrock aquifer block.
```
depth_to_clay = -20.
depth_to_bedrock = -30.
# Create layered model with background resistivities
resistivity_model = ModelBuilder.layeredModel(
mesh.gridCC, np.r_[0., depth_to_clay, depth_to_bedrock],
np.r_[alluvium_resistivity, clay_resistivity, bedrock_resistivity]
)
# Add the sand aquifer
ind = ModelBuilder.getIndicesBlock([-np.inf, -8.], [-8, -16], mesh.gridCC)
resistivity_model[ind] = sand_aquifer_resistivity
# Add the bedrock aquifer
ind = ModelBuilder.getIndicesBlock([-10, -35.], [np.inf, -45], mesh.gridCC)
resistivity_model[ind] = bedrock_aquifer_resistivity
# Define a mapping from the model to the mesh
model_map = maps.IdentityMap(mesh)
# Plot the resistivity model
fig, ax = plt.subplots(1, 1, figsize=(12, 5))
out = mesh.plotImage(
np.log10(resistivity_model), ax=ax,
range_x=[-500+center_location, 500+center_location],
range_y=[-100, 0.5], pcolorOpts={'cmap':'jet'}
)
ax.set_title('Resistivity model and AB electrode locations')
# Add survey geometry
survey.getABMN_locations()
AB_locations = np.r_[
survey.a_locations, survey.b_locations,
]
ax.plot(AB_locations[:, 0], AB_locations[:, 1], 'k^', markersize=5)
ax.set_xlabel('X (m)')
ax.set_ylabel('Y (m)')
# Add colorbar
norm = mpl.colors.Normalize(
vmin=np.floor(np.log10(np.min(resistivity_model))), vmax=np.ceil(np.log10(np.max(resistivity_model)))
)
cbar = plt.colorbar(out[0], norm=norm, format="$10^{%.1f}$")
cbar.set_label('Resistivity ($\Omega m$)')
```
## Run the Simulation
In this part of the notebook, all the pieces needed to predict the data are assembled into a *simulation*. Once created, we can predict data for a given *resistivity model*. We have chosen to predict the data as voltages. Once the data are predicted, we convert the values to apparent resistivities and plot the Schlumberger sounding curve.
```
simulation = dc.simulation_2d.Problem2D_N(
mesh, survey=survey, rhoMap=model_map, Solver=Solver
)
# Predict the data by running the simulation.
dpred = simulation.dpred(resistivity_model)
# Convert voltages to apparent resistivities
dpred = dpred/geometric_factor(survey)
```
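For reference, the geometric factor for an ideal Schlumberger array has a simple closed form. The sketch below illustrates that formula only; it is an assumption that SimPEG's `geometric_factor(survey)` reduces to this for a perfectly symmetric Schlumberger layout, since the library computes the factor from the actual electrode positions.

```python
import numpy as np

def schlumberger_geometric_factor(half_ab, half_mn):
    """Geometric factor K for an ideal Schlumberger array.

    half_ab : AB/2, half the current electrode separation (m)
    half_mn : MN/2, half the potential electrode separation (m)
    The apparent resistivity is then rho_a = K * (V / I).
    """
    half_ab = np.asarray(half_ab, dtype=float)
    half_mn = np.asarray(half_mn, dtype=float)
    return np.pi * (half_ab**2 - half_mn**2) / (2.0 * half_mn)

# For example, AB/2 = 10 m and MN/2 = 1 m gives K = pi * 99 / 2
K = schlumberger_geometric_factor(10.0, 1.0)
```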
## Plot the Sounding Curve
Here, we plot the apparent resistivities as a function of AB/2 values. This produces a sounding curve which can be used to understand how the resistivity changes with respect to depth.
```
# Plot apparent resistivities on sounding curve
fig, ax = plt.subplots(1, 1, figsize=(11, 5))
ax.loglog(half_AB_separation, dpred, 'bx', lw=2, ms=10, mew=2)
ax.set_xlabel("AB/2 (m)")
ax.set_ylabel("Apparent Resistivity ($\Omega m$)")
ax.grid(True, which="both", ls="--", c='gray')
# Save the data to a CSV file
out_dat = np.c_[half_AB_separation, half_MN_separation, dpred]
columns = ['AB/2 (m)','MN/2 (m)','App. Res. (Ohm m)']
df = pd.DataFrame(out_dat, columns=columns)
df.to_csv(filename, index=False)
```
| github_jupyter |
# Super Fun with Bayesian Magic
> Let's reverse engineer a superannuation fund, because I'm bored in lockdown.
- toc: true
- badges: true
- comments: true
- categories: [Bayesian, Finance]
- image: images/2020-3-22-Super-Fun-With-Bayesian-Magic/australian-money-money-note-notes-529875.jpg
In Australia, there have been a number of radical fiscal proposals to fight the economic impacts of COVID-19.
One of these is to allow "casual" workers (those without paid leave entitlements) to withdraw money from their superannuation accounts. It's a radical proposal, and one of my friends wanted to understand more about the impact this would have.
Their exact question was:
> "I want to know what you would estimate the balance of young people to be (Under the age of 34)"
> Note: Superannuation is a type of pension savings account in Australia, where 9.5% (or more) of an employee's gross salary is contributed by the employer to the employee's account. The account usually only becomes available to the employee when they retire at age 65. While employees have individual funds, investment decisions are made by a fund manager. With $\$1.5 Trillion USD of total assets, and 15M members, the scheme has an average balance of $\$100,000 USD per member.
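As a quick sanity check, the average balance quoted in the note follows directly from the two totals (all figures are the ones stated above):

```python
total_assets_usd = 1.5e12    # $1.5 trillion USD in total assets
total_members = 15e6         # 15 million members

average_balance = total_assets_usd / total_members
print(f"Average balance: ${average_balance:,.0f} USD")  # Average balance: $100,000 USD
```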
Let's do some analysis, to try and understand more about what the balance of a typical member, in different age and gender brackets looks like.
I'm focussing on a Superannuation fund called *Hostplus* who describe themselves as follows:
> "Hostplus is the industry fund for those that live and love Australian hospitality, tourism, recreation and sport".
*Hostplus* is a good fund to analyze, given [65%](https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp/rp1718/CasualEmployeesAustralia#_Toc504135061) of employees in the *Accommodation and food services* sector are casual employees.
The great news is that the data we need is provided by an Australian government body called APRA, and can be found [here](https://www.apra.gov.au/sites/default/files/Annual%20Fund-level%20Superannuation%20Statistics%20June%202019.xlsx).
In particular, we are interested in the demographic data, which can be found on tab 12.
Ok, so I've taken the data and filtered out some of the blank rows and redundant columns.
```
import pandas as pd
df = pd.read_csv('data/22-3-2020-Super-Fun-With-Bayesian-Magic/superannuation_demographic_data.csv',
sep=';',header=2,skiprows=[3],na_values='*',thousands='\xa0')
display(df)
hostplus_row = df[df['Fund name'] == 'HOSTPLUS Superannuation Fund']
display(hostplus_row)
```
Ok, let's pull the data we need out of the Pandas dataframe. Please forgive me for this horrible, horrible hack.
```
age_brackets = ['<25','25 to 34', '35 to 44', '45 to 49', '50 to 54', '55 to 59', '60 to 64',
'65 to 69', '70 to 74', '75 to 84', '85+']
members_f_numbers = hostplus_row[['<25','25 to 34', '35 to 44', '45 to 49', '50 to 54', '55 to 59', '60 to 64',
'65 to 69', '70 to 74', '75 to 84', '85+']].values.ravel()
members_m_numbers = hostplus_row[['<25.2', '25 to 34.2', '35 to 44.2','45 to 49.2', '50 to 54.2', '55 to 59.2',
'60 to 64.2', '65 to 69.2','70 to 74.2', '75 to 84.2', '85+.2']].values.ravel()
#Total account balance is denominated in 1000's of dollars
members_f_total_account_balance = 1000*hostplus_row[['<25.1', '25 to 34.1', '35 to 44.1', '45 to 49.1', '50 to 54.1',
'55 to 59.1','60 to 64.1', '65 to 69.1', '70 to 74.1', '75 to 84.1', '85+.1']].values.ravel()
members_m_total_account_balance = 1000*hostplus_row[['<25.3', '25 to 34.3', '35 to 44.3', '45 to 49.3', '50 to 54.3',
'55 to 59.3', '60 to 64.3', '65 to 69.3', '70 to 74.3', '75 to 84.3', '85+.3',]].values.ravel()
```
Now we have the data, let's do a quick visualization of the ages of HOSTPLUS's members.
One subtle point to call out, is that the age brackets are not uniform.
```
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [20, 20]
plt.bar(age_brackets,members_f_numbers,label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets,members_m_numbers,label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('Histogram of HOSTPLUS Member Ages')
plt.ylabel('Number of members')
plt.xlabel('Age Bracket (Years)')
plt.show()
plt.bar(age_brackets,100*np.cumsum(members_f_numbers)/np.nansum(members_f_numbers),label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets,100*np.cumsum(members_m_numbers)/np.nansum(members_m_numbers),label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('Cumulative Histogram of HOSTPLUS Member Ages')
plt.ylabel('Number of members')
plt.xlabel('Age Bracket (Years)')
plt.show()
```
Ok, so from this, it's clear that a typical member is in the range of 25-34 years old, with slightly more women than men. Now let's look at the account balances.
```
members_f_average_account_balance = members_f_total_account_balance/members_f_numbers
members_m_average_account_balance = members_m_total_account_balance/members_m_numbers
plt.bar(age_brackets,members_f_average_account_balance,label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets,members_m_average_account_balance,label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('HOSTPLUS average account balance vs age and gender')
plt.ylabel('Average account balance ($AUD)')
plt.xlabel('Age Bracket (Years)')
ax = plt.gca()
ax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: "${:,}".format(int(x))))
plt.show()
```
> Note: There is a clear trend which it would be remiss to ignore. Men have significantly larger balances than women over a large part of the age curve. I'm not a labour economist, so I don't have the right background to correctly apportion the causes of this.
So I think a logical next question is: what is the gross income of members by age bracket and gender? This question is a little trickier, given we can't observe it directly.
Let's step back for a minute and think about how the superannuation system works. Over time, 9.5% or more of an employee's salary is taken and invested into the fund. Over the long run, the value of this invested capital increases. Fortunately, *Hostplus* has been very effective at investing its funds. It's delivered a compound annual growth rate (CAGR) of [8.6%](https://newsroom.hostplus.com.au/hostplus-best-in-show-with-125-mysuper-return/) over the past 15 years.
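To get a feel for what that return means, here is a quick back-of-the-envelope sketch of an 8.6% CAGR compounding over 15 years (the rate is the one reported above; the dollar amount is purely illustrative):

```python
cagr = 0.086   # Hostplus's reported 15-year compound annual growth rate
years = 15

# Value today of $1 contributed 15 years ago, with returns reinvested
growth_multiple = (1 + cagr) ** years
print(f"$1 contributed 15 years ago is worth about ${growth_multiple:.2f} today")
```

A simple model that ignores investment returns entirely will therefore understate salaries for older members, whose balances include many years of compounding.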
Before we do anything fancy, let's keep things really simple. Let's assume the following:
* Members contribute exactly 9.5% of their salary to the fund
* Investment returns are zero
* Members earn a constant salary over a 10 year period.
Firstly, let's create bins of uniform width of 10 years each.
```
def create_uniform_bins(x):
new_array = np.array([x[0],x[1],x[2],0.5*(x[3]+x[4]),0.5*(x[5]+x[6]),0.5*(x[7]+x[8]),x[9]])
return(new_array)
members_f_average_account_balance_uniform = create_uniform_bins(members_f_average_account_balance)
members_m_average_account_balance_uniform = create_uniform_bins(members_m_average_account_balance)
age_brackets_uniform = ['15 to 24','25 to 34','35 to 44','45 to 54','55 to 64','65 to 74','75 to 84']
plt.bar(age_brackets_uniform,members_f_average_account_balance_uniform,label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets_uniform,members_m_average_account_balance_uniform,label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('HOSTPLUS average account balance vs age and gender (Uniform bins)')
plt.ylabel('Average account balance ($AUD)')
plt.xlabel('Age Bracket (Years)')
ax = plt.gca()
ax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: "${:,}".format(int(x))))
plt.show()
```
Ok, so if we assume that the superannuation fund balance is just the integral/sum of all contributions, we can find the contributions by taking the derivative of the balance. This gives us the change in the balance from one time period to the next. If we divide this by the length of the time period (10 years), we find the contribution made each year.
```
contribution_rate_f = np.diff(members_f_average_account_balance_uniform)/10.0
contribution_rate_m = np.diff(members_m_average_account_balance_uniform)/10.0
plt.bar(age_brackets_uniform[:-1],contribution_rate_f,label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets_uniform[:-1],contribution_rate_m,label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('HOSTPLUS average account contributions p/a vs age and gender')
plt.ylabel('Contribution p/a ($AUD)')
plt.xlabel('Age Bracket (Years)')
ax = plt.gca()
ax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: "${:,}".format(int(x))))
plt.show()
```
If we then divide this contribution by 9.5%, we can find the implied salary for each time period.
```
superannuation_contribution_rate_percent = 9.5
salary_f = contribution_rate_f / (superannuation_contribution_rate_percent/100.0)
salary_m = contribution_rate_m / (superannuation_contribution_rate_percent/100.0)
plt.bar(age_brackets_uniform[:-1],salary_f,label='Female',alpha=0.5,width=0.95,color='#A60628')
plt.bar(age_brackets_uniform[:-1],salary_m,label='Male',alpha=0.5,width=0.95,color='#348ABD')
plt.grid()
plt.legend()
plt.title('HOSTPLUS average gross salary p/a vs age and gender')
plt.ylabel('Gross salary p/a ($AUD)')
plt.xlabel('Age Bracket (Years)')
ax = plt.gca()
ax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: "${:,}".format(int(x))))
plt.show()
```
This is incredible. For context, the average full-time salary for Australian men is approximately \$85,000, and \$75,000 for women. It's also why I feel it's so important to try something simple and dumb before trying something elegant and clever.
So, what are we missing here?
Well,
| github_jupyter |
# Vocabulary in fastNLP
## Building a Vocabulary
```
from fastNLP import Vocabulary
vocab = Vocabulary()
vocab.add_word_lst(['复', '旦', '大', '学']) # add new characters
vocab.add_word('上海') # '上海' is treated as a single word
vocab.to_index('复') # should be 3
vocab.to_index('我') # outputs 1; by default the pad index is 0 and the unk (word-not-found) index is 1
# When building a Vocabulary for targets, pad and unk are not needed; they can be disabled via the following initialization
vocab = Vocabulary(unknown=None, padding=None)
vocab.add_word_lst(['positive', 'negative'])
vocab.to_index('positive')
```
### Without unk set
```
vocab.to_index('neutral') # raises an error because unk is not set
```
### With unk set
```
from fastNLP import Vocabulary
vocab = Vocabulary(unknown='<unk>', padding=None)
vocab.add_word_lst(['positive', 'negative'])
vocab.to_index('neutral'), vocab.to_word(vocab.to_index('neutral'))
vocab
from fastNLP import Vocabulary
from fastNLP import DataSet
dataset = DataSet({'chars': [
['今', '天', '天', '气', '很', '好', '。'],
['被', '这', '部', '电', '影', '浪', '费', '了', '两', '个', '小', '时', '。']
],
'target': ['neutral', 'negative']
})
vocab = Vocabulary()
vocab.from_dataset(dataset, field_name='chars')
vocab.index_dataset(dataset, field_name='chars')
target_vocab = Vocabulary(padding=None, unknown=None)
target_vocab.from_dataset(dataset, field_name='target')
target_vocab.index_dataset(dataset, field_name='target')
print(dataset)
from fastNLP import Vocabulary
from fastNLP import DataSet
tr_data = DataSet({'chars': [
['今', '天', '心', '情', '很', '好', '。'],
['被', '这', '部', '电', '影', '浪', '费', '了', '两', '个', '小', '时', '。']
],
'target': ['positive', 'negative']
})
dev_data = DataSet({'chars': [
['住', '宿', '条', '件', '还', '不', '错'],
['糟', '糕', '的', '天', '气', ',', '无', '法', '出', '行', '。']
],
'target': ['positive', 'negative']
})
vocab = Vocabulary()
# Pass the dev/test sets via the no_create_entry_dataset argument when building the vocabulary.
vocab.from_dataset(tr_data, field_name='chars', no_create_entry_dataset=[dev_data])
import torch
from fastNLP.embeddings import StaticEmbedding
from fastNLP import Vocabulary
vocab = Vocabulary()
vocab.add_word('train')
vocab.add_word('only_in_train') # appears only in train, and is certainly absent from the pretrained vocabulary
vocab.add_word('test', no_create_entry=True) # this word appears only in dev or test
vocab.add_word('only_in_test', no_create_entry=True) # this word cannot be found in the pretrained vocabulary
embed = StaticEmbedding(vocab, model_dir_or_name='en-glove-6b-50d')
print(embed(torch.LongTensor([vocab.to_index('train')])))
print(embed(torch.LongTensor([vocab.to_index('only_in_train')])))
print(embed(torch.LongTensor([vocab.to_index('test')])))
print(embed(torch.LongTensor([vocab.to_index('only_in_test')])))
print(embed(torch.LongTensor([vocab.unknown_idx])))
```
| github_jupyter |
Copyright (c) 2020-2021 Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# AutoVW: ChaCha for Online AutoML with Vowpal Wabbit
## 1. Introduction
In this notebook, we use one real data example (regression task) to showcase AutoVW, which is an online AutoML solution based on the following work:
*ChaCha for online AutoML. Qingyun Wu, Chi Wang, John Langford, Paul Mineiro and Marco Rossi. To appear in ICML 2021.*
AutoVW is implemented in FLAML. FLAML requires `Python>=3.6`. To run this notebook example, please install flaml with the `notebook` option:
```bash
pip install flaml[notebook]
```
```
!pip install flaml[notebook];
```
## 2. Online regression with AutoVW
### Load data from openml and preprocess
Download [NewFuelCar](https://www.openml.org/d/41506) from OpenML.
```
import openml
# did = 42183
did = 41506
ds = openml.datasets.get_dataset(did)
target_attribute = ds.default_target_attribute
data = ds.get_data(target=target_attribute, dataset_format='array')
X, y = data[0], data[1]
print(X.shape, y.shape)
```
Convert the openml dataset into vowpalwabbit examples:
Sequentially group features into up to 10 namespaces and convert the original data examples into vowpal wabbit format.
```
import numpy as np
import string
NS_LIST = list(string.ascii_lowercase) + list(string.ascii_uppercase)
max_ns_num = 10 # the maximum number of namespaces
original_dim = X.shape[1]
max_size_per_group = int(np.ceil(original_dim / float(max_ns_num)))
# sequential grouping
group_indexes = []
for i in range(max_ns_num):
    indexes = [ind for ind in range(i * max_size_per_group,
                                    min((i + 1) * max_size_per_group, original_dim))]
if len(indexes) > 0:
group_indexes.append(indexes)
vw_examples = []
for i in range(X.shape[0]):
ns_content = []
for zz in range(len(group_indexes)):
ns_features = ' '.join('{}:{:.6f}'.format(ind, X[i][ind]) for ind in group_indexes[zz])
ns_content.append(ns_features)
ns_line = '{} |{}'.format(str(y[i]), '|'.join('{} {}'.format(NS_LIST[j], ns_content[j]) for j in range(len(group_indexes))))
vw_examples.append(ns_line)
print('openml example:', y[0], X[0])
print('vw example:', vw_examples[0])
```
### Set up the online learning loop
```
from sklearn.metrics import mean_squared_error
def online_learning_loop(iter_num, vw_examples, vw_alg):
"""Implements the online learning loop.
"""
print('Online learning for', iter_num, 'steps...')
loss_list = []
y_predict_list = []
for i in range(iter_num):
vw_x = vw_examples[i]
y_true = float(vw_examples[i].split('|')[0])
# predict step
y_pred = vw_alg.predict(vw_x)
# learn step
vw_alg.learn(vw_x)
# calculate one step loss
loss = mean_squared_error([y_pred], [y_true])
loss_list.append(loss)
y_predict_list.append([y_pred, y_true])
return loss_list
max_iter_num = 10000 # or len(vw_examples)
```
### Vanilla Vowpal Wabbit (VW)
Create and run a vanilla vowpal wabbit learner.
```
from vowpalwabbit import pyvw
''' create a vanilla vw instance '''
vanilla_vw = pyvw.vw()
# online learning with vanilla VW
loss_list_vanilla = online_learning_loop(max_iter_num, vw_examples, vanilla_vw)
print('Final progressive validation loss of vanilla vw:', sum(loss_list_vanilla)/len(loss_list_vanilla))
```
### AutoVW which tunes namespace interactions
Create and run an AutoVW instance which tunes namespace interactions. Each AutoVW instance allows up to `max_live_model_num` VW models (each associated with its own hyperparameter configuration, which is tuned online) to run concurrently in each step of the online learning loop.
```
''' import AutoVW class from flaml package '''
from flaml import AutoVW
'''create an AutoVW instance for tuning namespace interactions'''
autovw_ni = AutoVW(max_live_model_num=5, search_space={'interactions': AutoVW.AUTOMATIC})
# online learning with AutoVW
loss_list_autovw_ni = online_learning_loop(max_iter_num, vw_examples, autovw_ni)
print('Final progressive validation loss of autovw:', sum(loss_list_autovw_ni)/len(loss_list_autovw_ni))
```
### Online performance comparison between vanilla VW and AutoVW
```
import matplotlib.pyplot as plt
def plot_progressive_loss(obj_list, alias):
    """Show real-time progressive validation loss."""
    avg_list = [sum(obj_list[:i]) / i for i in range(1, len(obj_list))]
    warm_starting_point = 10  # skip the first few noisy observations
    plt.plot(range(warm_starting_point, len(avg_list)), avg_list[warm_starting_point:], label=alias)
    plt.xlabel('# of data samples')
    plt.ylabel('Progressive validation loss')
    plt.yscale('log')
    plt.legend(loc='upper right')
plt.figure(figsize=(8, 6))
plot_progressive_loss(loss_list_vanilla, 'VanillaVW')
plot_progressive_loss(loss_list_autovw_ni, 'AutoVW:NI')
plt.show()
```
### AutoVW which tunes both namespace interactions and learning rate
Create and run an AutoVW instance which tunes both namespace interactions and learning rate.
```
from flaml.tune import loguniform
''' create another AutoVW instance for tuning namespace interactions and learning rate'''
# set up the search space and init config
search_space_nilr = {'interactions': AutoVW.AUTOMATIC, 'learning_rate': loguniform(lower=2e-10, upper=1.0)}
init_config_nilr = {'interactions': set(), 'learning_rate': 0.5}
# create an AutoVW instance
autovw_nilr = AutoVW(max_live_model_num=5, search_space=search_space_nilr, init_config=init_config_nilr)
# online learning with AutoVW
loss_list_autovw_nilr = online_learning_loop(max_iter_num, vw_examples, autovw_nilr)
print('Final progressive validation loss of autovw_nilr:', sum(loss_list_autovw_nilr)/len(loss_list_autovw_nilr))
```
### Online performance comparison between vanilla VW and two AutoVW instances
Compare the online progressive validation loss from the vanilla VW and two AutoVW instances.
```
plt.figure(figsize=(8, 6))
plot_progressive_loss(loss_list_vanilla, 'VanillaVW')
plot_progressive_loss(loss_list_autovw_ni, 'AutoVW:NI')
plot_progressive_loss(loss_list_autovw_nilr, 'AutoVW:NI+LR')
plt.show()
```
### AutoVW based on customized VW arguments
You can easily create an AutoVW instance based on customized VW arguments (for now, only arguments compatible with the supervised regression task are well supported). The customized arguments can be passed to AutoVW through `init_config` and `search_space`.
```
''' create an AutoVW instance with customized VW arguments'''
# parse the customized VW arguments
fixed_vw_hp_config = {'alg': 'supervised', 'loss_function': 'classic'}
search_space = fixed_vw_hp_config.copy()
search_space.update({'interactions': AutoVW.AUTOMATIC})
autovw_custom = AutoVW(max_live_model_num=5, search_space=search_space)
loss_list_custom = online_learning_loop(max_iter_num, vw_examples, autovw_custom)
print('Average final loss of the AutoVW (tuning namespaces) based on customized vw arguments:', sum(loss_list_custom)/len(loss_list_custom))
```
| github_jupyter |
# Collecting and Preparing Text for Topic Modelling using Gensim
---
---
## More About Tokenising and Normalisation
In the last workshop, in notebook `workshop-1-basics/2-collecting-and-preparing.ipynb`, we cleaned and prepared the text _The Iliad of Homer_ (translated by Alexander Pope (1899)) by:
* Tokenising the text into individual words.
* Normalising the text:
* into lowercase,
* removing punctuation,
* removing non-words (empty strings, numerals, etc.),
* removing stopwords.
One form of normalisation we didn't do last time is making sure that different _inflections_ of the same word are counted together. In English, words are modified to express quantity, tense, etc. (i.e. _declension_ and _conjugation_ for those who remember their language lessons!).
For example, 'fish', 'fishes', 'fishy' and 'fishing' are all formed from the root 'fish'. Last workshop, all these words would have been counted as different words, which may or may not be desirable.
### Stemming and Lemmatization
There are two main ways to normalise for inflection:
* **Stemming** - reducing a word to a stem by removing endings (a **stem** may not be an actual word).
* **Lemmatization** - reducing a word to its meaningful base form using its context (a **lemma** is typically a proper word in the language).
To do this we can use several facilities provided by NLTK. There are many different ways to stem and lemmatize words, but we will compare the results of the [Porter Stemmer](https://tartarus.org/martin/PorterStemmer/) and [WordNet](https://wordnet.princeton.edu/) lemmatizer.
First, let's get the H.G. Wells book _The First Men on the Moon_ from Project Gutenberg:
```
import requests
response = requests.get('http://www.mirrorservice.org/sites/ftp.ibiblio.org/pub/docs/books/gutenberg/1/0/1/1013/1013.txt')
text = response.text
text[681:900]
```
Then we pick out one sentence from the book to use an example:
```
hg_wells = text[118017:118088]
hg_wells
```
Next we tokenise the sentence:
```
import nltk
nltk.download('punkt')
from nltk import word_tokenize
tokens = word_tokenize(hg_wells)
tokens
```
And use the Porter Stemmer to find the word stems:
```
from nltk import PorterStemmer
porter = PorterStemmer()
stems = [porter.stem(token) for token in tokens]
stems
```
To compare these stems with lemmas, we download the WordNet lemmatizer and use it:
```
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(token) for token in tokens]
lemmas
```
What do you think about the results? Perhaps surprisingly, the lemmatizer seems to have performed more poorly than the stemmer since `frothed` and `darting` have not been reduced to `froth` and `dart`.
The different rules used to stem and lemmatize words are called _algorithms_ and they can result in different stems and lemmas. If the precise details of this are important to your research, you should compare the results of the various algorithms. Stemmers and lemmatizers are also available in many languages, not just English.
---
### Going Further: Improving Lemmatization with Part-of-Speech (POS) Tagging
To improve the lemmatizer's performance we can tell it which _part of speech_ each word is, which is known as **part-of-speech tagging (POS tagging)**. A part of speech is the role a word plays in the sentence, e.g. verb, noun, adjective, etc.
NLTK has a POS tagger so let's download it:
```
nltk.download('averaged_perceptron_tagger')
# Generate the POS tags for each token
tags = nltk.pos_tag(tokens)
tags
```
These tags that NLTK generates are from the [Penn Treebank II tag set](https://www.clips.uantwerpen.be/pages/MBSP-tags). For example, now we know that `frothed` is a 'verb, past participle' (VBN).
Unfortunately, the NLTK lemmatizer accepts WordNet tags (`ADJ, ADV, NOUN, VERB = 'a', 'r', 'n', 'v'`) instead! In theory, at least, if we pass the tagging information to the lemmatizer, the results are better.
```
# Mapping of tokens to WordNet POS tags
tags = [('All', 'n'),
('about', 'n'),
('us', 'n'),
('on', 'n'),
('the', 'n'),
('sunlit', 'a'),
('slopes', 'v'),
('frothed', 'v'),
('and', 'n'),
('swayed', 'v'),
('the', 'n'),
('darting', 'a'),
('shrubs', 'n')]
lemmas = [lemmatizer.lemmatize(*tag) for tag in tags]
lemmas
```
Now `frothed` has been reduced to `froth`. In practice, however, we may wish to [experiment](https://www.machinelearningplus.com/nlp/lemmatization-examples-python/) with other lemmatizers to get the best results. The [SpaCy](https://spacy.io/) Python library has an excellent alternative lemmatizer, for example.
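Rather than writing the token-to-tag mapping out by hand, a small helper can convert the Penn Treebank tags produced by `nltk.pos_tag` into the WordNet tags the lemmatizer expects. This is just a sketch: it inspects only the first letter of the Penn tag, and falls back to noun for everything else, which is also NLTK's default behaviour.

```python
def penn_to_wordnet(penn_tag):
    """Map a Penn Treebank POS tag to a WordNet POS tag ('a', 'v', 'n' or 'r')."""
    mapping = {'J': 'a',   # adjectives: JJ, JJR, JJS
               'V': 'v',   # verbs: VB, VBD, VBG, VBN, VBP, VBZ
               'N': 'n',   # nouns: NN, NNS, NNP, NNPS
               'R': 'r'}   # adverbs: RB, RBR, RBS
    return mapping.get(penn_tag[0], 'n')  # default to noun, as NLTK does

# Combined with the POS tagger from earlier:
# lemmas = [lemmatizer.lemmatize(token, penn_to_wordnet(tag))
#           for token, tag in nltk.pos_tag(tokens)]
```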
---
---
### Going Further: Beyond NLTK to SpaCy
NLTK was the first open-source Python library for Natural Language Processing (NLP), originally released in 2001, and it is still a valuable tool for teaching and research. Much of the literature uses NLTK code in its examples, which is why I chose to write this course using NLTK. As you may deduce from the parts-of-speech tagging example (above), NLTK does have its limitations though.
In many ways NLTK has been overtaken in efficiency and ease of use by other, more modern libraries, such as [SpaCy](https://spacy.io/). SpaCy is designed to use less computer memory and split workloads across multiple processor cores (or even computers) so that it can handle very large corpora easily. It also has excellent documentation. If you are serious about text-mining with Python for a large research dataset, I recommend that you try SpaCy. If you have understood the text-mining principles we have covered with NLTK, you will have no trouble using SpaCy as well.
---
---
---
## Gensim Python Library for Topic Modelling
[Gensim](https://radimrehurek.com/gensim/) is an open-source library that specialises in topic modelling. It is powerful, easy to use and is designed to work with very large corpora. (Another Python library, [scikit-learn](https://scikit-learn.org), also has topic modelling, but we won't cover that here.)
### Collecting the Example Corpus: US Presidential Inaugural Addresses
First, we are going to load a corpus of speeches `nltk.corpus.inaugural` that comes packaged into NLTK. This is the C-Span Inaugural Address Corpus (public domain) that contains the inaugural address of every US president from 1789–2009.
```
import nltk
nltk.download('inaugural')
inaugural = nltk.corpus.inaugural
```
To get an idea of what is inside, we can list the files:
```
files = inaugural.fileids()
files[0:10]
```
And examine the first few words of each file:
```
for file in files[0:10]:
print(inaugural.words(file))
```
---
#### Going Further: Corpora for Learning and Practicing Text-Mining
It is difficult to source pre-prepared corpora for learning and practicing text-mining. The documents must be good quality, easily available and distributed with a license that allows text-mining. NLTK comes with a number of corpora you can download from [`nltk_data`](http://www.nltk.org/nltk_data/) but these are quite old and limited in scope. It's worth searching around for [lists of corpora](https://nlpforhackers.io/corpora/) but bear in mind you must determine the true source and licensing of any corpus for yourself.
---
### Pre-Processing Text in Gensim
Before we can start to do topic modelling we must — of course! — clean and prepare the text by tokenising, removing stopwords, stemming, and so on. We could do this with NLTK, as we have learnt, but Gensim can do that for us too.
The defaults of `preprocess_string()` and `preprocess_documents()` use the following _filters_:
* Strip any HTML or XML tags
* Replace punctuation characters with spaces
* Remove repeating whitespace characters and turn tabs and line breaks into spaces
* Remove digits
* Remove stopwords
* Remove words with length less than 3 characters
* Lowercase
* Stem the words using a Porter Stemmer
Using Gensim, we will preprocess just the _first_ file in the corpus as an example:
```
import gensim
from gensim.parsing.preprocessing import *
washington = files[0]
text = inaugural.raw(washington)
tokens = preprocess_string(text)
tokens[0:10]
```
Hmm, what has happened here to our tokens? 😕
The Porter Stemmer that comes with Gensim does not give us real words, which will make our topics less readable.
We can do something about this, but the code is a bit more advanced. Feel free to skip over the next section and start reading again at 'Pre-Processing the Corpus and Saving to File'.
---
#### Going Further: Using SpaCy's Lemmatizer to Get Real Words
In order to lemmatize the words instead, we have to specify a _list of filters_ that we want `preprocess_string()` to apply.
Before that we will import an alternative lemmatizer from the [SpaCy](https://spacy.io/) library, as it is better by default than the NLTK one.
```
!spacy download en
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en.lemmatizer import LOOKUP
lemmatize = Lemmatizer(lookup=LOOKUP).lookup
lemmatize('swayed')
```
(👆👆👆 All you need to understand here is that we are using SpaCy's lemmatizer rather than NLTK's. If you don't understand the code, you can skip over it and continue.)
Now we apply a list of filters, which are in fact the same as the defaults, except with the string method `lower()` and without the Gensim stemmer:
```
filters = [strip_tags,
strip_punctuation,
strip_multiple_whitespaces,
strip_numeric,
remove_stopwords,
strip_short,
str.lower]
# Pre-process the tokens with the filters
tokens = preprocess_string(text, filters=filters)
# Lemmatize the filtered tokens with SpaCy's lemmatizer
lemmas = [lemmatize(token) for token in tokens]
lemmas[0:10]
```
Now we have real words for the tokens, instead of awkward stems. We'll use these lemmatized tokens for our topic modelling example.
---
---
---
## Pre-Processing the Corpus and Saving to File
### Reading Files and Writing to File
Last workshop I glossed over how we save to text files and read them back in again. I offered this guide, [Reading and Writing Files in Python](https://realpython.com/read-write-files-python/#opening-and-closing-a-file-in-python), which is an excellent in-depth look that I recommend.
In brief, in order to open files we use the `open()` function and the keyword `with`.
For reading:
`with open(file, 'r') as reader:`
For writing:
`with open(file, 'w') as writer:`
Then whatever you put inside the code block will run with the file open and ready. Once your code has finished running, the file is safely closed.
We can create and then write a text file with the `write()` method:
```
with open('blackhole.txt', 'w') as writer:
writer.write('At the center of a black hole lies a singularity.')
```
> Now go to the Jupyter notebook folder `workshop-2-topic-modelling`, open the newly created text file `blackhole.txt` and inspect its contents!
We can read this file back in to a string with the `read()` method:
```
with open('blackhole.txt', 'r') as reader:
    sentence = reader.read()
sentence
```
To write line by line (instead of the whole file at once) use `writelines()`, and likewise, to read one line at a time use `readlines()`. For all the details, see the tutorial linked above.
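For instance (using a throwaway example file, `planets.txt`, made up for illustration):

```python
# writelines() does NOT add newline characters -- include them yourself
lines_out = ['Mercury\n', 'Venus\n', 'Earth\n']
with open('planets.txt', 'w') as writer:
    writer.writelines(lines_out)

# readlines() returns a list of lines, keeping the trailing newlines
with open('planets.txt', 'r') as reader:
    lines_in = reader.readlines()

lines_in  # ['Mercury\n', 'Venus\n', 'Earth\n']
```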
### Pre-Processing Speeches and Saving Tokens to Text File
We can now put everything we have learnt together to pre-process our entire corpus of speeches, and save the clean lemma tokens to text files, ready to be loaded in the next notebook `3-topic-modelling-and-visualising.ipynb`.
Let's step through this code now:
1. Create a location for the `data/inaugural-test` folder where we want to save the files:
```
from pathlib import Path
location = Path('data', 'inaugural-test')
```
2. Loop over all the files in turn, using the Gensim `preprocess_string` function to prepare them, and save them as individual files:
```
# `files` holds the list of speech filenames prepared earlier (e.g. inaugural.fileids())
for file in files:
    print(f'Processing file: {file}')
    text = inaugural.raw(file)
    tokens = preprocess_string(text, filters=filters)
    lemmas = [lemmatize(token) for token in tokens]
    with open(location / file, 'w') as writer:
        writer.write(' '.join(lemmas))
```
> Feel free to inspect these files now in the folder `data/inaugural-test`. If for some reason you have changed the code and it's not worked properly, don't worry! I've created a proper set to use in `data/inaugural`.
---
---
## Summary
In this notebook we have covered:
* Stemming and lemmatization
* Gensim Python library for topic modelling
* Pre-processing the text with Gensim
* Reading from and writing to text files
👌👌👌
In the next notebook `3-topic-modelling-and-visualising.ipynb` we will walk through a full example of topic modelling using Gensim and the speeches we have prepared.
```
#@title ##### License { display-mode: "form" }
# Copyright 2019 DeepMind Technologies Ltd. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# OpenSpiel
* This Colab gets you started with installing OpenSpiel and its dependencies.
* OpenSpiel is a framework for reinforcement learning in games.
* The instructions are adapted from [here](https://github.com/deepmind/open_spiel/blob/master/docs/install.md).
## Install dependencies and clone repository
Let's first check the Python version; make sure to use a Python 3 runtime.
```
!python --version
```
Clone [open_spiel](https://github.com/deepmind/open_spiel) repository and pull in source dependencies: [pybind11](https://github.com/pybind/pybind11), [DDS](https://github.com/jblespiau/dds), [abseil](https://github.com/abseil)
```
INSTALL_DIR = '/usr/local/open_spiel'
!git config --global advice.detachedHead false
!git clone https://github.com/deepmind/open_spiel $INSTALL_DIR
!git clone -b 'v2.2.4' --single-branch --depth 1 https://github.com/pybind/pybind11.git $INSTALL_DIR/pybind11
!git clone -b 'develop' --single-branch --depth 1 https://github.com/jblespiau/dds.git $INSTALL_DIR/open_spiel/games/bridge/double_dummy_solver
!git clone -b '20200225.1' --single-branch --depth 1 https://github.com/abseil/abseil-cpp.git $INSTALL_DIR/open_spiel/abseil-cpp
#@title Optional dependencies: { display-mode: "both" }
BUILD_WITH_HANABI = False #@param {type:"boolean"}
BUILD_WITH_ACPC = False #@param {type:"boolean"}
if BUILD_WITH_HANABI:
  %env BUILD_WITH_HANABI=ON
  !git clone -b 'master' --single-branch --depth 15 https://github.com/deepmind/hanabi-learning-environment.git $INSTALL_DIR/open_spiel/games/hanabi/hanabi-learning-environment
  !pushd $INSTALL_DIR/open_spiel/games/hanabi/hanabi-learning-environment && git checkout 'b31c973' && popd
if BUILD_WITH_ACPC:
  %env BUILD_WITH_ACPC=ON
  !git clone -b 'master' --single-branch --depth 1 https://github.com/jblespiau/project_acpc_server.git $INSTALL_DIR/open_spiel/games/universal_poker/acpc
```
Installing Python requirements:
```
# we keep some baked-in Colab dependencies:
!sed -e '/IPython/d' -e '/pip/d' -e '/matplotlib/d' $INSTALL_DIR/requirements.txt >> /tmp/requirements.txt
!pip3 install -r /tmp/requirements.txt
```
## Build `open_spiel`
```
!apt-get install clang-9
!mkdir -p $INSTALL_DIR/build
%cd $INSTALL_DIR/build
!cmake -DPython3_EXECUTABLE=`which python3` -DCMAKE_CXX_COMPILER=`which clang++-9` ../open_spiel
!make -j$(nproc)
%cd /content
```
## Set `PYTHONPATH`
```
import sys
import os
sys.path.append(INSTALL_DIR)
sys.path.append(os.path.join(INSTALL_DIR, 'build/python')) # for pyspiel.so
# verify that Python can find the open_spiel & pyspiel modules
import importlib
assert importlib.util.find_spec("open_spiel") is not None
assert importlib.util.find_spec("pyspiel") is not None
```
## (optional) Run `CMake` tests
```
# run_python_test calls the python interpreter directly thus setting PYTHONPATH
%set_env PYTHONPATH=/env/python:$INSTALL_DIR:$INSTALL_DIR/build/python
!pushd $INSTALL_DIR/build && ctest -j$(nproc) --output-on-failure ../open_spiel && popd
```
# It's play time!
```
import numpy as np
import pyspiel
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
  state.apply_action(np.random.choice(state.legal_actions()))
  print(str(state) + '\n')
```
# Using a feature representation learned for signature images
This notebook contains code to pre-process signature images and to obtain feature-vectors using the learned feature representation on the GPDS dataset
```
import torch
# Functions to load and pre-process the images:
from skimage.io import imread
from skimage import img_as_ubyte
from sigver.preprocessing.normalize import (
normalize_image, resize_image,
crop_center, preprocess_signature)
# Functions to load the CNN model
from sigver.featurelearning.models import SigNet
# Functions for plotting:
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['image.cmap'] = 'Greys'
```
## Pre-processing a single image
```
def load_signature(path):
    return img_as_ubyte(imread(path, as_gray=True))
original = load_signature('data/some_signature.png')
# Manually normalizing the image following the steps provided in the paper.
# These steps are also implemented in sigver.preprocessing.normalize.preprocess_signature
normalized = 255 - normalize_image(original, (952, 1360))
resized = resize_image(normalized, (170, 242))
cropped = crop_center(resized, (150,220))
# Visualizing the intermediate steps
f, ax = plt.subplots(4,1, figsize=(6,15))
ax[0].imshow(original, cmap='Greys_r')
ax[1].imshow(normalized)
ax[2].imshow(resized)
ax[3].imshow(cropped)
ax[0].set_title('Original')
ax[1].set_title('Background removed/centered')
ax[2].set_title('Resized')
ax[3].set_title('Cropped center of the image')
```
## Processing multiple images and obtaining feature vectors
```
user1_sigs = [load_signature('data/a{}.png'.format(i)) for i in [1,2]]
user2_sigs = [load_signature('data/b{}.png'.format(i)) for i in [1,2]]
canvas_size = (952, 1360)
processed_user1_sigs = torch.tensor([preprocess_signature(sig, canvas_size) for sig in user1_sigs])
processed_user2_sigs = torch.tensor([preprocess_signature(sig, canvas_size) for sig in user2_sigs])
# Shows pre-processed samples of the two users
f, ax = plt.subplots(2,2, figsize=(10,6))
ax[0,0].imshow(processed_user1_sigs[0])
ax[0,1].imshow(processed_user1_sigs[1])
ax[1,0].imshow(processed_user2_sigs[0])
ax[1,1].imshow(processed_user2_sigs[1])
# Inputs need to have 4 dimensions (batch x channels x height x width), and also be between [0, 1]
processed_user1_sigs = processed_user1_sigs.view(-1, 1, 150, 220).float().div(255)
processed_user2_sigs = processed_user2_sigs.view(-1, 1, 150, 220).float().div(255)
```
### Using the CNN to obtain the feature representations
```
# If GPU is available, use it:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device: {}'.format(device))
# Load the model
state_dict, _, _ = torch.load('models/signet.pth')
base_model = SigNet().to(device).eval()
base_model.load_state_dict(state_dict)
# Obtain the features. Note that you can process multiple images at the same time
with torch.no_grad():
user1_features = base_model(processed_user1_sigs.to(device))
user2_features = base_model(processed_user2_sigs.to(device))
```
### Inspecting the learned features
The feature vectors have size 2048:
```
user1_features.shape
print('Euclidean distance between signatures from the same user')
print(torch.norm(user1_features[0] - user1_features[1]))
print(torch.norm(user2_features[0] - user2_features[1]))
print('Euclidean distance between signatures from different users')
dists = [torch.norm(u1 - u2).item() for u1 in user1_features for u2 in user2_features]
print(dists)
# Other models:
# model_weight_path = 'models/signetf_lambda0.95.pkl'
```
PyGSLIB
========
PPplot
---------------
```
#general imports
import pygslib
```
Getting the data ready for work
---------
If the data is in GSLIB format you can use the function `pygslib.gslib.read_gslib_file(filename)` to import the data into a Pandas DataFrame.
```
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat')
true= pygslib.gslib.read_gslib_file('../data/true.dat')
true['Declustering Weight'] = 1
```
## gslib probplot with bokeh
```
parameters_probplt = {
# gslib parameters for histogram calculation
'iwt' : 0, # input boolean (Optional: set True). Use weight variable?
'va' : mydata['Primary'], # input rank-1 array('d') with bounds (nd). Variable
'wt' : mydata['Declustering Weight'], # input rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight.
# visual parameters for figure (if a new figure is created)
'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Prob plot', # string (Optional, "Histogram"). Figure title
'xlabel' : 'Primary', # string (Optional, default "Z"). X axis label
'ylabel' : 'P[Z<c]', # string (Optional, default "f(%)"). Y axis label
'xlog' : 1, # boolean (Optional, default True). If true plot X axis in log scale.
'ylog' : 1, # boolean (Optional, default True). If true plot Y axis in log scale.
# visual parameter for the probplt
'style' : 'cross', # string with valid bokeh chart type
'color' : 'blue', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Non declustered', # string (Optional, default "NA").
'alpha' : 1, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
'lwidth': 0, # float (Optional, default 1). Line width
# legend
'legendloc': 'bottom_right'} # string (Optional, default 'top_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left or center
parameters_probplt_dcl = parameters_probplt.copy()
parameters_probplt_dcl['iwt']=1
parameters_probplt_dcl['legend']='Declustered'
parameters_probplt_dcl['color'] = 'red'
parameters_probplt_true = parameters_probplt.copy()
parameters_probplt_true['va'] = true['Primary']
parameters_probplt_true['wt'] = true['Declustering Weight']
parameters_probplt_true['iwt']=0
parameters_probplt_true['legend']='True'
parameters_probplt_true['color'] = 'black'
parameters_probplt_true['style'] = 'line'
parameters_probplt_true['lwidth'] = 1
results, fig = pygslib.plothtml.probplt(parameters_probplt)
# add declustered to the plot
parameters_probplt_dcl['figure']= fig
results, fig = pygslib.plothtml.probplt(parameters_probplt_dcl)
# add true CDF to the plot
parameters_probplt_true['figure']=parameters_probplt_dcl['figure']
results, fig = pygslib.plothtml.probplt(parameters_probplt_true)
# show the plot
pygslib.plothtml.show(fig)
```
In order to get the best use out of the Panel user guide, it is important to have a grasp of some core concepts, ideas, and terminology.
### Components
Panel provides three main types of component: ``Pane``, ``Widget``, and ``Panel``. These components are introduced and explained in the [Components user guide](./Components.ipynb), but briefly:
* **``Pane``**: A ``Pane`` wraps a user supplied object of almost any type and turns it into a renderable view. When the wrapped ``object`` or any parameter changes, a pane will update the view accordingly.
* **``Widget``**: A ``Widget`` is a control component that allows users to provide input to your app or dashboard, typically by clicking or editing objects in a browser, but also controllable from within Python.
* **``Panel``**: A ``Panel`` is a hierarchical container to lay out multiple components (panes, widgets, or other ``Panel``s) into an arrangement that forms an app or dashboard.
---
### APIs
Panel is a very flexible system that supports many different usage patterns, via multiple application programming interfaces (APIs). Each API has its own advantages and disadvantages, and is suitable for different tasks and ways of working. The [API user guide](APIs.ipynb) goes through each of the APIs in detail, comparing their pros and cons and providing recommendations on when to use each.
#### [``interact``](./Interact.ipynb)
The ``interact`` API will be familiar to ipywidgets users; it provides a very simple API to define an interactive view of the results of a Python function. This approach works by declaring functions whose arguments will be inspected to infer a set of widgets. Changing any of the resulting widgets causes the function to be re-run, updating the displayed output. This approach makes it extremely easy to get started and also easy to rearrange and reconfigure the resulting plots and widgets, but it may not be suited to more complex scenarios. See the [Interact user guide](./Interact.ipynb) for more detail.
#### Reactive functions
Defining a reactive function using the ``pn.depends`` decorator provides an explicit way to link specific inputs (such as the value of a widget) to some computation in a function, reactively updating the output of the function whenever the parameter changes. This approach is a highly convenient, intuitive, and flexible way of building interactive UIs.
#### [``Param``](./Param.ipynb)
``Panel`` itself is built on the [param](https://param.pyviz.org) library, which allows capturing parameters and their allowable values entirely independently of any GUI code. By using Param to declare the parameters along with methods that depend on those parameters, even very complex GUIs can be encapsulated in a tidy, well-organized, maintainable, and declarative way. Panel will automatically convert parameter definition to corresponding widgets, allowing the same codebase to support command-line, batch, server, and GUI usage. This API requires the use of the param library to express the inputs and encapsulate the computations to be performed, but once implemented this approach leads to flexible, robust, and well encapsulated code. See the Panel [Param user guide](./Param.ipynb) for more detail.
#### [Callback API](./Widgets.ipynb)
At the lowest level, you can build interactive applications using ``Pane``, ``Widget``, and ``Panel`` components and connect them using explicit callbacks. Registering callbacks on components to modify other components provides full flexibility in building interactive features, but once you have defined numerous callbacks it can be very difficult to track how they all interact. This approach affords the most amount of flexibility but can easily grow in complexity, and is not recommended as a starting point for most users. That said, it is the interface that all the other APIs are built on, so it is powerful and is a good approach for building entirely new ways of working with Panel, or when you need some specific behavior not covered by the other APIs. See the [Widgets user guide](./Widgets.ipynb) and [Links user guide](./Links.ipynb) for more detail.
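Conceptually, the explicit-callback style amounts to registering listeners on components and mutating other components inside those listeners. As an illustration only (the `Widget` class below is a made-up stand-in, not Panel's actual API), the wiring pattern looks like:

```python
class Widget:
    """Minimal stand-in for a widget: a value plus registered change callbacks.

    This is NOT the Panel API; it is only a plain-Python sketch of the
    explicit-callback wiring that Panel's higher-level APIs manage for you.
    """
    def __init__(self, value=None):
        self.value = value
        self._callbacks = []

    def watch(self, callback):
        # register a callback to run whenever the value changes
        self._callbacks.append(callback)

    def set_value(self, new):
        old, self.value = self.value, new
        for cb in self._callbacks:
            cb(old, new)

slider = Widget(value=1)
label = Widget(value='')
# explicit callback keeping the label in sync with the slider
slider.watch(lambda old, new: label.set_value(f'value is {new}'))
slider.set_value(42)
label.value  # 'value is 42'
```

With a handful of widgets this is manageable; with dozens of such callbacks, tracking how they interact is what makes this the hardest API to maintain.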
---
### Display and rendering
Throughout this user guide we will cover a number of ways to display Panel objects, including display in a Jupyter notebook, in a standalone server, by saving and embedding, and more. For a detailed description see the [Deploy and Export user guide](./Deploy_and_Export.ipynb).
#### Notebook
All of Panel's documentation is built from Jupyter notebooks that you can explore at your own pace. Panel does not require Jupyter in any way, but it has extensive Jupyter support:
##### ``pn.extension()``
> The Panel extension loads BokehJS, any custom models required, and optionally additional custom JS and CSS in Jupyter notebook environments. It also allows passing any [`pn.config`](#pn.config) variables
##### ``pn.ipywidget()``
> Given a Panel model `pn.ipywidget` will return an ipywidget model that renders the object in the notebook. This can be useful for including a Panel widget in an ipywidget layout and deploying Panel objects using [Voilà](https://github.com/voila-dashboards/voila/).
##### Rich display
Jupyter notebooks allow the final value of a notebook cell to display itself, using a mechanism called [rich display](https://ipython.readthedocs.io/en/stable/config/integrating.html#rich-display). As long as `pn.extension()` has been called in a notebook, all Panel components (widgets, panes, and panels) will display themselves when placed on the last line of a notebook cell.
##### ``.app()``
> The ``.app()`` method present on all viewable Panel objects allows displaying a Panel server process inline in a notebook, which can be useful for debugging a standalone server interactively.
#### Python REPL
Even when working in a Python REPL that does not support rich-media output (e.g. in a text-based terminal), a panel can still be launched in a browser tab:
##### ``.show()``
> The ``.show()`` method is present on all viewable Panel objects and starts a server instance then opens a browser tab to point to it. To support working remotely, a specific port on which to launch the app can be supplied.
##### ``pn.serve()``
> Similar to ``.show()`` on a Panel object, but allows serving one or more Panel apps on a single server. Supplying a dictionary mapping from URL slugs to the individual Panel objects being served allows launching multiple apps at once.
#### Command line
Panel mirrors Bokeh's command-line interface for launching and exporting apps and dashboards:
##### ``panel serve app.py``
> The ``panel serve`` command allows interactively displaying and deploying Panel web-server apps from the command line.
##### ``panel serve app.ipynb``
> ``panel serve`` also supports Jupyter notebook files, where it will serve any Panel objects that were marked `.servable()` in a notebook cell. This feature allows you to maintain a notebook for exploration and analysis that also provides certain elements meant for broader consumption as a standalone app.
#### Export
When not working interactively, a Panel object can be exported to a static file.
##### ``.save()`` to PNG
> The ``.save`` method present on all viewable Panel objects allows saving the visual representation of a Panel object to a PNG file.
##### ``.save()`` to HTML
> ``.save`` to HTML allows sharing the full Panel object, including any static links ("jslink"s) between widgets and other components, but other features that depend on having a live running Python process will not work (as for many of the Panel webpages).
#### Embedding
Panel objects can be serialized into a static JSON format that captures the widget state space and the corresponding plots or other viewable items for each combination of widget values, allowing fully usable Panel objects to be embedded into external HTML files or emails. For simple cases, this approach allows distributing or publishing Panel apps that no longer require a Python server in any way. Embedding can be enabled when using ``.save()``, using the ``.embed()`` method or globally using [Python and Environment variables](#Python and Environment variables) on ``pn.config``.
##### ``.embed()``
> The ``.embed()`` method embeds the contents of the object it is being called on in the notebook.
___
### Linking and callbacks
One of the most important aspects of a general app and dashboarding framework is the ability to link different components in flexible ways, scheduling callbacks in response to internal and external events. Panel provides convenient lower and higher-level APIs to achieve both. For more details, see the [Links](./Links.ipynb) user guide.
##### ``.param.watch``
> The ``.param.watch`` method allows listening to parameter changes on an object using Python callbacks. It is the lowest level API and provides the most amount of control, but higher-level APIs are more appropriate for most users and most use cases.
##### ``.link()``
> The Python-based ``.link()`` method present on all viewable Panel objects is a convenient API to link the parameters of two objects together, uni- or bi-directionally.
##### ``.jscallback``
> The Javascript-based ``.jscallback()`` method allows defining arbitrary Javascript code to be executed when some property changes or event is triggered.
##### ``.jslink()``
> The JavaScript-based ``.jslink()`` method directly links properties of the underlying Bokeh models, making it possible to define interactivity that works even without a running Python server.
___
### State and configuration
Panel provides top-level objects to hold current state and control high-level configuration variables.
##### `pn.config`
The `pn.config` object allows setting various configuration variables; these variables can also be set as environment variables or passed through the [`pn.extension`](#pn.extension()):
##### Python only
> - `css_files`: External CSS files to load.
> - `js_files`: External JS files to load. Dictionary should map from exported name to the URL of the JS file.
> - `raw_css`: List of raw CSS strings to load.
> - `safe_embed`: Whether to record all set events when embedding rather than just those that are changed
> - `sizing_mode`: Specify the default sizing mode behavior of panels.
#### Python and Environment variables
> - `comms` (`PANEL_COMMS`): Whether to render output in Jupyter with the default Jupyter extension or use the `jupyter_bokeh` ipywidget model.
> - `console_output` (`PANEL_CONSOLE_OUTPUT`): How to log errors and stdout output triggered by callbacks from Javascript in the notebook. Options include `'accumulate'`, `'replace'` and `'disable'`.
> - `embed` (`PANEL_EMBED`): Whether plot data will be [embedded](./Deploy_and_Export.ipynb#Embedding).
> - `embed_json` (`PANEL_EMBED_JSON`): Whether to save embedded state to json files.
> - `embed_json_prefix` (`PANEL_EMBED_JSON_PREFIX`): Prefix for randomly generated json directories.
> - `embed_load_path` (`PANEL_EMBED_LOAD_PATH`): Where to load json files for embedded state.
> - `embed_save_path` (`PANEL_EMBED_SAVE_PATH`): Where to save json files for embedded state.
> - `inline` (`PANEL_INLINE`): Whether to inline JS and CSS resources. If disabled, resources are loaded from CDN if one is available.
##### `pn.state`
The `pn.state` object makes various global state available and provides methods to manage that state:
> - `cache`: A global cache which can be used to share data between different processes.
> - `cookies`: HTTP request cookies for the current session.
> - `curdoc`: When running a server session this property holds the current bokeh Document.
> - `location`: In a server context this provides read and write access to the URL:
* `hash`: hash in window.location e.g. '#interact'
* `pathname`: pathname in window.location e.g. '/user_guide/Interact.html'
* `search`: search in window.location e.g. '?color=blue'
* `reload`: Reloads the page when the location is updated.
* `href` (readonly): The full url, e.g. 'https://localhost:80?color=blue#interact'
* `hostname` (readonly): hostname in window.location e.g. 'panel.holoviz.org'
* `protocol` (readonly): protocol in window.location e.g. 'http:' or 'https:'
* `port` (readonly): port in window.location e.g. '80'
> - `headers`: HTTP request headers for the current session.
> - `session_args`: When running a server session this returns the request arguments.
> - `webdriver`: Caches the current webdriver to speed up export of bokeh models to PNGs.
> #### Methods
> - `kill_all_servers`: Stops all running server sessions.
# Lab 4 Accuracy of Quantum Phase Estimation
Prerequisite
- [Ch.3.5 Quantum Fourier Transform](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html)
- [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
Other relevant materials
- [QCQI] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information
```
from qiskit import *
import numpy as np
from qiskit.visualization import plot_histogram
import qiskit.tools.jupyter
from qiskit.tools.monitor import job_monitor
from qiskit.ignis.mitigation.measurement import *
import matplotlib.pyplot as plt
```
<h2 style="font-size:24px;">Part 1: Performance of Quantum Phase Estimation</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px;
-moz-border-radius: 5px;">
<p style="background: #800080;
border-radius: 5px 5px 0px 0px;
padding: 10px 0px 10px 10px;
font-size:18px;
color:white;
"><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px;
font-size:16px;">Investigate the number of counting qubits required to achieve a desired accuracy of phase estimation with high probability.</p>
</div>
The accuracy of the estimated value through Quantum Phase Estimation (QPE) and its probability of success depend on the number of qubits employed in QPE circuits. Therefore, one might want to know the necessary number of qubits to achieve the targeted level of QPE performance, especially when the phase that needs to be determined cannot be decomposed in a finite bit binary expansion.
In Part 1 of this lab, we examine the number of qubits required to accomplish the desired accuracy and the probability of success in determining the phase through QPE.
<h3 style="font-size: 20px">1. Find the probability of obtaining the estimation for a phase value accurate to $2^{-2}$ successfully with four counting qubits.</h3>
<h4 style="font-size: 17px">📓Step A. Set up the QPE circuit with four counting qubits and save the circuit to the variable 'qc4'. Execute 'qc4' on a qasm simulator. Plot the histogram of the result.</h4>
Check the QPE chapter in Qiskit textbook ( go to `3. Example: Getting More Precision` section [here](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html) ) for the circuit.
```
def qft(n):
    """Creates an n-qubit QFT circuit"""
    circuit = QuantumCircuit(n)

    def swap_registers(circuit, n):
        for qubit in range(n//2):
            circuit.swap(qubit, n-qubit-1)
        return circuit

    def qft_rotations(circuit, n):
        """Performs qft on the first n qubits in circuit (without swaps)"""
        if n == 0:
            return circuit
        n -= 1
        circuit.h(n)
        for qubit in range(n):
            circuit.cp(np.pi/2**(n-qubit), qubit, n)
        qft_rotations(circuit, n)

    qft_rotations(circuit, n)
    swap_registers(circuit, n)
    return circuit
## Start your code to create the circuit, qc4
qc4.draw()
## Run this cell to simulate 'qc4' and to plot the histogram of the result
sim = Aer.get_backend('qasm_simulator')
shots = 20000
count_qc4 = execute(qc4, sim, shots=shots).result().get_counts()
plot_histogram(count_qc4, figsize=(9,5))
```
Having performed `Step A` successfully, you will have obtained a distribution similar to the one shown below with the highest probability at `0101` which corresponds to the estimated $\phi$ value, `0.3125`.

Since the number of counting qubits used for the circuit is four, the best estimate should be accurate to $\delta = 2^{-4} = 0.0625$. However, because $\phi = 1/3$ cannot be expressed in a finite number of bits, there are multiple possible outcomes, and the estimate produced by QPE is not always bounded by this accuracy.
Running the following cell shows the same histogram but with all possible estimated $\phi$ values on the x-axis.
```
phi_est = np.array([round(int(key, 2)/2**t,3) for key in list(count_qc4.keys())])
key_new = list(map(str, phi_est))
count_new = dict(zip(key_new, count_qc4.values()))
plot_histogram(count_new, figsize=(9,5))
```
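The distribution above can also be computed analytically, with no quantum simulator: with $t$ counting qubits, textbook QPE yields outcome $m$ with probability $\left|\frac{1}{2^t}\sum_{k=0}^{2^t-1} e^{2\pi i k(\phi - m/2^t)}\right|^2$. A small NumPy sketch (the helper name `qpe_distribution` is ours, not part of Qiskit):

```python
import numpy as np

def qpe_distribution(phi, t):
    """Analytic outcome probabilities of QPE with t counting qubits."""
    N = 2**t
    k = np.arange(N)
    # amplitude of outcome m is (1/N) * sum_k exp(2*pi*i*k*(phi - m/N))
    amps = np.array([np.exp(2j*np.pi*k*(phi - m/N)).sum() / N for m in range(N)])
    return np.abs(amps)**2

p = qpe_distribution(1/3, 4)
int(np.argmax(p))  # 5, i.e. '0101' -> 5/16 = 0.3125, matching the histogram
```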
**Suppose the outcome of the final measurement is $m$, and let $b$ be the best estimate, which is `0.3125` in this case.**
<h4 style="font-size: 17px">📓Step B. Find $e$, the maximum integer difference from the best estimate <code>0101</code>, such that every outcome $m$ with $|m - b| \leq \frac{e}{2^{t}}$ approximates $\phi$ to an accuracy $2^{-2}$. </h4>
In this case, the values of $t$ and $b$ are $4$ and $0.3125$, respectively.
For example, with $e = 1$, the outcomes considered are `0100`, `0101`, `0110`, which correspond to the values of $m$: $0.25,~0.312,~0.375$, respectively, and all of them approximate the value $\frac{1}{3}$ to an accuracy $2^{-2}$.
```
## Your code goes here
```
<h4 style="font-size: 17px">📓Step C. Compute the probability of obtaining an approximation correct to an accuracy $2^{-2}$. Verify that the computed probability value is larger or equal to $1- \frac{1}{2(2^{(t-n)}-2)}$ where $t$ is the number of counting bits and the $2^{-n}$ is the desired accuracy. </h4>
Now it is easy to evaluate the probability of success from the histogram, since all the outcomes that approximate $\phi$ to the accuracy $2^{-2}$ can be found from the maximum difference $e$ from the best estimate.
```
## Your code goes here
```
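As a classical cross-check of this step (a sketch using the analytic QPE outcome distribution, not the intended lab solution), we can sum the probabilities of the outcomes within $e = 2^{t-n}-1 = 3$ of the best estimate and compare against the bound $1-\frac{1}{2(2^{t-n}-2)} = 0.75$:

```python
import numpy as np

t, n = 4, 2
N = 2**t
k = np.arange(N)
# analytic QPE outcome probabilities for phi = 1/3 (no simulator needed)
p = np.abs(np.array([np.exp(2j*np.pi*k*(1/3 - m/N)).sum() / N
                     for m in range(N)]))**2
b = 5                              # best estimate '0101'
e = 2**(t-n) - 1                   # = 3, max integer offset for accuracy 2^-2
p_success = p[b-e:b+e+1].sum()     # sum over outcomes m = 2, ..., 8
bound = 1 - 1/(2*(2**(t-n) - 2))   # = 0.75
round(float(p_success), 3), bound  # (0.964, 0.75)
```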
<h3 style="font-size: 20px">2. Compute the probability of success for the accuracy $2^{-2}$ when the number of counting qubits, $t$, varies from four to nine. Compare your result with the relationship $t=n+\log_2(2+\frac{1}{2\epsilon})$, where $2^{-n}$ is the desired accuracy and $\epsilon$ is one minus the probability of success.</h3>
The following plot shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. See Ch. 5.2.1, Performance and requirements, in `[QCQI]`.
```
y = lambda t, n: 1-1/(2*(2**(t-n)-2))
t_q = np.linspace(3.5, 9.5, 100 )
p_min = y(t_q, 2)
plt.figure(figsize=(7, 5))
plt.plot(t_q, p_min, label='$p_{min}$')
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-2}$')
plt.legend(loc='lower right')
plt.title('Probability of success for different number of counting qubits')
plt.show()
```
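As a quick numeric companion to the relationship plotted above (the helper name `qubits_needed` is ours; the formula is from `[QCQI]`, with the logarithm taken base 2 and rounded up to an integer number of qubits):

```python
import math

def qubits_needed(n, eps):
    """t = n + ceil(log2(2 + 1/(2*eps))): counting qubits for accuracy 2^-n
    with failure probability at most eps (QCQI Ch. 5.2)."""
    return n + math.ceil(math.log2(2 + 1/(2*eps)))

qubits_needed(2, 0.25)  # 4: matches Part 1, where t = 4 gave success >= 0.75
```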
<h4 style="font-size: 17px">📓Step A. Construct a QPE circuit to estimate $\phi$ when $\phi = 1/3$ for each number of counting qubits $t = [4, 5, 6, 7, 8, 9]$. Store all the circuits in a list variable 'circ' to simulate all the circuits at once, as we did in Lab 3. </h4>
```
## Your Code to create the list variable 'circ' goes here
# Run this cell to simulate `circ` and plot the histograms of the results
results = execute(circ, sim, shots=shots).result()
n_circ = len(circ)
counts = [results.get_counts(idx) for idx in range(n_circ)]
fig, ax = plt.subplots(n_circ,1,figsize=(25,40))
for idx in range(n_circ):
    plot_histogram(counts[idx], ax=ax[idx])
plt.tight_layout()
```
<h4 style="font-size: 17px">📓Step B. Determine $e$, the maximum difference in integer from the best estimation, for the different number of counting qubits, $t = [4, 5, 6, 7, 8, 9]$. Verify the relationship $e=2^{t-n}-1$ where $n=2$, since the desired accuracy is $2^{-2}$ in this case. </h4>
```
## Your Code goes here
```
If you successfully calculated $e$ values for all the counting qubits, $t=[4,5,6,7,8,9]$, you will be able to generate the following graph that verifies the relationship $e = 2^{t-2} -1$ with the $e$ values that you computed.

<h4 style="font-size: 17px">📓Step C. Evaluate the probability of success estimating $\phi$ to an accuracy $2^{-2}$ for all the values of $t$, the number of counting qubits. Save the probabilities to the list variable, 'prob_success'. </h4>
```
## Your code to create the list variable, 'prob_success', goes here
```
<h4 style="font-size: 17px">📓Step D. Overlay the results of Step C on the graph that shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. Understand the result. </h4>
```
## Your code goes here
```

Your plot should be similar to the above one.
The line plot in the left panel shows the minimum success probability for estimating $\phi$ within the accuracy $2^{-2}$ as the number of counting qubits varies. The overlaid orange dots are the same quantities obtained from the simulation, confirming that the line plot is a lower bound. The right panel displays the same result, zoomed in by adjusting the y-axis range.
The following graph exhibits the relationship at different accuracy levels. The relationship $t=n+\lceil log_2(2+\frac{1}{2\epsilon})\rceil$ gives the number of counting qubits $t$ needed to estimate $\phi$ to an accuracy $2^{-n}$ with probability of success at least $1-\epsilon$, as we validated above.
```
t = np.linspace(5.1, 10, 100)
prob_success_n = [y(t, n) for n in [2, 3, 4]]
prob_n2, prob_n3, prob_n4 = prob_success_n[0], prob_success_n[1], prob_success_n[2]
plt.figure(figsize=(7, 5))
plt.plot(t, prob_n2, t, prob_n3, t, prob_n4, t, [1]*len(t),'--' )
plt.axis([5, 10, 0.7, 1.05])
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-n}$')
plt.legend(['n = 2', 'n = 3', 'n = 4'], loc='lower right')
plt.grid(True)
```
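As a quick numeric check, the quoted relationship can be evaluated directly — a sketch assuming the logarithm is taken base 2 and rounded up, as in `[QCQI]`:

```python
import math

# Number of counting qubits t needed for accuracy 2^-n with success
# probability at least 1 - eps, per t = n + ceil(log2(2 + 1/(2*eps))).
def counting_qubits(n, eps):
    return n + math.ceil(math.log2(2 + 1 / (2 * eps)))

print(counting_qubits(2, 0.1))    # accuracy 2^-2 with >= 90% success
```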
<h2 style="font-size:24px;">Part 2: QPE on Noisy Quantum System</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px;
-moz-border-radius: 5px;">
<p style="background: #800080;
border-radius: 5px 5px 0px 0px;
padding: 10px 0px 10px 10px;
font-size:18px;
color:white;
"><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px;
font-size:16px;">Run the QPE circuit on a real quantum system to understand the result and limitations when using noisy quantum systems</p>
</div>
The accuracy analysis that we performed in Part 1 would not hold when the QPE circuit is executed on present-day noisy quantum systems. In Part 2, we will obtain QPE results by running the circuit on a backend from the IBM Quantum Experience to examine how noise affects the outcome and learn techniques to reduce its impact.
<h4 style="font-size: 17px">📓Step A. Load your account and select the backend from your provider. </h4>
```
## Your code goes here.
```
<h4 style="font-size: 17px">📓Step B. Generate multiple ( as many as you want ) transpiled circuits of <code>qc4</code> that you set up in Part 1 at the beginning. Choose one with the minimum circuit depth, and the other with the maximum circuit depth.</h4>
Transpile the circuit with the parameter `optimization_level = 3` to reduce the error in the result. As we learned in Lab 1, Qiskit by default uses a stochastic swap mapper to place the needed SWAP gates, so the transpiled circuit varies even under the same runtime settings. Therefore, to obtain a shallower transpiled circuit, and hence a smaller error in the outcome, transpile `qc4` multiple times and choose the one with the minimum circuit depth. Select the one with the maximum circuit depth as well, for comparison purposes.
```
## Your code goes here
```
<h4 style="font-size: 17px">📓Step C. Execute both circuits on the backend that you picked. Plot the histogram for the results and compare them with the simulation result in Part 1.</h4>
```
## Your code goes here
```
The following shows the sample result.

<h4 style="font-size: 17px">Step D. Measurement Error Mitigation </h4>
In the previous step, we utilized our knowledge of the Qiskit transpiler to get the best result. Here, we try to mitigate the errors in the result further through the measurement error mitigation technique that we learned in Lab 3.
<p>📓Construct the circuits to profile the measurement errors of all basis states using the function 'complete_meas_cal'. Obtain the measurement filter object, 'meas_filter', which will be applied to the noisy results to mitigate readout (measurement) error.
```
## Your Code goes here
```
<p>📓Plot the histogram of the results before and after the measurement error mitigation to exhibit the improvement.
```
## Your Code goes here
```
The following plot shows the sample result.

The figure below displays a simulation result together with the sample final results from both the best and worst SWAP-mapping cases after applying the measurement error mitigation. In Lab 3, the major source of error was the measurement itself, so the error mitigation procedure improved the outcomes significantly. For the QPE circuit, however, measurement error is not the foremost source of noise; CNOT gate errors dominate the noise profile. In this case, choosing the transpiled circuit with the least depth was the crucial step for reducing the errors in the result.

| github_jupyter |
# Activity 7 - Dummy Variables
For this activity, we will use the Austin, Texas weather dataset from the previous activity and introduce dummy variables to enhance our linear regression model for this dataset.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# Loading the data from activity 1
df = pd.read_csv('activity2_measurements.csv')
df_first_year = pd.read_csv('activity_first_year.csv')
rolling = pd.read_csv('activity2_rolling.csv')
window = 20
# Trendline values
trend_x = np.array([
1,
182.5,
365
])
```
Plot the raw data (df) and moving average (rolling)
```
fig = plt.figure(figsize=(10, 7))
ax = fig.add_axes([1, 1, 1, 1]);
# Temp measurements
ax.scatter(df_first_year.DayOfYear, df_first_year.TempAvgF, label='Raw Data');
ax.plot(df_first_year.DayOfYear, rolling, c='r', label=f'{window} day moving average');
ax.set_title('Daily Mean Temperature Measurements')
ax.set_xlabel('Day')
ax.set_ylabel('Temperature (degF)')
ax.set_xticks(range(1, 366, 10))
ax.legend();
```
Looking at the above plot, there seems to be an inflection point around day 250. Create a dummy variable to introduce this feature into the linear model.
```
df_first_year.loc[:,'inflection'] = [1 * int(i < 250) for i in df_first_year.DayOfYear]
```
Check the first and last samples to confirm the dummy variable is correct
```
df_first_year.head()
df_first_year.tail()
```
Use a least squares linear regression model and fit the model to the DayOfYear values and the dummy variable to predict TempAvgF
```
# Note the features must be provided as an N x 2 array
model = LinearRegression()
model.fit(df_first_year[['DayOfYear', 'inflection']], df_first_year.TempAvgF)
```
Compute the $r^2$ score
```
# Note the features must be provided as an N x 2 array
r2 = model.score(df_first_year[['DayOfYear', 'inflection']], df_first_year.TempAvgF)
print(f'r2 score = {r2:0.4f}')
```
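The effect of the dummy variable can be sketched on synthetic data (assumed values, not the Austin dataset): a step regressor lets a single linear model absorb a level shift at day 250.

```python
import numpy as np

rng = np.random.default_rng(0)
day = np.arange(1, 366).astype(float)
# Synthetic temperatures: mild upward trend with a -15 degF drop at day 250
temp = 60 + 0.05 * day - 15 * (day >= 250) + rng.normal(0, 1, day.size)

def r2(features, y):
    X = np.column_stack([np.ones_like(y)] + features)   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

dummy = (day < 250).astype(float)        # same convention as the activity
print(r2([day], temp), r2([day, dummy], temp))
```

The model with the dummy variable fits far better, because the line alone cannot represent the step.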
Using the DayOfYear values create a set of predictions using the model to construct a trendline
```
trend_y = model.predict(df_first_year[['DayOfYear', 'inflection']].values)
trend_y
```
Plot the trendline against the data and moving average
```
fig = plt.figure(figsize=(10, 7))
ax = fig.add_axes([1, 1, 1, 1]);
# Temp measurements
ax.scatter(df_first_year.DayOfYear, df_first_year.TempAvgF, label='Raw Data');
ax.plot(df_first_year.DayOfYear, rolling, c='r', label=f'{window} day moving average');
ax.plot(df_first_year.DayOfYear, trend_y, c='k', label='Model: Predicted trendline')
ax.set_title('Daily Mean Temperature Measurements')
ax.set_xlabel('Day')
ax.set_ylabel('Temperature (degF)')
ax.set_xticks(range(1, 366, 10))
ax.legend();
```
Do the predictions provided by the trendline look reasonable?
| github_jupyter |
# Sentiment Analytics - Exploratory Data Analysis
# 1/ Import Libraries
```
import itertools
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from collections import Counter
```
# 2/ Load Data
```
from google.colab import drive
drive.mount('/content/drive')
```
# 3/ Get the Data
```
df_sales_main = pd.read_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/Sales Analysis/all_clean_data.csv')
df_sales_main.head()
df_sales_main.shape[0]
```
Add the name 'ID' to the index column and print out the table again.
```
df_sales_main.index.name = 'ID'
df_sales_main.index.name
df_sales_main.head()
```
From this stage, I will split the tasks into 2 steps: Conversation & Conversation_Information.
# 4/ Check duplicates
Reference:
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.duplicated.html
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html
https://datatofish.com/count-duplicates-pandas/
Now I will create pivot tables based on the order of Fanpage, PSID, Message (this order can be changed at will). I will then check for duplicates and drop them if any exist.
Printing the duplicates would show internal conversations between the customers and the sales team, so I print only aggregate counts, without exposing any sensitive data or information.
```
dups = df_sales_main.pivot_table(columns=['Fanpage', 'PSID', 'Message'], aggfunc='size')
print(dups)
df_sales_main.duplicated(subset=['Fanpage', 'PSID', 'Message'])
df_sales_main.duplicated(subset=['Fanpage', 'PSID', 'Message'], keep='last')
df_sales_main.drop_duplicates(subset=['Fanpage', 'PSID', 'Message'], keep='last')
```
Currently, there are no duplicates in this data frame, so dropping duplicates is unnecessary; I just want to show a way to drop them.
# 5/ Work with Conversation Data Frame
This data frame will contain only the variables listed below.
## 5.1/ Create a new table
```
# Create headers list
headers_Conversation = ['ID', 'Unnamed: 0', 'Fanpage', 'PSID', 'FanpageName', 'CusName', 'Message']
print("Headers of the Data Frame 'Conversation' \n", headers_Conversation)
df_Conversation = df_sales_main.filter(headers_Conversation, axis=1)
df_Conversation.head(10)
df_Conversation = df_Conversation.rename(columns={"Unnamed: 0": "Conversation_ID"})
df_Conversation.head()
```
We need to create 2 new columns in df_Conversation, named 'Sender' and 'Order': Sender is 1 for 'Customer' and 0 for 'Sales', and 'Order' holds the order index of each chat line within a conversation.
## 5.3 Split the message of each conversation into multiples rows.
```
%%time
temp_Conversation = df_Conversation['Message'].str.split('\n').apply(pd.Series, 1).stack()
temp_Conversation.head()
temp_Conversation.tail()
temp_Conversation.value_counts()
```
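A toy example (synthetic messages, assuming the same `\n`-joined format) shows what the split/stack chain produces — a MultiIndex whose inner level numbers the chat lines within each conversation. On newer pandas, `Series.explode` is a simpler alternative when the inner counter is not needed:

```python
import pandas as pd

# Hypothetical two-conversation frame in the same format as df_Conversation
toy = pd.DataFrame({'Message': ['[KH]hi\n[SALES]hello', '[KH]price?\n[SALES]$10']})
stacked = toy['Message'].str.split('\n').apply(pd.Series).stack()
print(stacked)
# The inner index level is the per-conversation line number:
print(stacked.index.droplevel(0).tolist())
```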
## 5.4 Create a list of indices based on different conversations
```
order_ids = temp_Conversation.index.droplevel(0)
order_ids
print(order_ids)
order_ids.value_counts()
```
## 5.5 Assign those indices to corresponding chat lines
```
temp_Conversation.index = temp_Conversation.index.droplevel(-1)
temp_Conversation.head()
```
## 5.6 Rename the temp_Conversation
```
temp_Conversation.name = 'Message'
temp_Conversation.head()
```
## 5.7 Count number of chat lines of each conversation
```
temp_Conversation.str.len()
```
## 5.8/ Join the temp_Conversation with the Conversation data frame correspondingly
```
df_Conversation.drop(columns=['Message'], inplace=True)
df_Conversation = df_Conversation.join(temp_Conversation)
df_Conversation['Message_ID'] = order_ids
df_Conversation.head()
df_Conversation.head(20)
df_Conversation.tail(20)
df_Conversation.shape[0]
```
## Remove the old Conversation_ID column
```
df_Conversation.drop(columns=['Conversation_ID'], inplace=True)
df_Conversation.head()
```
## Copy the current ID to a new Conversation_ID column
```
df_Conversation['Conversation_ID'] = df_Conversation.index
df_Conversation.head()
```
## Set the Message_ID as the main ID column for df_Conversation
```
df_Conversation.set_index('Message_ID', inplace=True)
df_Conversation.head()
```
## 5.9/ Check null values
```
df_Conversation.isnull().values.any()
df_Conversation.isnull().sum()
df_Conversation.shape[0]
```
There are no null values in the Conversation data frame, so I can move on.
## 5.10/ Delete [KH] & [SALES] from chat lines & Convert Sender categories into 0: Sales, 1: Customer
Our approach is to copy all values of the Message column into an array, then loop over it and check whether each chat line starts with [KH] or [SALES]; if so, the tag is replaced with "". The reason is that iterating over each row of a dataframe column takes much more time and memory, so a plain array is easier to use in this case.
```
messages = df_Conversation['Message'].values
temp_array = [0] * len(messages)
for i in range(len(messages)):
    if messages[i].startswith('[KH]'):
        messages[i] = messages[i].replace('[KH]', "")
        temp_array[i] = 1
    elif messages[i].startswith('[SALES]'):
        messages[i] = messages[i].replace('[SALES]', "")
df_Conversation['Sender'] = temp_array
df_Conversation['Message'] = messages
df_Conversation.head(20)
df_Conversation.tail(20)
```
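For comparison (a sketch, not part of the original workflow), the same tagging and tag-stripping can be done without an explicit loop using pandas' vectorized string methods:

```python
import pandas as pd

# Hypothetical chat lines in the same [KH]/[SALES] format
msgs = pd.Series(['[KH]hi', '[SALES]hello', '[KH]price?'])
sender = msgs.str.startswith('[KH]').astype(int)            # 1 = Customer, 0 = Sales
clean = msgs.str.replace(r'^\[(KH|SALES)\]', '', regex=True)  # strip the leading tag
print(sender.tolist(), clean.tolist())
```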
## 5.11/ Filter the df_Conversation by Customer only (Sender = 1) and having only 3 columns: ID, Conversation_ID, Message, Sender = 1
```
selected_df_Conversation = df_Conversation[["Message", "Sender"]]
selected_df_Conversation.head(10)
selected_df_Conversation.tail(10)
customer_filtered_df_Conversation = selected_df_Conversation[selected_df_Conversation['Sender'] == 1]
customer_filtered_df_Conversation.shape
customer_filtered_df_Conversation.head(50)
customer_filtered_df_Conversation.tail(50)
```
At this stage, I manually check the top 50 and last 50 rows of this new dataframe and compare them with rows from the original Conversation dataframe to make sure the Conversation IDs are correct for each chat.
# 6/ Work with Conversation Information data frame
This other new data frame will contain the variables as below.
## 6.1/ Create a new table
```
headers_Conversation_Information = ['ID', 'Unnamed: 0', 'CustomerCount', 'SalesCount', 'StartTime', 'EndTime']
print("Headers of the Data Frame 'Conversation_Information' \n", headers_Conversation_Information)
df_Conversation_Information = df_sales_main.filter(headers_Conversation_Information, axis=1)
df_Conversation_Information.head(5)
df_Conversation_Information = df_Conversation_Information.rename(columns={"Unnamed: 0": "Conversation_ID"})
df_Conversation_Information.head()
df_Conversation_Information.shape[0]
boolean = df_Conversation_Information['Conversation_ID'].duplicated().any()
boolean
df_Conversation_Information.drop_duplicates(subset=['Conversation_ID'])
boolean = df_Conversation_Information['Conversation_ID'].duplicated().any()
boolean
duplicate = df_Conversation_Information[df_Conversation_Information.duplicated()]
print(duplicate)
```
CustomerCount tends to be higher, since customers write more messages asking questions about their requirements.
## 6.2/ Check null values
```
df_Conversation_Information.isnull().values.any()
df_Conversation_Information.isnull().sum()
df_Conversation_Information.shape[0]
```
There are no null values in the Conversation_Information data frame, so I can save the data frames as CSV files now.
# 7/ Work with Customer data frame
## 7.1/ Create a new table
```
# Create headers list
headers_Customer = ['PSID', 'CusName']
print("Headers of the Data Frame 'Customer' \n", headers_Customer)
df_Customer = df_sales_main.filter(headers_Customer, axis=1)
df_Customer.head(5)
```
# 8/ Work with Fan Page data frame
## 8.1/ Create a new table
```
# Create headers list
headers_Fan_Page = ['Fanpage', 'FanpageName']
print("Headers of the Data Frame 'Fan Page' \n", headers_Fan_Page)
df_Fan_Page = df_sales_main.filter(headers_Fan_Page, axis=1)
df_Fan_Page.head(5)
```
# 9/ Save data frames into files
```
df_Conversation.to_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/SQL Alchemy/Conversation.csv', encoding='utf-8')
customer_filtered_df_Conversation.to_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/SQL Alchemy/customer_filtered_Conversation.csv', encoding='utf-8')
df_Conversation_Information.to_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/SQL Alchemy/Conversation_Information.csv', encoding='utf-8')
df_Customer.to_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/SQL Alchemy/Customer.csv', encoding='utf-8')
df_Fan_Page.to_csv('/content/drive/My Drive/Colab Notebooks/CoTAI/Data Science Internship CoTAI 2021/SQL Alchemy/Fan_Page.csv', encoding='utf-8')
```
| github_jupyter |
#### TELEPERFORMANCE (TP) - Technical Test - Data Scientist (By: Andrés Felipe Escallón Portilla, Feb 22-24, 2021)
# CONSULTING CASE
###### **Requirement:**
One of the company's collections operations wants to generate differentiated strategies for the customer-portfolio recovery process according to the risk of non-payment of the first invoice.
The strategy is divided into 3 intervention groups:
1. High risk: call on the 5th day of delinquency.
2. Medium risk: send a text message on the 5th day of delinquency.
3. Low risk: send a text message on the 15th day of delinquency.
The costs per contact type are as follows:
- Call from a collections advisor: 1700 pesos
- Text message: 40 pesos
#### **Instructions**
1. Present an initial descriptive and/or diagnostic analysis of the input data for the model.
2. Build a statistical model that computes the probability that a client does not pay the first invoice. Explain why you chose the variables you work with and whether you had to modify them.
3. Define the cutoff points that determine which strategy group each client belongs to.
4. Describe the profile of clients with a high risk of non-payment.
5. What suggestions would you make to the collections team based on the analysis of the model's information?
6. Explain the model and support its statistical validity, as well as the cutoff points, the number of clients in each strategy, the risk profiles, and your suggestions and conclusions.
7. Attach the database with each client's risk probability.
All of the above points must be documented in a Python notebook or an R Markdown shared through GitHub.
# Solution of the Case:
I will answer this case in Spanish, noting that ideally it would have been done in English (some comments are already in English).
1. **Present an initial descriptive and/or diagnostic analysis of the input data for the model:**
#### Exploratory Data Analysis (EDA):
```
from google.colab import drive
drive.mount('/content/drive')
!pip install -r '/content/drive/MyDrive/Teleperformance_PruebaTecnica_CientificoDatos/requirements.txt'
# Importing all the required packages
import pandas as pd # The gold standard of Python data analysis, to create and manipulate tables of data
import numpy as np # The Python module for processing arrays which/Pandas is based on
import seaborn as sns; sns.set() # A package to make Matplotlib visualizations more aesthetic
import branca
import geopandas
import matplotlib.pyplot as plt # The gold standard of Python data visualization, but can be complex to use
from matplotlib import cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from matplotlib.patches import Patch
from matplotlib.widgets import Slider, Button, RadioButtons
import statsmodels.api as sm
import statsmodels.formula.api as sfm
from statsmodels.formula.api import ols
import scipy
from scipy import stats
from scipy import interp
from scipy.optimize import fsolve
from scipy.stats import chi2_contingency, ttest_ind, norm # A module for Python machine learning--we'll stick to T-Tests here
import sklearn
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler, RobustScaler, StandardScaler
from sklearn.tree import export_graphviz
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, auc, accuracy_score
from sklearn.model_selection import StratifiedKFold, train_test_split
from statsmodels.formula.api import ols
from IPython.display import display
from IPython.display import display_html
from IPython.display import Image, SVG
import folium # package for making maps, please make sure to use a version older than 1.0.0.
from folium.plugins import TimeSliderChoropleth
# from time_slider_choropleth import TimeSliderChoropleth
import json
import requests
from bs4 import BeautifulSoup
import os
import pydotplus
from io import StringIO
from sympy import var, plot_implicit, Eq
from graphviz import Source
from wordcloud import WordCloud # A package that will allow us to make a wordcloud
# when executing, the plot will be done
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (8,5)
import warnings
warnings.filterwarnings('ignore')
# ignore log(0) and divide by 0 warning
np.seterr(divide='ignore');
# Reading the database files (previously saved as csv) and assigning them to dataframes:
df_var=pd.read_csv('/content/drive/MyDrive/Teleperformance_PruebaTecnica_CientificoDatos/data/base_de_datos_prueba_tecnica_csv_vars.csv')
df=pd.read_csv('/content/drive/MyDrive/Teleperformance_PruebaTecnica_CientificoDatos/data/base_de_datos_prueba_tecnica_csv_db.csv')
# Visualizing the dataframes:
# For a database, the variables and their descriptions are fundamental
df_var
# Visualizing the full info of the DESCRIPCIÓN column:
descripcion=[]
for row in range(len(df_var)):
    descripcion.append(df_var['DESCRIPCIÓN'][row])
descripcion
# This is the database we will actually work with:
pd.options.display.max_columns = None # to display all columns (variables) of interest
df
# General overview of the data:
df.shape # (num_rows, num_cols)
list(df.columns)
```
There are indeed 22 columns (variables), shown above with their descriptions; recall that the **target variable** is `Incumplimiento_pago` (where 1 indicates the client did not pay the first invoice)!
```
# Data types and amount of no-null values in dataset
df.info()
```
Records of numeric variables such as `antiguedad_meses` and `no_serv_tecnicos` that have no information must be replaced with NaN
**(the most prudent option, which will not alter subsequent summary-statistics calculations and will not skew the distribution of the non-missing data, is to replace all missing values with a standard NaN).**
However, records without information in the categorical variable `fallo` can be replaced with SE ("Sin Especificar", i.e. unspecified), similar to what the categorical variable `estrato` already uses.
```
# Another way to see null values per column
df.isnull().sum()
# Information about numerical columns (descriptive statistics)
df.describe()
```
From the table above we conclude that **approximately 14% (exactly 14.3165%)** of the clients (**about 2855 of 19942 clients in total**) are in default (have not paid the first invoice).
```
0.143165*19942
# Information about categorical columns
df.describe(include = ['O'])
df['fallo'].value_counts() #SE = Sin Especificar (unspecified)
```
The most commonly reported fault is "No navega" (no internet browsing).
```
#For example, let's count the faults associated with each technology type per department:
df.groupby(["TECNOL","fallo","DEPARTAMENTO"])["DEPARTAMENTO"].count().reset_index(name="count").sort_values(by="count", ascending = False).reset_index(drop=True)
#Let's generate a contingency table between the categorical variables DEPARTAMENTO and fallo (it gives a view of the geographic distribution of the faults):
pd.crosstab(df["DEPARTAMENTO"],df["fallo"])
#numerical variables
num_vars=list(df.describe().columns)
num_vars
#cliente_id must be removed from num_vars since it adds nothing to the analysis; it is only a number labeling each client:
num_vars=num_vars.copy()[:-1]
print(num_vars)
#categorical variables
cat_vars=list(df.describe(include = ['O']).columns)
cat_vars
# Unique values per column excluding null
df.nunique()
# Unique values per column including null
df.nunique(dropna=False)
```
### Generating some statistical plots of independent variables (X) against the target variable of interest (Y):
```
#plotting the histogram of the numerical variables:
sns.histplot(df[num_vars[10]])
plt.title('Histogram of the target variable (Incumplimiento_pago: binary, 0 = no, 1 = yes)')
plt.show()
```
The histogram above confirms that **14.3165% of the clients (about 2855 of 19942 clients in total)** are in default (have not paid the first invoice).
Now that we have looked at the variables of interest in isolation, it makes sense to look at them in relation to `Incumplimiento_pago`:
```
#Inspecting Incumplimiento_pago against another variable of interest (e.g. antiguedad_meses):
plt.figure(figsize=(50, 10))
sns.boxplot(x = num_vars[0], y=num_vars[10], data = df)
title_string = "Boxplot of " + num_vars[10] + " vs. " + num_vars[0]
plt.ylabel(num_vars[10])
plt.title(title_string)
# We can look out for trends using a line plot
plt.figure(figsize=(15, 8))
ax = sns.lineplot(
x=num_vars[0],
y=num_vars[10],
data=df,
)
plt.figure(figsize=(15, 8))
ax = sns.lineplot(
x=cat_vars[6],
y=num_vars[10],
data=df,
)
plt.figure(figsize=(15, 8))
ax = sns.lineplot(
x=cat_vars[8],
y=num_vars[10],
data=df,
)
```
## Pre-processing our data
Now that we have an idea of what our dataset consists of, let's transform it so that we can display phase. The types of activities we may engage in during **pre-processing** include:
1. **Deleting columns**
2. **Enriching (or Transforming)** a data set, adding newly calculated columns in the indices
3. **Filtering** a subset of the rows or columns of a dataset according to some criteria
4. **Indexing** a dataset
5. **Aggregating** data
6. **Sorting** the rows of a data set according to some criteria
7. **Merging** the data
8. **Pivoting** so that data that was originally laid out vertically is laid out horizontally (increasing the number of columns) or vice versa (increasing the number of rows)
among others.
## What is data transformation?
Many times in real life, you will be working with imperfect datasets with quality issues. **Data transformation** is the process of modifying a dataset in appropriate ways in order to eliminate these quality issues. Some of these activities include:
- Splitting columns
- Converting dates to `datetime` objects, which are far more easily manipulable using `pandas` libraries
- Encoding categorical variables
- Dealing with and replacing null or missing values
- Creating unique identifiers
The `pandas` library has many functions which can help with this task. In addition, you will also be using some other standard libraries like `String`, `base64`, and `sklearn`.
```
#Let's create a copy of our dataframe before we start changing it so we can refer back to the original values if necessary.
df_orig = df.copy()
df_orig.head(1)
```
As noted earlier, missing values in the numeric variables `antiguedad_meses` and `no_serv_tecnicos` are replaced with a standard NaN, while missing values in the categorical variable `fallo` are replaced with SE ("Sin Especificar"), following the convention already used by `estrato`:
```
df['antiguedad_meses'].fillna(np.nan, inplace=True)
df['no_serv_tecnicos'].fillna(np.nan, inplace=True)
df['fallo'].fillna('SE', inplace=True) #SE=Sin Especificar (similar a cuando no hay info de Estrato)
df.isnull().sum()
```
Once the treatment of missing and/or null values is finished, we proceed with the next steps of the EDA:
Let's explore the **correlations** of the numeric variables with the target variable (num_vars[10] = `Incumplimiento_pago`):
```
# Create a correlation matrix
corr = df[num_vars].corr()
pos_cor = corr[num_vars[10]] >0
neg_cor = corr[num_vars[10]] <0
corr[num_vars[10]][pos_cor].sort_values(ascending = False)
#This prints out the coefficients that are positively correlated with Incumplimiento_pago:
corr[num_vars[10]][neg_cor].sort_values(ascending = False)
#This prints out the coefficients that are negatively correlated with Incumplimiento_pago:
```
From the results above we conclude that:
- There is **more** non-payment when clients call for other reasons and there are fraud complaints
- There is **less** non-payment as tenure in months (`antiguedad_meses`) increases
```
# Splitting the productos column into three so that the services can be handled separately:
df[['productoTO','productoTV','productoBA']] = df.productos.str.split('+',expand=True,)
df[['productoTO','productoTV','productoBA']]
df['productoTO'][19939]
df.loc[19939,'productoTO']='valor'
df['productoTO'][19939]
```
With the above, we proceed with something similar to **one-hot encoding** of the three categorical variables (`productoTO`, `productoTV`, `productoBA`) extracted from the `productos` column. It only remains to place the appropriate value in the corresponding column (TO, TV, BA) and, for each organized column, assign the value 1 when the product is present and 0 when it is not:
```
#Using some auxiliary columns first:
df['O']=''
df['V']=''
df['A']=''
#Organizing the products as TO, TV, BA (one-hot-encoding style)
cols=['productoTO','productoTV','productoBA']
for row in range(0, len(df)):
    for col in cols:
        if df[col][row]=='TO':
            df.loc[row,'O']=1
        elif df[col][row]=='TV':
            df.loc[row,'V']=1
        elif df[col][row]=='BA':
            df.loc[row,'A']=1
df[['O','V','A']]
df[['productoTO','productoTV','productoBA']] = df[['O','V','A']]
df[['productoTO','productoTV','productoBA']]
df['productoTO'][0]
#We already have the 1 values; now put 0 wherever the entry is empty (''):
cols=['productoTO','productoTV','productoBA']
for row in range(0, len(df)):
    for col in cols:
        if df[col][row]=='':
            df.loc[row,col]=0
df[['productoTO','productoTV','productoBA']] #this is the final one-hot-encoded version
df.head(1)
df.columns
new_cols=['REGIONAL', 'DEPARTAMENTO', 'TECNOL', 'GERENCIA',
'CANAL_HOMOLOGADO_MILLICON', 'tipo_fuerza_venta', 'estrato',
'antiguedad_meses', 'productos', 'portafolio', 'no_serv_tecnicos',
'fallo', 'asesoria_factura', 'pedidos_peticiones', 'reagendamiento',
'asesoria_servicios', 'retencion', 'Otras', 'quejas_fraude', 'traslado',
'Incumplimiento_pago', 'cliente_id', 'productoTO', 'productoTV',
'productoBA']
df=df[new_cols].copy() #finally we keep only the columns needed for the subsequent analysis, leaving out the auxiliary (duplicated) ones
df.head(1)
print(num_vars)
print(cat_vars)
#redefining the new categorical vars (treating productos not as a whole, but as the three one-hot encoded columns)
new_cat_vars=['REGIONAL', 'DEPARTAMENTO', 'TECNOL', 'GERENCIA', 'CANAL_HOMOLOGADO_MILLICON', 'tipo_fuerza_venta', 'estrato', 'portafolio', 'fallo', 'productoTO', 'productoTV', 'productoBA']
print(new_cat_vars)
```
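As a side note, the loop-based encoding above can usually be collapsed into a single vectorized call. A minimal sketch on a hypothetical `productos`-style column (the sample values and the `-` separator are assumptions about the data format, not taken from the real database):

```python
import pandas as pd

# Hypothetical sample of a delimited products column; the separator is an assumption.
sample = pd.DataFrame({"productos": ["TO-TV", "BA", "TO-TV-BA"]})

# str.get_dummies splits each string on the separator and builds the 0/1
# indicator columns in one vectorized call, replacing the row-by-row loops above.
dummies = sample["productos"].str.get_dummies(sep="-")
print(dummies)
```

The resulting columns (here `BA`, `TO`, `TV`, in alphabetical order) can then be renamed or reordered as needed.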
# Note:
In my particular case, given the limited time and since the data processing is being done with the tools provided by Python (pandas, numpy, etc.) in this Jupyter notebook on Google Colab, it is not strictly necessary for the database to end up with, for example, all variables and values lowercased, without accents or special characters, etc., since SQL will not be used (nor does this case involve text processing; time permitting, though I doubt it, I will work on NLP with Teleperformance tweets). However, it is desirable and important to leave the database in proper shape for putting the ML models into production, so that they consume data from the database and the results are finally displayed in a specific front end such as Dash or Power BI, among others.
2. **Build a statistical model that computes the probability that a customer will not pay their first bill. Explain why you chose the variables you will work with and whether you had to modify them.**
Based on the results of the previous analysis (point 1), I will build the statistical model using **logistic regression**.
For reference, these two works contain interesting and pertinent information for this case:
[1] https://bibdigital.epn.edu.ec/bitstream/15000/9194/3/CD-6105.pdf
[2] https://repository.eafit.edu.co/bitstream/handle/10784/12870/Adriana_SalamancaArias_JohnAlejandro_BenitezUrrea_2018.pdf?sequence=2
According to [2], *"logistic models are appropriate for measuring the probability of default faced by companies in the real sector, given their versatility in determining multiple ranges of the dependent variable in an ordered way, because they work with a probability distribution that allows interesting results about the probability of default to be obtained with little information."*
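Concretely, a fitted logistic regression turns a linear score over the covariates into a probability via the sigmoid function. A minimal sketch (the weights below are made-up values for illustration, not fitted coefficients):

```python
import math

def logit_probability(coefs, intercept, x):
    # P(y=1 | x) = sigmoid(intercept + coefs . x)
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative only: made-up weights for antiguedad_meses and estrato.
p = logit_probability([-0.03, 0.2], 0.5, [24.0, 3.0])
print(round(p, 3))  # → 0.594
```

The model fitted below estimates exactly these coefficients from the data.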
```
df.index
100*(13033/19942)
# To simplify the analysis, the no_serv_tecnicos column will not be considered because of how little
# information it carries (only 35% populated; the remaining 65% are NULL). NaNs in 'antiguedad_meses'
# are replaced with the column mean:
df['antiguedad_meses'].fillna(df['antiguedad_meses'].mean(), inplace=True)
df['no_serv_tecnicos'].fillna(df['no_serv_tecnicos'].mean(), inplace=True)  # filled the same way for consistency, but this column will not be used
df.info()
df['estrato'].head(20)
# Similarly, in 'estrato' replace 'SE' with 0 (to be replaced later by the integer part of the column mean):
for i in df.index:
    if df['estrato'][i] == 'SE':
        df.loc[i, 'estrato'] = 0
df['estrato'].head(20)
df['estrato'] = pd.to_numeric(df['estrato'])
df.info()
int(df['estrato'].mean())
# Similarly, in 'estrato' replace the 0s with the integer part of the column mean
# (computed once up front, so the mean does not drift as zeros are replaced):
mean_estrato = int(df['estrato'].mean())
for i in df.index:
    if df['estrato'][i] == 0:
        df.loc[i, 'estrato'] = mean_estrato
df['estrato'].head(20)
# Recall the counts of the target variable:
sns.countplot(x='Incumplimiento_pago', data = df)
plt.title("Incumplimiento_pago")
# Relationship: Incumplimiento_pago (Y) vs antiguedad_meses (X)
sns.jointplot(x="antiguedad_meses", y="Incumplimiento_pago", data=df, kind="hex")
print(num_vars)
# Y = 'Incumplimiento_pago' (DEPENDENT variable)
# Redefining the truly numerical INDEPENDENT variables (the binary ones count as categorical) that will feed the model:
num_vars_def = ['antiguedad_meses', 'estrato']  # ONLY 2 REMAIN
print(new_cat_vars)
```
NOTE:
For simplicity, the following categorical variables are not considered relevant for the model:
'REGIONAL', 'DEPARTAMENTO', 'TECNOL', 'GERENCIA', 'CANAL_HOMOLOGADO_MILLICON', 'tipo_fuerza_venta', 'portafolio' (correlated with 'productos', which is also dropped since it was split into the three binary categorical variables that will be used in its place)
```
# Consequently, redefining the truly categorical variables that will feed the model:
cat_vars_def = ['productoTO', 'productoTV', 'productoBA', 'asesoria_factura', 'pedidos_peticiones', 'reagendamiento', 'asesoria_servicios', 'retencion', 'Otras', 'quejas_fraude', 'traslado']  # 11 REMAIN
```
Let's look at the correlation matrix between the final numerical variables (covariates) and the target variable:
```
num_vars_def=['antiguedad_meses', 'estrato']
cols=num_vars_def.copy()
cols.append('Incumplimiento_pago')
print(cols)
#Visualize the correlation matrix across all numerical features by using the sns.heatmap() command:
#compute correlation matrix
Data=df[cols].copy()
df_correlations = Data.corr()
#mask the upper half for visualization purposes
mask = np.zeros_like(df_correlations, dtype=bool)  # note: np.bool is deprecated; use the builtin bool
mask[np.triu_indices_from(mask)] = True
# Draw the heatmap with the mask and correct aspect ratio
plt.figure(figsize= (10,10))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(df_correlations,mask=mask, vmax=1, vmin=-1, cmap=cmap,
center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5});
```
Building the predictive model:
...starting with a standard logistic regression model...
Using the `LogisticRegression()` function from `scikit-learn`, let me write a function named `fit_logistic_regression(X,y)` that fits a logistic regression on the array of covariates `X` and associated response variable `y`.
```
from sklearn.linear_model import LogisticRegression
def fit_logistic_regression(X,y):
"""
fit a logistic regression with feature matrix X and binary output y
"""
clf = LogisticRegression(solver='lbfgs', tol=10**-4,
fit_intercept=True,
multi_class='multinomial').fit(X,y)
return clf
```
Let me create a basic [logistic regression model](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc) for predicting `Incumplimiento_pago` with only one feature: `antiguedad_meses`. I will call this model `model1`, using a 70/30 train-test split of the data.
```
# we will use a 70%/30% split for training/validation
Data=df.copy()
n_total = len(Data)
n_train = int(0.7*n_total)
X, y = Data[["antiguedad_meses"]], Data.Incumplimiento_pago
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]
model1 = fit_logistic_regression(X_train, y_train) # fit a logistic regression
y_test_pred = model1.predict_proba(X_test)[:,1] # make probabilistic predictions on test set
```
Plotting the [ROC curve](https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5) of `model1` and finding the area under the curve:
```
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
fpr, tpr, _ = roc_curve(y_test, y_test_pred) #compute FPR/TPR
auc_baseline = auc(fpr, tpr) # compute AUC
plt.plot(fpr, tpr, "b-", label="AUC(baseline)={:2.2f}".format(auc_baseline))
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend(fontsize=15)
plt.plot([0,1], [0,1], "r--")
plt.title("ROC curve -- Baseline Model");
```
Of course this should not be the final model. This is because I have not explored the contribution from other variables, which in addition to containing valuable information could also be confounding the perceived effect of `antiguedad_meses` on the response variable `Incumplimiento_pago`. This under-exploitation of information is called [**underfitting**](https://towardsdatascience.com/what-are-overfitting-and-underfitting-in-machine-learning-a96b30864690).
On the other hand:
Let's instead put all the variables available in the model, so that we are maximally leveraging our available info. This is also a bad idea. If we *blindly* use all of the variables in our model fitting, a phenomenon called [**overfitting**](https://towardsdatascience.com/what-are-overfitting-and-underfitting-in-machine-learning-a96b30864690) occurs. This is when a statistical model "fits" too closely to a particular set of data, which may well be noisy and exhibit randomness and therefore fail to predict future, different observations reliably.
In most cases, you will be working with datasets with many features that each have their own distribution. Generally, a large amount of time is spent on feature selection with many models being trained during this time. It is extremely rare that you simply plug all the features in and tune it once to get the optimal model.
There are many different techniques associated with feature selection and a comprehensive look into all of them is outside the scope of this case. For simplicity, I will demonstrate model training and testing on single-feature models and then directly move into multi-feature models to show the numerous possible cases you may encounter.
In reality, I would apply cross-validation on numerous subsets of features based on domain knowledge of the dataset to see which set of features truly optimizes the model I am trying to create.
[**Cross-validation**](https://towardsdatascience.com/why-and-how-to-cross-validate-a-model-d6424b45261f) is a set of techniques for assessing how well the results of a model will generalize to an out-of-sample dataset; i.e. in practice or production. It is chiefly used to flag overfitting.
```
skf = StratifiedKFold(n_splits=5)
for k, (train_index, test_index) in enumerate( skf.split(X, y) ):
plt.plot(train_index, [k+1 for _ in train_index], ".")
plt.ylim(0,6)
plt.ylabel("FOLD")
plt.title("CROSS VALIDATION FOLDS")
```
The following code defines a function `compute_AUC(X, y, train_index, test_index)` that computes the AUC of a model trained on `train_index` and tested on `test_index`.
```
def compute_AUC(X, y, train_index, test_index):
"""
feature/output: X, y
dataset split: train_index, test_index
"""
X_train, y_train = X.iloc[train_index], y.iloc[train_index]
X_test, y_test = X.iloc[test_index], y.iloc[test_index]
clf = fit_logistic_regression(X_train, y_train)
default_proba_test = clf.predict_proba(X_test)[:,1]
fpr, tpr, _ = roc_curve(y_test, default_proba_test)
auc_score = auc(fpr, tpr)
return auc_score, fpr, tpr
```
With the help of the `compute_AUC` function defined above, let me write a function `cross_validation_AUC(X,y,nfold)` that carries out a 10-fold cross-validation and returns a list which contains the area under the curve for each fold of the cross-validation:
```
def cross_validation_AUC(X,y, nfold=10):
"""
use a n-fold cross-validation for computing AUC estimates
"""
skf = StratifiedKFold(n_splits=nfold) #create a cross-validation splitting
auc_list = [] #this list will contain the AUC estimates associated with each fold
for k, (train_index, test_index) in enumerate( skf.split(X, y) ):
auc_score, _, _ = compute_AUC(X, y, train_index, test_index)
auc_list.append(auc_score)
return auc_list
```
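For reference, scikit-learn's `cross_val_score` produces the same kind of per-fold AUC estimates in a single call. A sketch on synthetic stand-in data (the real dataframe is not reproduced here, so `make_classification` is used as a placeholder):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in data for illustration only.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# scoring="roc_auc" makes each fold return an AUC, mirroring cross_validation_AUC above.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=StratifiedKFold(n_splits=10), scoring="roc_auc")
print(len(scores))  # one AUC estimate per fold
```

The hand-rolled version above is kept because it also returns the per-fold FPR/TPR arrays for plotting.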
I will now estimate and compare, through a cross-validation analysis, the performance of all the "simple models" that use only one numerical feature as input.
```
print(num_vars_def)
```
Let's compute cross-validation estimates of the AUC for each single-feature model:
```
model_perf = pd.DataFrame({}) #this data-frame will contain the AUC estimates
for key in num_vars_def:
X_full, y_full = Data[[key]], Data.Incumplimiento_pago
auc_list = cross_validation_AUC(X_full, y_full, nfold=10)
model_perf["SIMPLE:" + key] = auc_list
```
Let me construct a [boxplot](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51) which shows the distribution of cross-validation scores of each variable (remember, each variable has 10 total scores):
```
def plot_boxplot_ordered(df_model):
"""
display a list of boxplots, ordered by their median values
"""
df = df_model[df_model.median().sort_values().index]
sns.boxplot(x="variable", y="value", data=pd.melt(df), showfliers=False)
plt.xticks(rotation=90)
plt.figure(figsize= (10,5))
plot_boxplot_ordered(model_perf)
plt.xlabel("Predictive Model with a Single Predictive Feature")
plt.ylabel("AUC")
```
Based on what has been done so far, I can conclude from the figure above:
- The feature that has the highest predictive power is `antiguedad_meses`
- The feature that has the lowest predictive power is `estrato`
Let me now consider the model that uses *all* the numerical features (and none of the categorical ones), and carry out a 10-fold cross-validation analysis to determine whether it has better predictive performance than the best single-feature model, using the boxplot method as before:
```
X_full, y_full = Data[num_vars_def], Data.Incumplimiento_pago
auc_list = cross_validation_AUC(X_full, y_full)
model_perf["ALL_NUMERICAL"] = auc_list
model_perf
plt.figure(figsize= (10,5))
plot_boxplot_ordered(model_perf)
plt.xlabel("Predictive Models")
plt.ylabel("AUC")
```
I see that the combined model does perform better than the best single-feature model, so I will move forward with it for the rest of this case. Note, however, that best practice would be to iteratively add features to the best single-feature model until there is no significant improvement, rather than throwing all the features in at once. Given more time, I would take that more cautious approach when building models.
## Incorporating categorical variables:
```
cat_vars_def  # these variables are effectively already one-hot encoded, ready to be added to the numerical model above
Data.value_counts()
sns.countplot(x='productoTO', data = Data)
plt.xticks(rotation=90)
```
Let me investigate whether the categorical variable `productoTO` brings any predictive value when added to the current best model (recall that the one-hot-style encoding is already in place):
```
plt.figure(figsize= (20,5))
df_TO_incump = Data[["Incumplimiento_pago", "productoTO"]].groupby("productoTO").mean()
df_TO_incump = df_TO_incump.sort_values(by="Incumplimiento_pago",axis=0, ascending=False)
sns.barplot(x=df_TO_incump.index[:50],
y=df_TO_incump["Incumplimiento_pago"][:50].values,
orient="v")
plt.xticks(rotation=90)
plt.ylabel("Probability of Incumplimiento_pago")
plt.title("Incumplimiento_pago by productoTO", fontsize=20, verticalalignment='bottom')
print(num_vars_def)
new_cols=['antiguedad_meses', 'estrato','productoTO','productoTV','productoBA']
X_full_productos, y_full = Data[new_cols], Data.Incumplimiento_pago
auc_list = cross_validation_AUC(X_full_productos, y_full)
model_perf["ALL_NUMERICAL_WITH_productos"] = auc_list
model_perf
plt.figure(figsize= (10,5))
plot_boxplot_ordered(model_perf)
plt.xlabel("Predictive Models")
plt.ylabel("AUC")
```
The difference appears significant as the boxplot for the updated model is almost completely non-overlapping with that of the previous model.
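The visual non-overlap of the boxplots can be backed by a paired test on the per-fold AUCs, since both models are evaluated on the same folds. A sketch with hypothetical AUC lists (the numbers below are made up for illustration, not the actual results):

```python
from scipy import stats

# Hypothetical per-fold AUCs for the model without and with the product columns.
auc_without = [0.61, 0.60, 0.62, 0.59, 0.63, 0.60, 0.61, 0.62, 0.58, 0.60]
auc_with    = [0.69, 0.66, 0.70, 0.67, 0.69, 0.68, 0.67, 0.70, 0.66, 0.68]

# A paired (not independent) test is appropriate: each fold yields one AUC per model.
t_stat, p_value = stats.ttest_rel(auc_with, auc_without)
print(p_value < 0.05)
```

In the notebook, `model_perf["ALL_NUMERICAL"]` and `model_perf["ALL_NUMERICAL_WITH_productos"]` would play the roles of the two lists.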
# To finish:
Let me use the rest of the `cat_vars` (they are in fact numerical: binary):
```
print(cat_vars_def)
pqrs=['asesoria_factura', 'pedidos_peticiones', 'reagendamiento', 'asesoria_servicios', 'retencion', 'Otras', 'quejas_fraude', 'traslado']
new_cols_def=['antiguedad_meses', 'estrato','productoTO','productoTV','productoBA', 'asesoria_factura', 'pedidos_peticiones', 'reagendamiento', 'asesoria_servicios', 'retencion', 'Otras', 'quejas_fraude', 'traslado']
X_full_productos_pqrs, y_full = Data[new_cols_def], Data.Incumplimiento_pago
auc_list = cross_validation_AUC(X_full_productos_pqrs, y_full)
model_perf["ALL_NUMERICAL_WITH_productos_pqrs"] = auc_list
model_perf
plt.figure(figsize= (10,5))
plot_boxplot_ordered(model_perf)
plt.xlabel("Predictive Models")
plt.ylabel("AUC")
```
## Conclusions
Once I started building models, I began with very simple logistic regression approaches – these baseline models were useful for quickly evaluating the predictive power of each individual variable. Next, I employed cross-validation to build more complex models, often exploiting the interactions between the different features. Since the dataset contains a large number of covariates, cross-validation proved crucial for avoiding overfitting, choosing the correct number of features, and ultimately choosing an appropriate model that balances complexity with accuracy.
Cross-validation is a robust and flexible technique for evaluating the predictive performance of statistical models. It is especially useful in big data settings where the number of features is large compared to the number of observations. When used appropriately, cross-validation is a powerful method for choosing a model with the correct complexity and best predictive performance. Remember that logistic regression is only one of many classification algorithms and the principles behind cross-validation are not limited to this case alone.
*I want to close by noting that, for me, two days are not enough to finish this case completely and successfully. With more time I would certainly finish it, and I will keep working on the case even though the submission deadline has passed. Thank you very much for the opportunity; I would also like to keep working on the optional case, where I had planned to work on NLP (Teleperformance tweets).*
**Note:**
Everything developed here is based on the *Data Science program (DS4A - Colombia 3.0) offered by Colombia's ICT Ministry (MinTIC) in partnership with Correlation One.*
All the material can be found here:
https://drive.google.com/drive/folders/1mGiM3lWtdkszSIrv-wpJjftZ_2qWbMNt?usp=sharing
# An example of many-to-one (sequence classification)
Original experiment from [Hochreiter & Schmidhuber (1997)](https://www.bioinf.jku.at/publications/older/2604.pdf).
The goal is to classify sequences.
Elements and targets are represented locally (input vectors with only one non-zero bit).
The sequence starts with a `B`, ends with an `E` (the "trigger symbol"), and otherwise consists of randomly chosen symbols from the set `{a, b, c, d}`, except for two elements at positions `t1` and `t2` that are either `X` or `Y`.
For the `DifficultyLevel.HARD` case, the sequence length is randomly chosen between `100` and `110`, `t1` is randomly chosen between `10` and `20`, and `t2` is randomly chosen between `50` and `60`.
There are `4` sequence classes `Q`, `R`, `S`, and `U`, which depend on the temporal order of `X` and `Y`.
The rules are:
```
X, X -> Q,
X, Y -> R,
Y, X -> S,
Y, Y -> U.
```
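The four rules above amount to a lookup on the ordered pair of special symbols; a minimal sketch:

```python
# Class label as a function of the ordered pair (symbol at t1, symbol at t2).
RULES = {("X", "X"): "Q", ("X", "Y"): "R", ("Y", "X"): "S", ("Y", "Y"): "U"}

def classify(first, second):
    """Return the sequence class for the two special symbols, in temporal order."""
    return RULES[(first, second)]

print(classify("X", "Y"))  # → R
```

The difficulty of the task for a recurrent model comes entirely from remembering these two symbols across the long stretch of distractor symbols.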
## 1. Dataset Exploration
```
from res.sequential_tasks import TemporalOrderExp6aSequence as QRSU
# Create a data generator
example_generator = QRSU.get_predefined_generator(
difficulty_level=QRSU.DifficultyLevel.EASY,
batch_size=32,
)
example_batch = example_generator[1]
print(f'The return type is a {type(example_batch)} with length {len(example_batch)}.')
print(f'The first item in the tuple is the batch of sequences with shape {example_batch[0].shape}.')
print(f'The first element in the batch of sequences is:\n {example_batch[0][0, :, :]}')
print(f'The second item in the tuple is the corresponding batch of class labels with shape {example_batch[1].shape}.')
print(f'The first element in the batch of class labels is:\n {example_batch[1][0, :]}')
# Decoding the first sequence
sequence_decoded = example_generator.decode_x(example_batch[0][0, :, :])
print(f'The sequence is: {sequence_decoded}')
# Decoding the class label of the first sequence
class_label_decoded = example_generator.decode_y(example_batch[1][0])
print(f'The class label is: {class_label_decoded}')
```
## 2. Defining the Model
```
import torch
import torch.nn as nn
# Set the random seed for reproducible results
torch.manual_seed(1)
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
# This just calls the base class constructor
super().__init__()
# Neural network layers assigned as attributes of a Module subclass
# have their parameters registered for training automatically.
self.rnn = torch.nn.RNN(input_size, hidden_size, nonlinearity='relu', batch_first=True)
self.linear = torch.nn.Linear(hidden_size, output_size)
def forward(self, x):
# The RNN also returns its hidden state but we don't use it.
# While the RNN can also take a hidden state as input, the RNN
# gets passed a hidden state initialized with zeros by default.
h = self.rnn(x)[0]
x = self.linear(h)
return x
class SimpleLSTM(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super().__init__()
self.lstm = torch.nn.LSTM(input_size, hidden_size, batch_first=True)
self.linear = torch.nn.Linear(hidden_size, output_size)
def forward(self, x):
h = self.lstm(x)[0]
x = self.linear(h)
return x
def get_states_across_time(self, x):
h_c = None
h_list, c_list = list(), list()
with torch.no_grad():
for t in range(x.size(1)):
h_c = self.lstm(x[:, [t], :], h_c)[1]
h_list.append(h_c[0])
c_list.append(h_c[1])
h = torch.cat(h_list)
c = torch.cat(c_list)
return h, c
```
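A quick shape check helps confirm the batch-first tensor layout these models expect, assuming PyTorch is installed; the model is re-declared minimally here so the cell runs standalone:

```python
import torch
import torch.nn as nn

# Minimal mirror of SimpleLSTM above, re-declared so this cell is self-contained.
class TinyLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        return self.linear(self.lstm(x)[0])

# A batch of 2 sequences of length 5 with 8 input symbols each
# maps to per-step logits over 4 classes:
x = torch.randn(2, 5, 8)
out = TinyLSTM(8, 4, 4)(x)
print(out.shape)  # torch.Size([2, 5, 4])
```

The training loop below keeps only the logits of the last time step, which is why `output[:, -1, :]` appears there.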
## 3. Defining the Training Loop
```
def train(model, train_data_gen, criterion, optimizer, device):
# Set the model to training mode. This will turn on layers that would
# otherwise behave differently during evaluation, such as dropout.
model.train()
# Store the number of sequences that were classified correctly
num_correct = 0
# Iterate over every batch of sequences. Note that the length of a data generator
# is defined as the number of batches required to produce a total of roughly 1000
# sequences given a batch size.
for batch_idx in range(len(train_data_gen)):
# Request a batch of sequences and class labels, convert them into tensors
# of the correct type, and then send them to the appropriate device.
data, target = train_data_gen[batch_idx]
data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)
# Perform the forward pass of the model
output = model(data) # Step ①
# Pick only the output corresponding to last sequence element (input is pre padded)
output = output[:, -1, :]
# Compute the value of the loss for this batch. For loss functions like CrossEntropyLoss,
# the second argument is actually expected to be a tensor of class indices rather than
# one-hot encoded class labels. One approach is to take advantage of the one-hot encoding
# of the target and call argmax along its second dimension to create a tensor of shape
# (batch_size) containing the index of the class label that was hot for each sequence.
target = target.argmax(dim=1)
loss = criterion(output, target) # Step ②
# Clear the gradient buffers of the optimized parameters.
# Otherwise, gradients from the previous batch would be accumulated.
optimizer.zero_grad() # Step ③
loss.backward() # Step ④
optimizer.step() # Step ⑤
y_pred = output.argmax(dim=1)
num_correct += (y_pred == target).sum().item()
return num_correct, loss.item()
```
## 4. Defining the Testing Loop
```
def test(model, test_data_gen, criterion, device):
# Set the model to evaluation mode. This will turn off layers that would
# otherwise behave differently during training, such as dropout.
model.eval()
# Store the number of sequences that were classified correctly
num_correct = 0
# A context manager is used to disable gradient calculations during inference
# to reduce memory usage, as we typically don't need the gradients at this point.
with torch.no_grad():
for batch_idx in range(len(test_data_gen)):
data, target = test_data_gen[batch_idx]
data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)
output = model(data)
# Pick only the output corresponding to last sequence element (input is pre padded)
output = output[:, -1, :]
target = target.argmax(dim=1)
loss = criterion(output, target)
y_pred = output.argmax(dim=1)
num_correct += (y_pred == target).sum().item()
return num_correct, loss.item()
```
## 5. Putting it All Together
```
import matplotlib.pyplot as plt
from res.plot_lib import set_default, plot_state, print_colourbar
set_default()
def train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=True):
# Automatically determine the device that PyTorch should use for computation
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Move model to the device which will be used for train and test
model.to(device)
# Track the value of the loss function and model accuracy across epochs
history_train = {'loss': [], 'acc': []}
history_test = {'loss': [], 'acc': []}
for epoch in range(max_epochs):
# Run the training loop and calculate the accuracy.
# Remember that the length of a data generator is the number of batches,
# so we multiply it by the batch size to recover the total number of sequences.
num_correct, loss = train(model, train_data_gen, criterion, optimizer, device)
accuracy = float(num_correct) / (len(train_data_gen) * train_data_gen.batch_size) * 100
history_train['loss'].append(loss)
history_train['acc'].append(accuracy)
# Do the same for the testing loop
num_correct, loss = test(model, test_data_gen, criterion, device)
accuracy = float(num_correct) / (len(test_data_gen) * test_data_gen.batch_size) * 100
history_test['loss'].append(loss)
history_test['acc'].append(accuracy)
if verbose or epoch + 1 == max_epochs:
print(f'[Epoch {epoch + 1}/{max_epochs}]'
f" loss: {history_train['loss'][-1]:.4f}, acc: {history_train['acc'][-1]:2.2f}%"
f" - test_loss: {history_test['loss'][-1]:.4f}, test_acc: {history_test['acc'][-1]:2.2f}%")
# Generate diagnostic plots for the loss and accuracy
fig, axes = plt.subplots(ncols=2, figsize=(9, 4.5))
for ax, metric in zip(axes, ['loss', 'acc']):
ax.plot(history_train[metric])
ax.plot(history_test[metric])
ax.set_xlabel('epoch', fontsize=12)
ax.set_ylabel(metric, fontsize=12)
ax.legend(['Train', 'Test'], loc='best')
plt.show()
return model
```
## 5. Simple RNN: 10 Epochs
```
# Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleRNN(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 10
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs)
for parameter_group in list(model.parameters()):
print(parameter_group.size())
```
## 5. Simple LSTM: 10 Epochs
```
# Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleLSTM(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 10
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs)
for parameter_group in list(model.parameters()):
print(parameter_group.size())
```
## 6. RNN: Increasing Epochs to 100
```
# Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleRNN(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 100
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=False)
```
## LSTM: Increasing Epochs to 100
```
# Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleLSTM(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 100
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=False)
```
## 7. Model Evaluation
```
import collections
import random
def evaluate_model(model, difficulty, seed=9001, verbose=False):
# Define a dictionary that maps class indices to labels
class_idx_to_label = {0: 'Q', 1: 'R', 2: 'S', 3: 'U'}
# Create a new data generator
data_generator = QRSU.get_predefined_generator(difficulty, seed=seed)
# Track the number of times a class appears
count_classes = collections.Counter()
# Keep correctly classified and misclassified sequences, and their
# true and predicted class labels, for diagnostic information.
correct = []
incorrect = []
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.eval()
with torch.no_grad():
for batch_idx in range(len(data_generator)):
data, target = data_generator[batch_idx]  # fixed: read from the generator created above, not the global test_data_gen
data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)
data_decoded = data_generator.decode_x_batch(data.cpu().numpy())
target_decoded = data_generator.decode_y_batch(target.cpu().numpy())
output = model(data)
sequence_end = torch.tensor([len(sequence) for sequence in data_decoded]) - 1
output = output[torch.arange(data.shape[0]).long(), sequence_end, :]
target = target.argmax(dim=1)
y_pred = output.argmax(dim=1)
y_pred_decoded = [class_idx_to_label[y.item()] for y in y_pred]
count_classes.update(target_decoded)
for i, (truth, prediction) in enumerate(zip(target_decoded, y_pred_decoded)):
if truth == prediction:
correct.append((data_decoded[i], truth, prediction))
else:
incorrect.append((data_decoded[i], truth, prediction))
num_sequences = sum(count_classes.values())
accuracy = float(len(correct)) / num_sequences * 100
print(f'The accuracy of the model is measured to be {accuracy:.2f}%.\n')
# Report the accuracy by class
for label in sorted(count_classes):
num_correct = sum(1 for _, truth, _ in correct if truth == label)
print(f'{label}: {num_correct} / {count_classes[label]} correct')
# Report some random sequences for examination
print('\nHere are some example sequences:')
for i in range(10):
sequence, truth, prediction = correct[random.randrange(0, 10)]
print(f'{sequence} -> {truth} was labelled {prediction}')
# Report misclassified sequences for investigation
if incorrect and verbose:
print('\nThe following sequences were misclassified:')
for sequence, truth, prediction in incorrect:
print(f'{sequence} -> {truth} was labelled {prediction}')
else:
print('\nThere were no misclassified sequences.')
evaluate_model(model, difficulty)
```
## 8. Visualize LSTM
Setting difficulty to `MODERATE` and `hidden_size` to 10.
```
# For reproducibility
torch.manual_seed(1)
# Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.MODERATE
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 10
output_size = train_data_gen.n_classes
model = SimpleLSTM(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 100
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=False)
# Get hidden (H) and cell (C) batch state given a batch input (X)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.eval()
with torch.no_grad():
data = test_data_gen[0][0]
X = torch.from_numpy(data).float().to(device)
H_t, C_t = model.get_states_across_time(X)
print("Color range is as follows:")
print_colourbar()
plot_state(X.cpu(), C_t, b=9, decoder=test_data_gen.decode_x) # 3, 6, 9
plot_state(X.cpu(), H_t, b=9, decoder=test_data_gen.decode_x)
```
# Unsupervised Graph Learning with GraphSage
GraphScope provides the capability to process learning tasks. In this tutorial, we demonstrate how GraphScope trains a model with GraphSage.
The task is link prediction, which estimates the probability that a link exists between two nodes in a graph.
In this task, we use our implementation of the GraphSAGE algorithm to build a model that predicts protein-protein links in the [PPI](https://humgenomics.biomedcentral.com/articles/10.1186/1479-7364-3-3-291) dataset, in which every node represents a protein. The task can be treated as unsupervised link prediction on a homogeneous network.
Here, the GraphSAGE algorithm compresses both the structural and the attribute information of the graph into a low-dimensional embedding vector for each node. These embeddings can then be used to predict links between nodes.
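As a concrete illustration of that last step, link prediction from node embeddings usually reduces to scoring node pairs by embedding similarity. A minimal numpy sketch, with random vectors standing in for trained embeddings (the `link_score` helper and the pair ranking are illustrative assumptions, not GraphScope API):

```python
import numpy as np

# Random vectors standing in for the trained (num_nodes, class_num) embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 128))

def link_score(emb, i, j):
    # Cosine similarity between two node embeddings: higher = more likely link.
    a, b = emb[i], emb[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score every candidate pair and rank them; the top pairs are the predicted links.
scores = {(i, j): link_score(emb, i, j) for i in range(6) for j in range(i + 1, 6)}
ranked = sorted(scores, key=scores.get, reverse=True)
```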
This tutorial has the following steps:
- Launching the learning engine and attaching it to the loaded graph.
- Defining the training process with the built-in GraphSage model and hyperparameters.
- Training and evaluating.
```
# Install the graphscope package if you are NOT in the Playground
!pip3 install graphscope
!pip3 uninstall -y importlib_metadata # Address a module conflict issue on colab.google. Remove this line if you are not on Colab.
# Import the graphscope module.
import graphscope
graphscope.set_option(show_log=False)  # disable verbose logging
# Load ppi dataset
from graphscope.dataset import load_ppi
graph = load_ppi()
```
## Launch learning engine
Then, we need to define a feature list for training. The training features should be selected from the vertex properties; in this case, we choose all the properties prefixed with "feat-" as the training features.
With the feature list, we next launch a learning engine with the [graphlearn](https://graphscope.io/docs/reference/session.html#graphscope.Session.graphlearn) method of graphscope.
Here we specify training over the "protein" nodes and "link" edges.
With `gen_labels`, we take the protein nodes as the training set.
```
# define the features for learning
paper_features = []
for i in range(50):
paper_features.append("feat-" + str(i))
# launch a learning engine.
lg = graphscope.graphlearn(
graph,
nodes=[("protein", paper_features)],
edges=[("protein", "link", "protein")],
gen_labels=[
("train", "protein", 100, (0, 100)),
],
)
```
We use the built-in GraphSage model to define the training process. You can find more details about all the built-in learning models in [Graph Learning Model](https://graphscope.io/docs/learning_engine.html#data-model).
In this example, we use TensorFlow as the NN backend trainer.
```
import numpy as np
from graphscope.learning.examples import GraphSage
from graphscope.learning.graphlearn.python.model.tf.optimizer import get_tf_optimizer
from graphscope.learning.graphlearn.python.model.tf.trainer import LocalTFTrainer
# unsupervised GraphSage.
def train(config, graph):
def model_fn():
return GraphSage(
graph,
config["class_num"],
config["features_num"],
config["batch_size"],
categorical_attrs_desc=config["categorical_attrs_desc"],
hidden_dim=config["hidden_dim"],
in_drop_rate=config["in_drop_rate"],
neighs_num=config["neighs_num"],
hops_num=config["hops_num"],
node_type=config["node_type"],
edge_type=config["edge_type"],
full_graph_mode=config["full_graph_mode"],
unsupervised=config["unsupervised"],
)
trainer = LocalTFTrainer(
model_fn,
epoch=config["epoch"],
optimizer=get_tf_optimizer(
config["learning_algo"], config["learning_rate"], config["weight_decay"]
),
)
trainer.train()
embs = trainer.get_node_embedding()
np.save(config["emb_save_dir"], embs)
# define hyperparameters
config = {
"class_num": 128, # output dimension
"features_num": 50,
"batch_size": 512,
"categorical_attrs_desc": "",
"hidden_dim": 128,
"in_drop_rate": 0.5,
"hops_num": 2,
"neighs_num": [5, 5],
"full_graph_mode": False,
"agg_type": "gcn", # mean, sum
"learning_algo": "adam",
"learning_rate": 0.01,
"weight_decay": 0.0005,
"unsupervised": True,
"epoch": 1,
"emb_save_dir": "./id_emb",
"node_type": "protein",
"edge_type": "link",
}
```
## Run training process
Having defined the training process and hyperparameters, we can now start training with the learning engine `lg` and the configuration above.
```
train(config, lg)
```
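The trainer ends with `np.save(config["emb_save_dir"], embs)`, so the embeddings land in `./id_emb.npy` (numpy appends the extension). A hedged sketch of reading them back; the dummy array and its `(num_nodes, class_num)` shape are assumptions, since the exact layout depends on `get_node_embedding()`:

```python
import os
import tempfile

import numpy as np

# Dummy (num_nodes, class_num) array standing in for get_node_embedding()'s output.
embs_out = np.zeros((4, 128))

# np.save appends ".npy", so saving to "id_emb" creates "id_emb.npy".
path = os.path.join(tempfile.gettempdir(), "id_emb")
np.save(path, embs_out)
embs = np.load(path + ".npy")
```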
# Calculating the Hessian on simulator
```
import pennylane as qml
import numpy as np
from qiskit import IBMQ
import itertools
import matplotlib.pyplot as plt
datafile = "results_sim.pickle"
```
## Hardware-friendly circuit
```
n_wires = 5
dev = qml.device("default.qubit", wires=n_wires)
@qml.qnode(dev)
def layers(weights):
for i in range(n_wires):
qml.RX(weights[i], wires=i)
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[2, 1])
qml.CNOT(wires=[3, 1])
qml.CNOT(wires=[4, 3])
return qml.expval(qml.PauliZ(1))
seed = 2
weights = qml.init.basic_entangler_layers_uniform(n_layers=1, n_wires=5, seed=seed).flatten()
weights
grad = qml.grad(layers, argnum=0)
layers(weights)
np.round(grad(weights), 7)
```
## Calculating the Hessian
```
import pickle
s = 0.5 * np.pi
denom = 4 * np.sin(s) ** 2
shift = np.eye(len(weights))
def hess_gen_results(weights):
try:
with open(datafile, "rb") as f:
results = pickle.load(f)
except (FileNotFoundError, EOFError):
results = {}
for c in itertools.combinations(range(len(weights)), r=2):
print(c)
if not results.get(c):
weights_pp = weights + s * (shift[c[0]] + shift[c[1]])
weights_pm = weights + s * (shift[c[0]] - shift[c[1]])
weights_mp = weights - s * (shift[c[0]] - shift[c[1]])
weights_mm = weights - s * (shift[c[0]] + shift[c[1]])
f_pp = layers(weights_pp)
f_pm = layers(weights_pm)
f_mp = layers(weights_mp)
f_mm = layers(weights_mm)
results[c] = (f_pp, f_mp, f_pm, f_mm)
with open(datafile, "wb") as f:
pickle.dump(results, f)
for i in range(len(weights)):
print((i, i))
if not results.get((i, i)):
f_p = layers(weights + 0.5 * np.pi * shift[i])
f_m = layers(weights - 0.5 * np.pi * shift[i])
f = layers(weights)
results[(i, i)] = (f_p, f_m, f)
with open(datafile, "wb") as f:
pickle.dump(results, f)
def get_hess(weights):
hess = np.zeros((len(weights), len(weights)))
with open(datafile, "rb") as f:
results = pickle.load(f)
for c in itertools.combinations(range(len(weights)), r=2):
r = results[c]
hess[c] = (r[0] - r[1] - r[2] + r[3]) / denom
hess = hess + hess.T
for i in range(len(weights)):
r = results[(i, i)]
hess[i, i] = (r[0] + r[1] - 2 * r[2]) / 2
return hess
hess_gen_results(weights)
hess = get_hess(weights)
np.around(hess, 3)
```
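The shift rules used above can be sanity-checked on a classical stand-in. Assuming a function that obeys the same shift identities as a Pauli-rotation expectation value, such as $f(\theta) = \cos(\theta_0 + \theta_1)$, the four-point off-diagonal rule and the three-point diagonal rule should both reproduce the exact second derivative $-\cos(\theta_0 + \theta_1)$:

```python
import numpy as np

# Classical stand-in obeying the same shift identities as a Pauli-rotation
# expectation value: f(w) = cos(w0 + w1), whose Hessian entries all equal
# -cos(w0 + w1).
f = lambda w: np.cos(w.sum())

s = 0.5 * np.pi
denom = 4 * np.sin(s) ** 2
w = np.array([0.3, -1.1])
shift = np.eye(2)

# Off-diagonal entry from the four shifted evaluations.
f_pp = f(w + s * (shift[0] + shift[1]))
f_pm = f(w + s * (shift[0] - shift[1]))
f_mp = f(w - s * (shift[0] - shift[1]))
f_mm = f(w - s * (shift[0] + shift[1]))
h01 = (f_pp - f_mp - f_pm + f_mm) / denom

# Diagonal entry from the three-point rule.
h00 = (f(w + s * shift[0]) + f(w - s * shift[0]) - 2 * f(w)) / 2

exact = -np.cos(w.sum())  # exact second derivative, for comparison
```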
```
import glob
import os
import random
import zipfile
from itertools import chain
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from linformer import Linformer
from PIL import Image
from sklearn.model_selection import train_test_split
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets, transforms
from tqdm.notebook import tqdm
from vit_pytorch.efficient import ViT
print(f"Torch: {torch.__version__}")
# Training settings
batch_size = 64
epochs = 20
lr = 3e-5
gamma = 0.7
seed = 42
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
seed_everything(seed)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_data = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
shuffle=True, num_workers=4)
valid_data = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
valid_loader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size,
shuffle=False, num_workers=4)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# os.environ['MKL_SERVICE_FORCE_INTEL'] = 'False'
# !pip -q install vit_pytorch performer-pytorch
# efficient_transformer = Linformer(
# dim=128,
# seq_len=1+1, # 1x1 patch + 1 cls-token
# depth=12,
# heads=8,
# k=64
#)
from performer_pytorch import Performer
efficient_transformer = Performer(
dim = 128,
depth = 12,
heads = 8,
causal = True
)
model = ViT(
dim=128,
image_size=32,
patch_size=32,
num_classes=10,
transformer=efficient_transformer,
channels=3,
).to(device)
# loss function
criterion = nn.CrossEntropyLoss()
# optimizer
optimizer = optim.Adam(model.parameters(), lr=lr)
# scheduler
scheduler = StepLR(optimizer, step_size=1, gamma=gamma)
for epoch in range(epochs):
epoch_loss = 0
epoch_accuracy = 0
for data, label in tqdm(train_loader):
data = data.to(device)
label = label.to(device)
output = model(data)
loss = criterion(output, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
acc = (output.argmax(dim=1) == label).float().mean()
epoch_accuracy += acc / len(train_loader)
epoch_loss += loss / len(train_loader)
with torch.no_grad():
epoch_val_accuracy = 0
epoch_val_loss = 0
for data, label in valid_loader:
data = data.to(device)
label = label.to(device)
val_output = model(data)
val_loss = criterion(val_output, label)
acc = (val_output.argmax(dim=1) == label).float().mean()
epoch_val_accuracy += acc / len(valid_loader)
epoch_val_loss += val_loss / len(valid_loader)
print(
f"Epoch : {epoch+1} - loss : {epoch_loss:.4f} - acc: {epoch_accuracy:.4f} - val_loss : {epoch_val_loss:.4f} - val_acc: {epoch_val_accuracy:.4f}\n"
)
scheduler.step()  # advance the StepLR schedule once per epoch
```
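At inference time, a class label is just the argmax over the model's ten logits. A small sketch with dummy logits standing in for `model(data)`:

```python
import numpy as np

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Dummy logits standing in for model(data): one row per image, one column per class.
logits = np.array([[0.1, 2.5, -0.3, 0.0, 0.2, 0.1, -1.0, 0.4, 0.0, 0.3],
                   [0.0, -0.2, 0.1, 0.3, 0.0, 0.1, 3.1, 0.2, -0.5, 0.0]])

preds = [classes[i] for i in logits.argmax(axis=1)]  # -> ['car', 'frog']
```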
## ml-mipt course
### Seminar 2: extra materials
### Linear Regression Loss functions and Probability interpretation
Based on [Evgeny Sokolov](https://github.com/esokolov) open materials.
## Data
For the demonstrations, we load the [Automobile Data Set](https://archive.ics.uci.edu/ml/datasets/Automobile). It contains categorical, integer, and real-valued features.
```
import pandas as pd
X_raw = pd.read_csv("data/automobile_dataset.csv", header=None, na_values=["?"])
X_raw.head()
y = X_raw[25]
X_raw = X_raw.drop(25, axis=1)
```
## Data preprocessing
Preprocessing matters for any machine learning method, and especially for linear models. In sklearn, preprocessing is conveniently done with the [preprocessing](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing) module or with pandas methods.
```
from sklearn import preprocessing
```
### Filling in missing values
The object-feature matrix may contain missing values, which will raise an exception when such a matrix is passed to a model-fitting function, or even to a preprocessing one. If there are only a few of them, the affected objects can be dropped from the training set. Missing values can be filled in different ways:
* fill with an average (mean, median);
* predict the missing values from the non-missing ones.
The last option is complex and rarely used. To fill with constants you can use the dataframe method fillna; to fill with means, the class sklearn.impute.SimpleImputer (called preprocessing.Imputer in older sklearn versions).
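Mean imputation itself is only a column-wise replacement of NaNs with the column mean; a minimal numpy sketch of what the imputer computes:

```python
import numpy as np

# A toy object-feature matrix with missing values.
X = np.array([[1.0, 10.0],
              [np.nan, 20.0],
              [3.0, np.nan]])

col_means = np.nanmean(X, axis=0)               # per-column mean, ignoring NaNs
X_filled = np.where(np.isnan(X), col_means, X)  # replace each NaN with its column mean
```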
```
# for convenience, build a mask marking the columns with categorical features
cat_features_mask = (X_raw.dtypes == "object").values # categorical features have dtype "object"
# fill the missing values of real-valued features with the mean
X_real = X_raw[X_raw.columns[~cat_features_mask]]
from sklearn.impute import SimpleImputer  # preprocessing.Imputer in sklearn < 0.22
mis_replacer = SimpleImputer(strategy="mean")
X_no_mis_real = pd.DataFrame(data=mis_replacer.fit_transform(X_real), columns=X_real.columns)
# for categorical features, fill with empty strings
X_cat = X_raw[X_raw.columns[cat_features_mask]].fillna("")
X_no_mis = pd.concat([X_no_mis_real, X_cat], axis=1)
X_no_mis.head()
```
You should always consider whether the missing values of a feature are random. Sometimes the very absence of information can itself be an important feature, worth adding alongside the others.
__Example:__ predicting a user's age from phone data. Since older people more often use simple phones, the absence of some data (e.g., a history of visited web pages) is likely to be a good feature.
For categorical features, it is recommended to create a separate category for the missing value. Our data has no missing values among the categorical features.
### Transforming non-numeric features
Almost all machine learning methods require a real-valued matrix as input to the fitting function. Training relies on the properties of real numbers, in particular comparability and arithmetic operations. So even if the object-feature matrix formally contains numeric values, you should always ask whether they can be treated as numbers.
__Example:__ some features may be integer hashes or ids (e.g., the id of a social network user), yet you cannot add two users and obtain a third from their ids (as a linear model could).
That is an example of a categorical feature, taking values in an unordered finite set $K$. Such features are usually [one-hot encoded](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features): instead of one feature, $K$ binary features are created, one per possible value of the original feature. In sklearn this can be done with LabelEncoder + OneHotEncoder, but it is simpler to use pd.get_dummies.
Note that the new matrix will contain many zeros. To avoid storing them in memory, you can set OneHotEncoder(sparse=True) or get_dummies(sparse=True); the method then returns a [sparse matrix](http://docs.scipy.org/doc/scipy/reference/sparse.html) that stores only the nonzero values. Some operations on such a matrix may be inefficient, but most sklearn methods can work with sparse matrices.
__Question:__ what problem arises when this encoding is used for training a linear regression?
One of the columns created for each feature must be removed; to do this, pass drop_first=True to get_dummies.
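The collinearity behind the drop_first answer is easy to demonstrate: with all $K$ dummy columns kept, they sum to the all-ones vector and are therefore perfectly collinear with a regression intercept. A small numpy sketch:

```python
import numpy as np

# Full one-hot encoding of a categorical feature with K = 3 values.
values = np.array([0, 1, 2, 1, 0])
onehot = np.eye(3)[values]

# The K columns always sum to the all-ones vector, so together they are
# perfectly collinear with an intercept column.
row_sums = onehot.sum(axis=1)

# Dropping one column (what drop_first=True does) keeps the same information
# without the exact linear dependence.
onehot_dropped = onehot[:, 1:]
```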
```
X_no_mis.shape
X_dum = pd.get_dummies(X_no_mis, drop_first=True)
print(X_dum.shape)
X_dum.head()
```
Besides categorical features, string features also require transformation. They can be turned into a matrix of word counts with [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer), into a matrix of fixed-length character n-gram counts, or other features can be extracted from them (e.g., the string length).
### Feature scaling
When starting to work with data, it is always recommended to bring all features to the same scale. This matters for several reasons:
* faster model training (explained in the lecture);
* better numerical stability when working with the object-feature matrix (floating-point numbers are denser near zero than among large values);
* for linear models: interpreting the feature weights as measures of their importance.
The first popular scaling method is normalization: subtracting the mean from each feature and dividing by the standard deviation (StandardScaler in sklearn). The second: subtracting the minimum from each feature and then dividing by the difference between the maximum and the minimum (MinMaxScaler in sklearn).
```
normalizer = preprocessing.MinMaxScaler()
X_real_norm_np = normalizer.fit_transform(X_dum)
X = pd.DataFrame(data=X_real_norm_np)
X.head()
```
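Both scalings can also be written out directly in numpy, which makes the formulas explicit (a sketch on a toy vector):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# StandardScaler: subtract the mean, divide by the standard deviation.
standardized = (x - x.mean()) / x.std()

# MinMaxScaler: subtract the minimum, divide by the max-min range.
minmaxed = (x - x.min()) / (x.max() - x.min())
```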
### Adding features
Nonlinear feature transformations are especially important for linear regression: they let it model nonlinear dependencies. The most popular transformations are polynomial features (PolynomialFeatures in sklearn), logarithms, square roots, and trigonometric functions.
For example, in our dataset the dependence of the target variable on feature 6 is closer to quadratic than linear:
```
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
plt.scatter(X[6], y)
plt.scatter(X[6]**2, y)
```
For feature 13, the dependence can be linearized with the function $\frac 1 {\sqrt{\cdot}}$
```
plt.scatter(X[13], y)
plt.scatter(1/np.sqrt(X[13]), y)
```
Note that when generating polynomial features, the object-feature matrix can take up a lot of memory. Generating polynomial features is needed, for example, when you want to fit a polynomial model of the target's dependence on the data using linear regression.
## The `Dice` class
Implement a class named `Dice` that behaves as a set of $n$ dice. The class receives as an argument the number of dice to roll. In addition, the class must implement:
- A `roll()` method that simulates rolling the $n$ dice and internally stores a list with the resulting values. It returns nothing.
- If `roll()` has not been called, the internal list must be empty.
- A `getLastRoll()` method that returns the list with the values of the last roll.
- A `getRollSum()` method that returns the sum of the values of the last roll.
- If `roll()` has not been called, it must return zero.
```
# Hint: use the following function from the random module
import random
random.randint(1,6)
class Dice:
def __init__(self, n):
pass
def roll(self):
pass
def getLastRoll(self):
pass
def getRollSum(self):
pass
a = Dice(2)
a.getLastRoll()
# Returns an empty list
a.getRollSum()
# Returns 0
a.roll()
a.getLastRoll()
# Returns a list with two numbers
a.getRollSum()
# Returns the sum of the numbers in the previous list
```
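If you want to check your attempt against something, here is one possible implementation sketch; it is not the only valid answer:

```python
import random

# One possible solution sketch (not the only valid implementation).
class Dice:
    def __init__(self, n):
        self.n = n
        self.last_roll = []              # empty until roll() is called

    def roll(self):
        self.last_roll = [random.randint(1, 6) for _ in range(self.n)]

    def getLastRoll(self):
        return self.last_roll

    def getRollSum(self):
        return sum(self.last_roll)       # sum([]) == 0 before the first roll
```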
## Inheriting from the `Rectangle` class
```
class Rectangle:
def __init__(self, length, width):
self.length = length
self.width = width
def area(self):
return self.width * self.length
def perimeter(self):
return (2*self.length) + (2*self.width)
class Square(Rectangle):
'''
Redefine the constructor, without redefining the area and
perimeter methods, so that a Square object's
.area() returns the area of a square of side length and
.perimeter() returns the perimeter of the square.
'''
def __init__(self, length):
# Your code here
pass
class Cube(Square):
'''
Inherit all methods from Square (including the constructor) and
add a method that returns the volume of a Cube object.
'''
def volume(self):
# Your code here
pass
```
Note: instantiate some objects and show that their methods work correctly.
## Programming our own *blockchain*
Create a class named `Block` that stores a dictionary of transactions. The `Block` class must implement:
- A transactions attribute, represented by a dictionary `transactions`. The keys may be alphanumeric, and the dictionary values represent the debits ($-$) or credits ($+$) of a transaction.
- `previousBlock` and `nextBlock` attributes that point to objects of type `Block`, referring to the previous and next "block" in the chain.
- These must default to `None` in the constructor.
- *Getter* and *setter* methods must be implemented to access and modify these attributes.
- A `getBlockID()` method to obtain the ID of a specific block, using **class variables**.
- A `getTransactions()` method to obtain **a copy** of the transactions dictionary.
- The assignment consists of completing the `getBalanceFromHere()` method, which computes the account balances from the current block to the end of the *blockchain*.
```
class Block:
blockCount = 0
# Initialize the block with its dictionary of transactions
def __init__(self, transactions=None, previousBlock=None, nextBlock=None):
self.transactions = transactions if transactions is not None else {}  # avoid a shared mutable default
self.previousBlock = previousBlock
self.nextBlock = nextBlock
self.blockID = Block.blockCount
Block.blockCount += 1
# Show the instance number and its transactions
def __str__(self):
return "Block %d: " % self.blockID + str(self.transactions)
# Get the block's ID
def getBlockID(self):
return self.blockID
def getTransactions(self):
return self.transactions.copy()
# Getters and setters for the previous and next block
def setPreviousBlock(self, block):
self.previousBlock = block
def getPreviousBlock(self):
return self.previousBlock
def setNextBlock(self, block):
self.nextBlock = block
block.setPreviousBlock(self)
def getNextBlock(self):
return self.nextBlock
''' Complete this method to compute the
balance starting from this block. It returns a
dictionary mapping each key to its accumulated
balance from this block to the end of the chain
(including this block's own transactions).
'''
def getBalanceFromHere(self):
# Your code here
pass
# Create some blocks with transactions
B0 = Block({'A': 50, 'B': -10, 'C': 5})
B1 = Block({'A':-10, 'C':+10})
B2 = Block({'B':+10, 'C':+5})
print(B0, B1, B2, sep='\n')
# Set up the chain of blocks
B0.setNextBlock(B1)
B1.setNextBlock(B2)
print(B0.getNextBlock())
print(B1.getNextBlock())
print(B1.getPreviousBlock())
print(B2.getPreviousBlock())
print(B0.getPreviousBlock())
# Returns None
print(B2.getNextBlock())
# Returns None
B1.getBalanceFromHere()
# Must return {'A': -10, 'C': 15, 'B': 10}
B0.getBalanceFromHere()
# Must return {'A': 40, 'B': 0, 'C': 20}
```
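As a hint for `getBalanceFromHere()`, the computation is a walk along the `nextBlock` pointers while accumulating transactions. A standalone sketch with a minimal stand-in class (inside the real `Block` the same loop would start from `self`):

```python
# Minimal stand-in so the sketch runs on its own; inside the real Block class
# the same loop would start from `self` and follow self.nextBlock.
class MiniBlock:
    def __init__(self, transactions):
        self.transactions = transactions
        self.nextBlock = None

def balance_from(block):
    balance = {}
    while block is not None:                       # walk to the end of the chain
        for key, amount in block.transactions.items():
            balance[key] = balance.get(key, 0) + amount
        block = block.nextBlock
    return balance

b0 = MiniBlock({'A': 50, 'B': -10, 'C': 5})
b1 = MiniBlock({'A': -10, 'C': 10})
b2 = MiniBlock({'B': 10, 'C': 5})
b0.nextBlock, b1.nextBlock = b1, b2
```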
```
from PIL import Image
# img = Image.open('data.tiff')
# img = Image.open('reference.tiff')
# img.show()
import numpy as np
from tifffile import imread  # skimage.external.tifffile was removed in newer scikit-image
himg = imread("data.tiff")
himg.shape
print(np.max(himg))
himg = himg / 2**16
print(np.max(himg))
print(np.min(himg))
np.count_nonzero(himg < 0)
import matplotlib.pyplot as plt
hlen = len(himg[0,0,:])
print(hlen)
zero_count = []
for i in range(hlen):
zero_count.append(np.count_nonzero(himg[:,:,i] < 0))
if np.count_nonzero(himg[:,:,i] < 0) != 0:
print("band %d has %d values below zero" % (i, np.count_nonzero(himg[:,:,i] < 0)))
# print(min(zero_count))
# print(max(zero_count))
plt.plot(zero_count)
print(len(zero_count))
# img = Image.fromarray(himg[:,:,0])
img = Image.fromarray(np.uint8(himg[:,:,58]*255))
plt.imshow(np.uint8(himg[:,:,60]*255))
img.show()
img = np.asarray(img)
print(np.min(img))
print(np.max(img))
cmf = np.loadtxt("CIE1931-2deg-XYZ.csv",delimiter=",")
cmf = cmf[np.where(cmf[:,0] >= 400)]
cmf = cmf[::2]
cmf = cmf[:,1:]
cmf.shape
# keep only the bands covered by the CMF
nhimg = himg[:,:,:44]
sd_light_source = np.loadtxt("lamp_spectrum.csv", skiprows=1,dtype="float")
sd_light_source = sd_light_source[np.where(sd_light_source[:,0] >= 400)]
sd_light_source = sd_light_source[::20,1:2]
sd_light_source = sd_light_source[:44]
print(sd_light_source)
sd_light_source = np.loadtxt("lamp_spectrum.csv", skiprows=1,dtype="float")
sd_light_source = sd_light_source[np.where(sd_light_source[:,0] >= 400)]
sd_light_source = sd_light_source[::20,1:2]
sd_light_source = sd_light_source[:44]
nmf_multi_ld = cmf * sd_light_source
print(nmf_multi_ld.shape)
x = nmf_multi_ld[:,0]
y = nmf_multi_ld[:,1]
z = nmf_multi_ld[:,2]
k = 100/np.sum(y)
print(k)
X = np.sum(x*nhimg, axis=2)
Y = np.sum(y*nhimg,axis=2)
Z = np.sum(z*nhimg, axis=2)
XYZ = np.stack([X,Y,Z], 2)
XYZ = XYZ * k
XYZ.shape
# xyz_to_rgb = np.array([[0.41847, -0.15866, -0.082835],[-0.091169, 0.25243, 0.015708], [0.00092090, -0.0025498, 0.17860]] )
rgb_to_xyz = np.array([[0.4900, 0.31, 0.2],[0.17697, 0.81240, 0.010630], [0, 0.01, 0.99]])
rgb_to_xyz = 1/0.17697 * rgb_to_xyz
xyz_to_rgb = np.linalg.inv(rgb_to_xyz)
print(xyz_to_rgb.shape)
xyz_to_r = np.array([0.41847, -0.15866, -0.082835],)
r = np.dot(XYZ, xyz_to_r)
# print(r.shape)
xyz_to_g = np.array([-0.091169, 0.25243, 0.015708])
g = np.dot(XYZ, xyz_to_g)
xyz_to_b = np.array([0.00092090, -0.0025498, 0.17860])
b = np.dot(XYZ, xyz_to_b)
rgb_img = np.stack([r,g,b],axis=2)
print(rgb_img.shape)
print(np.max(rgb_img))
print(np.min(rgb_img))
img =Image.fromarray(np.uint8(rgb_img))
img.save("made_rgb_img_by_CIE1931_RGB.png")
rgb_img = np.dot(XYZ, xyz_to_rgb )
rgb_img.shape
print(rgb_img.shape)
print(np.max(rgb_img))
print(np.min(rgb_img))
# xyz_to_r = np.array([0.41847, -0.15866, -0.082835])
xyz_to_r = np.array([3.2410, -1.5374, -0.4986])
r = np.dot(XYZ, xyz_to_r)
# print(r.shape)
xyz_to_g = np.array([-0.9692, 1.8760, 0.0416])
g = np.dot(XYZ, xyz_to_g)
xyz_to_b = np.array([0.0556, -0.204, 1.0570])
b = np.dot(XYZ, xyz_to_b)
rgb_img2 = np.stack([r,g,b],axis=2)
print(rgb_img2.shape)
print(np.max(rgb_img2))
print(np.min(rgb_img2))
# img =Image.fromarray(rgb_img)
# img.save("made_rgb_img.png")
# rgb_img2*2**8
# print(np.max(rgb_img2*2**8))
# print(np.min(rgb_img2*2**8))
img =Image.fromarray(np.uint8(rgb_img2))
img.save("made_rgb_img.png")
xyz_to_rgb = np.array([[3.2406, -1.5372, -0.4986],[-0.9689, 1.8758, 0.0415],[0.0557, -0.204, 1.0570]])
# r = np.dot(XYZ, xyz_to_r)
# print(r.shape)
rgb_img3 = np.dot(XYZ, xyz_to_rgb)
print(rgb_img3.shape)
print(np.max(rgb_img3))
print(np.min(rgb_img3))
# 2.3706743 -0.9000405 -0.4706338
# -0.5138850 1.4253036 0.0885814
# 0.0052982 -0.0146949 1.0093968
from colour import SpectralDistribution
```
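The negative values printed above come from out-of-gamut colors; before casting to `uint8`, a common treatment is to clip to $[0, 1]$ and apply sRGB gamma encoding. A sketch assuming a linear-light XYZ image already scaled to roughly $[0, 1]$ (the toy image below is random data, not the TIFF above):

```python
import numpy as np

# Linear XYZ -> linear sRGB matrix (the same sRGB matrix used above).
xyz_to_rgb = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def xyz_image_to_srgb8(xyz):
    rgb = xyz @ xyz_to_rgb.T      # per-pixel matrix multiply
    rgb = np.clip(rgb, 0.0, 1.0)  # out-of-gamut pixels cause the negatives seen above
    # sRGB gamma encoding
    rgb = np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
    return np.uint8(np.round(rgb * 255))

toy = np.random.default_rng(0).random((4, 4, 3)) * 0.8  # toy XYZ image, not the TIFF data
out = xyz_image_to_srgb8(toy)
```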
<a href="https://colab.research.google.com/github/DiGyt/asrpy/blob/main/example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## ASRpy usage Example
---
This notebook provides a simple example of how to apply the Artifact Subspace Reconstruction (ASR) method to an MNE-Python raw object.
You should be able to run this notebook directly from your browser by clicking the `Open in Colab` link above.
---
First you need to install [ASRpy](https://github.com/DiGyt/asrpy) in your Python environment. If you're not working from a Jupyter Notebook, paste the below line (without the `!`) into your command line.
```
!pip install git+https://github.com/DiGyt/asrpy.git -q
```
Now, import all required libraries.
```
# import libraries
import mne
from mne.datasets import ssvep
from asrpy import ASR
```
Load a raw EEG recording and do some basic preprocessing (resampling, filtering).
```
# Load raw data
data_path = ssvep.data_path()
raw_fname = data_path + '/sub-02/ses-01/eeg/sub-02_ses-01_task-ssvep_eeg.vhdr'
raw = mne.io.read_raw_brainvision(raw_fname, preload=True, verbose=False)
# Set montage
montage = mne.channels.make_standard_montage('easycap-M1')
raw.set_montage(montage, verbose=False)
# downsample for faster computation
raw.resample(256)
# apply a highpass filter from 1 Hz upwards
raw.filter(1., None, fir_design='firwin') # replace baselining with high-pass
# Construct epochs
event_id = {'12hz': 255, '15hz': 155}
events, _ = mne.events_from_annotations(raw, verbose=False)
# epoching time frame
tmin, tmax = -0.1, 1.5
# create an uncleaned average (for comparison purposes)
noisy_avg = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=None, baseline=None, preload=True,
verbose=False).average()
```
## Use ASRpy with MNE raw objects.
ASRpy is implemented to work directly on MNE Raw data instances. As you can see below, you can apply it to an MNE Raw object without any problems. If you want to fit your ASR on plain numpy arrays, use `asrpy.asr_calibrate` and `asrpy.asr_process` instead.
```
# Apply the ASR
asr = ASR(sfreq=raw.info["sfreq"], cutoff=15)
asr.fit(raw)
raw = asr.transform(raw)
# Create an average using the cleaned data
clean_avg = mne.Epochs(raw, events, event_id, -0.1, 1.5, proj=False,
picks=None, baseline=None, preload=True,
verbose=False).average()
```
Done. Now we can plot the noisy vs. the clean data in order to compare them.
```
# set y axis limits
ylim = dict(eeg=[-10, 20])
# Plot the average epochs before ASR
noisy_avg.plot(spatial_colors=True, ylim=ylim,
titles="before ASR")
# Plot the average epochs after ASR
clean_avg.plot(spatial_colors=True, ylim=ylim,
titles="after ASR");
```
## Use ASRpy with numpy arrays.
If you are working with numpy arrays of EEG data (instead of MNE objects), you can use the `asr_calibrate` and `asr_process` functions to clean your data.
```
from asrpy import asr_calibrate, asr_process, clean_windows
# create a numpy array of EEG data from the MNE raw object
eeg_array = raw.get_data()
# extract the sampling frequency from the MNE raw object
sfreq = raw.info["sfreq"]
# (optional) make sure your asr is only fitted to clean parts of the data
pre_cleaned, _ = clean_windows(eeg_array, sfreq, max_bad_chans=0.1)
# fit the asr
M, T = asr_calibrate(pre_cleaned, sfreq, cutoff=15)
# apply it
clean_array = asr_process(eeg_array, sfreq, M, T)
```
<a href="https://colab.research.google.com/github/joanby/python-ml-course/blob/master/Update_T1_1_Data_Cleaning_Carga_de_datos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
```
# Loading data with the read_csv function
```
import pandas as pd
import os
mainpath = "/content/drive/My Drive/Curso Machine Learning con Python/datasets/"
filename = "titanic/titanic3.csv"
fullpath = os.path.join(mainpath, filename)
data = pd.read_csv(fullpath)
data.head()
```
### Examples of read_csv parameters
```
pd.read_csv(filepath_or_buffer="/Users/JuanGabriel/Developer/AnacondaProjects/python-ml-course/datasets/titanic/titanic3.csv",
sep = ",",
dtype={"ingresos":np.float64, "edad":np.int32},
header=0, names=["ingresos", "edad"],
skiprows=12, index_col=None,
skip_blank_lines=False, na_filter=False
)
```
```
data2 = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt", sep=",") # CAREFUL: this is the TXT, NOT the CSV
data2.head()
data2.columns.values
data_cols = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Columns.csv")
data_col_list = data_cols["Column_Names"].tolist()
data2 = pd.read_csv(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt",
header = None, names = data_col_list)
data2.columns.values
```
# Loading data with the open function
```
data3 = open(mainpath + "/" + "customer-churn-model/Customer Churn Model.txt",'r')
cols = data3.readline().strip().split(",")
n_cols = len(cols)
counter = 0
main_dict = {}
for col in cols:
main_dict[col] = []
for line in data3:
values = line.strip().split(",")
for i in range(len(cols)):
main_dict[cols[i]].append(values[i])
counter += 1
print("The data set has %d rows and %d columns"%(counter, n_cols))
df3 = pd.DataFrame(main_dict)
df3.head()
```
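The manual readline/split loop above is exactly what the stdlib `csv` module automates. A sketch using `csv.DictReader`, with `io.StringIO` standing in for the churn-model file:

```python
import csv
import io

# io.StringIO stands in for open("Customer Churn Model.txt") here.
sample = io.StringIO("State,Account Length,Churn\nKS,128,False\nOH,107,False\n")

# DictReader consumes the header line itself and yields one dict per row.
rows = list(csv.DictReader(sample))
```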
## Reading and writing files
```
infile = mainpath + "/" + "customer-churn-model/Customer Churn Model.txt"
outfile = mainpath + "/" + "customer-churn-model/Table Customer Churn Model.txt"
with open(infile, "r") as infile1:
with open(outfile, "w") as outfile1:
for line in infile1:
fields = line.strip().split(",")
outfile1.write("\t".join(fields))
outfile1.write("\n")
df4 = pd.read_csv(outfile, sep = "\t")
df4.head()
```
# Reading data from a URL
```
medals_url = "http://winterolympicsmedals.com/medals.csv"
medals_data = pd.read_csv(medals_url)
medals_data.head()
```
#### Exercise: downloading data with urllib3
Let's work through an example that uses the urllib3 library to read data from an external URL, process it, and convert it into a *python* data frame before saving it to a local CSV.
```
def downloadFromURL(url, filename, sep = ",", delim = "\n", encoding="utf-8",
                    mainpath = "/content/drive/My Drive/Curso Machine Learning con Python/datasets"):
    # First we import the library and open the connection to the data's website
    import urllib3
    http = urllib3.PoolManager()
    r = http.request('GET', url)
    print("The response status is %d" %(r.status))
    response = r.data ## FIXED: removed a double decode that raised an error
    # The response object contains a binary string, so we decode it into a string using UTF-8
    str_data = response.decode(encoding)
    # We split the string into an array of rows, separating on line breaks
    lines = str_data.split(delim)
    # The first line contains the header, so we extract it
    col_names = lines[0].split(sep)
    n_cols = len(col_names)
    # We create an empty dictionary to hold the information processed from the external URL
    counter = 0
    main_dict = {}
    for col in col_names:
        main_dict[col] = []
    # We process the information row by row, filling the dictionary with the data as before
    for line in lines:
        # We skip the first line, which contains the header and has already been processed
        if(counter > 0):
            # We split each string on commas as the separator
            values = line.strip().split(sep)
            # We append each value to its respective column of the dictionary
            for i in range(len(col_names)):
                main_dict[col_names[i]].append(values[i])
        counter += 1
    print("The data set has %d rows and %d columns"%(counter, n_cols))
    # We convert the processed dictionary into a DataFrame and check that the data are correct
    df = pd.DataFrame(main_dict)
    print(df.head())
    # We choose where to save it (the athletes folder makes the most sense given the context of the analysis)
    fullpath = os.path.join(mainpath, filename)
    # We save it as CSV, JSON or Excel as desired
    df.to_csv(fullpath+".csv")
    df.to_json(fullpath+".json")
    df.to_excel(fullpath+".xls")
    print("The files were saved correctly at: "+fullpath)
    return df
medals_df = downloadFromURL(medals_url, "athletes/downloaded_medals")
medals_df.head()
```
## Ficheros XLS y XLSX
```
mainpath = "/content/drive/My Drive/Curso Machine Learning con Python/datasets"
filename = "titanic/titanic3.xls"
titanic2 = pd.read_excel(mainpath + "/" + filename, "titanic3")
titanic3 = pd.read_excel(mainpath + "/" + filename, "titanic3")
titanic3.to_csv(mainpath + "/titanic/titanic_custom.csv")
titanic3.to_excel(mainpath + "/titanic/titanic_custom.xls")
titanic3.to_json(mainpath + "/titanic/titanic_custom.json")
```
# Use AutoAI to predict credit risk with `ibm-watson-machine-learning`
This notebook demonstrates how to deploy to the Watson Machine Learning service an AutoAI model created in a `Generated Scikit-learn Notebook`, which is composed during AutoAI experiments (to learn more about AutoAI experiments, go to [experiments/autoai](https://github.com/IBM/watson-machine-learning-samples/tree/master/cpd4.0/notebooks/python_sdk/experiments/autoai)).
Some familiarity with bash is helpful. This notebook uses Python 3.8.
## Learning goals
The learning goals of this notebook are:
- Working with the Watson Machine Learning instance
- Online deployment of AutoAI model
- Scoring data using deployed model
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Model upload](#upload)
3. [Web service creation](#deploy)
4. [Scoring](#score)
5. [Clean up](#cleanup)
6. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Contact your Cloud Pak for Data administrator and ask for your account credentials
### Connection to WML
Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username`, and your `api_key`.
```
username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
"username": username,
"apikey": api_key,
"url": url,
"instance_id": 'openshift',
"version": '4.0'
}
```
Alternatively, you can use `username` and `password` to authenticate with WML services.
```
wml_credentials = {
"username": ***,
"password": ***,
"url": ***,
"instance_id": 'openshift',
"version": '4.0'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener noreferrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd4.0/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** you will be using.
```
client.set.default_space(space_id)
```
<a id="upload"></a>
## 2. Upload model
In this section you will learn how to upload the model.
#### Download the data as a pandas DataFrame and the AutoAI model (saved as a scikit-learn pipeline) using `wget`.
**Hint**: To install the required packages, execute the command `!pip install pandas wget numpy`.
We can extract the model from an executed AutoAI experiment using `ibm-watson-machine-learning` with the following command: `experiment.optimizer(...).get_pipeline(astype='sklearn')`.
```
import os, wget
import pandas as pd
import numpy as np
filename = 'german_credit_data_biased_training.csv'
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd4.0/data/credit_risk/german_credit_data_biased_training.csv'
if not os.path.isfile(filename):
wget.download(url)
model_name = "model.pickle"
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd4.0/models/autoai/credit-risk/model.pickle'
if not os.path.isfile(model_name):
wget.download(url)
credit_risk_df = pd.read_csv(filename)
X = credit_risk_df.drop(['Risk'], axis=1)
y = credit_risk_df['Risk']
credit_risk_df.head()
```
#### Custom software_specification
Create a new software specification based on the default Python 3.8 environment, extended by the autoai-libs package.
```
base_sw_spec_uid = client.software_specifications.get_uid_by_name("default_py3.8")
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd4.0/configs/config.yaml'
if not os.path.isfile('config.yaml'):
wget.download(url)
!cat config.yaml
```
The `config.yaml` file describes the details of the package extension. Now you need to store the new package extension with the APIClient.
```
meta_prop_pkg_extn = {
client.package_extensions.ConfigurationMetaNames.NAME: "scikit with autoai-libs",
client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Extension for autoai-libs",
client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_prop_pkg_extn, file_path="config.yaml")
pkg_extn_uid = client.package_extensions.get_uid(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
```
#### Create a new software specification and add the created package extension to it.
```
meta_prop_sw_spec = {
client.software_specifications.ConfigurationMetaNames.NAME: "Mitigated AutoAI bases on scikit spec",
client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for scikit with autoai-libs",
client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_spec_uid}
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_uid = client.software_specifications.get_uid(sw_spec_details)
client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_uid)
```
#### Get the details of created software specification
```
client.software_specifications.get_details(sw_spec_uid)
```
#### Load the AutoAI model saved as `scikit-learn` pipeline.
Depending on the estimator type, the AutoAI model pipeline may contain models from the following frameworks:
- `xgboost`
- `lightgbm`
- `scikit-learn`
```
from joblib import load
pipeline = load(model_name)
```
#### Store the model
```
model_props = {
client.repository.ModelMetaNames.NAME: "AutoAI model",
client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
feature_vector = X.columns
published_model = client.repository.store_model(
model=pipeline,
meta_props=model_props,
training_data=X.values,
training_target=y.values,
feature_names=feature_vector,
label_column_names=['Risk']
)
published_model_uid = client.repository.get_model_id(published_model)
```
#### Get model details
```
client.repository.get_details(published_model_uid)
```
**Note:** You can see that the model is successfully stored in the Watson Machine Learning service.
```
client.repository.list_models()
```
<a id="deploy"></a>
## 3. Create online deployment
You can use the commands below to create an online deployment for the stored model (web service).
```
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of AutoAI model.",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
```
Get deployment id.
```
deployment_id = client.deployments.get_uid(created_deployment)
print(deployment_id)
```
<a id="score"></a>
## 4. Scoring
You can send new scoring records to the web-service deployment using the `score` method.
```
values = X.values
scoring_payload = {
"input_data": [{
'values': values[:5]
}]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
```
<a id="cleanup"></a>
## 5. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
see the steps in this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd4.0/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 6. Summary and next steps
You successfully completed this notebook! You learned how to use Watson Machine Learning for AutoAI model deployment and scoring.
Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html) for more samples, tutorials, documentation, how-tos, and blog posts.
### Author
**Jan Sołtysik** Intern in Watson Machine Learning.
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
# Advanced MVO - custom objectives
PyPortfolioOpt has implemented some of the most common objective functions (e.g. `min_volatility`, `max_sharpe`, `max_quadratic_utility`, `efficient_risk`, `efficient_return`). However, sometimes you may have an idea for a different objective function.
In this cookbook recipe, we cover:
- Minimising transaction costs
- Custom convex objectives
- Custom nonconvex objectives
## Acquiring data
As discussed in the previous notebook, assets are an exogenous input (i.e. you must come up with a list of tickers). We will use `yfinance` to download data for these tickers.
```
import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
tickers = ["BLK", "BAC", "AAPL", "TM", "WMT",
"JD", "INTU", "MA", "UL", "CVS",
"DIS", "AMD", "NVDA", "PBI", "TGT"]
ohlc = yf.download(tickers, period="max")
prices = ohlc["Adj Close"]
prices.tail()
```
## Expected returns and risk models
In this notebook, we will use James-Stein shrinkage and semicovariance (which only penalises downside risk).
```
import pypfopt
pypfopt.__version__
from pypfopt import risk_models, expected_returns
from pypfopt import plotting
mu = expected_returns.capm_return(prices)
S = risk_models.semicovariance(prices)
mu.plot.barh(figsize=(10,5));
plotting.plot_covariance(S, plot_correlation=True);
```
## Min volatility with a transaction cost objective
Let's say that you already have a portfolio and now want to optimize it. It could be quite expensive to completely reallocate, so you may want to take transaction costs into account. PyPortfolioOpt provides a simple objective to account for this.
Note: this objective will not play nicely with `max_sharpe`.
```
# Pretend that you started with a default-weight allocation
initial_weights = np.array([1/len(tickers)] * len(tickers))
from pypfopt import EfficientFrontier, objective_functions
ef = EfficientFrontier(mu, S)
# 1% broker commission
ef.add_objective(objective_functions.transaction_cost, w_prev=initial_weights, k=0.01)
ef.min_volatility()
weights = ef.clean_weights()
weights
```
Notice that many of the weights are 0.06667, i.e. your original equal weight. In fact, the only change has been an allocation of AMD's weight to JD. If we lower the cost `k`, the allocation will change more:
```
ef = EfficientFrontier(mu, S)
ef.add_objective(objective_functions.transaction_cost, w_prev=initial_weights, k=0.001)
ef.min_volatility()
weights = ef.clean_weights()
weights
```
The optimizer seems to really like JD. The reason for this is that it is highly anticorrelated to other assets (notice the dark column in the covariance plot). Hence, historically, it adds a lot of diversification. But it is dangerous to place too much emphasis on what happened in the past, so we may want to limit the asset weights.
In addition, we notice that 4 stocks have now been allocated zero weight, which may be undesirable. Both of these problems can be fixed by adding an [L2 regularisation objective](https://pyportfolioopt.readthedocs.io/en/latest/EfficientFrontier.html#more-on-l2-regularisation).
```
ef = EfficientFrontier(mu, S)
ef.add_objective(objective_functions.transaction_cost, w_prev=initial_weights, k=0.001)
ef.add_objective(objective_functions.L2_reg)
ef.min_volatility()
weights = ef.clean_weights()
weights
```
This has had too much of an evening-out effect. After all, if the resulting allocation is going to be so close to equal weights, we may as well stick with our initial allocation. We can reduce the strength of the L2 regularisation by reducing `gamma`:
```
ef = EfficientFrontier(mu, S)
ef.add_objective(objective_functions.transaction_cost, w_prev=initial_weights, k=0.001)
ef.add_objective(objective_functions.L2_reg, gamma=0.05) # default is 1
ef.min_volatility()
weights = ef.clean_weights()
weights
ef.portfolio_performance(verbose=True);
```
This portfolio is now reasonably balanced, but also puts significantly more weight on JD.
```
pd.Series(weights).plot.pie(figsize=(10,10));
```
## Custom convex objectives
PyPortfolioOpt comes with the following built-in objective functions, as of v1.2.1:
- Portfolio variance (i.e square of volatility)
- Portfolio return
- Sharpe ratio
- L2 regularisation (minimising this reduces the number of negligible weights)
- Quadratic utility
- Transaction cost model (a simple one)
However, you may want a different objective. If this new objective is **convex**, you can optimize a portfolio with the full benefit of PyPortfolioOpt's modular syntax, for example adding other constraints and objectives.
To demonstrate this, we will minimise the **logarithmic-barrier** function suggested in the paper 60 Years of Portfolio Optimization, by Kolm et al (2014):
$$f(w, S, k) = w^T S w - k \sum_{i=1}^N \ln w_i$$
We must first convert this mathematical objective into the language of cvxpy, a powerful modelling language for convex optimization problems. It is clean and easy to use; the only caveat is that objectives must be expressed with `cvxpy` functions, a list of which can be found [here](https://www.cvxpy.org/tutorial/functions/index.html).
```
import cvxpy as cp
# Note: functions are minimised. If you want to maximise an objective, stick a minus sign in it.
def logarithmic_barrier_objective(w, cov_matrix, k=0.1):
log_sum = cp.sum(cp.log(w))
var = cp.quad_form(w, cov_matrix)
return var - k * log_sum
```
Once we have written the objective function, we can just use the `ef.convex_objective()` to minimise the objective.
```
ef = EfficientFrontier(mu, S, weight_bounds=(0.01, 0.2))
ef.convex_objective(logarithmic_barrier_objective, cov_matrix=S, k=0.001)
weights = ef.clean_weights()
weights
ef.portfolio_performance(verbose=True);
```
This is compatible with all the constraints discussed in the previous recipe. Let's say that we want to limit JD's weight to 15%.
```
ef = EfficientFrontier(mu, S, weight_bounds=(0.01, 0.2))
jd_index = ef.tickers.index("JD") # get the index of JD
ef.add_constraint(lambda w: w[jd_index] <= 0.15)
ef.convex_objective(logarithmic_barrier_objective, cov_matrix=S, k=0.001)
weights = ef.clean_weights()
weights
```
## Custom nonconvex objectives
In some cases, you may be trying to optimize for nonconvex objectives. Optimization in general is a very hard problem, so please be aware that you may have mixed results in that case. Convex problems, on the other hand, are well understood and can be solved with nice theoretical guarantees.
PyPortfolioOpt does offer some functionality for nonconvex optimization, but it is not really encouraged. In particular, nonconvex optimization is not compatible with PyPortfolioOpt's modular constraints API.
As an example, we will use the Deviation Risk Parity objective from Kolm et al (2014). Because we are not using a convex solver, we don't have to define it using `cvxpy` functions.
```
def deviation_risk_parity(w, cov_matrix):
diff = w * np.dot(cov_matrix, w) - (w * np.dot(cov_matrix, w)).reshape(-1, 1)
return (diff ** 2).sum().sum()
ef = EfficientFrontier(mu, S, weight_bounds=(0.01, 0.12))
ef.nonconvex_objective(deviation_risk_parity, ef.cov_matrix)
weights = ef.clean_weights()
weights
```
However, let's say we now want to enforce that JD has a weight of 10%. In the convex case, this would be as simple as:
```python
ef.add_constraint(lambda w: w[jd_index] == 0.10)
```
But unfortunately, scipy does not allow for such intuitive syntax. You will need to rearrange your constraints so that each function is either `== 0` (for `"eq"`) or `>= 0` (for `"ineq"`).
```python
constraints = [
# First constraint
{"type": "eq", # equality constraint,
"fun": lambda w: w[1] - 0.2}, # the equality functions are assumed to = 0
# Second constraint
{"type": "ineq", # inequality constraint
"fun": lambda w: w[0] - 0.5} # inequality functions are assumed to be >= 0
]
```
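The scipy convention can be checked with a tiny self-contained problem (a sketch, not from the notebook — the toy "portfolio" below just minimises the sum of squared weights, i.e. it assumes an identity covariance matrix):

```python
import numpy as np
from scipy.optimize import minimize

n = 4

def portfolio_variance(w):
    # Identity covariance, so the variance is just w @ w.
    return w @ w

constraints = [
    {"type": "eq", "fun": lambda w: np.sum(w) - 1},  # eq: function == 0, i.e. sum(w) == 1
    {"type": "ineq", "fun": lambda w: w[1] - 0.3},   # ineq: function >= 0, i.e. w[1] >= 0.3
]
res = minimize(portfolio_variance, x0=np.full(n, 1 / n),
               method="SLSQP", constraints=constraints)
```

Since the unconstrained optimum (equal weights of 0.25) violates `w[1] >= 0.3`, the inequality binds and the solver returns `w[1]` at its bound, with the remaining weight spread evenly.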
For more information, you can consult the [scipy docs](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html), but they aren't very helpful.
```
ef = EfficientFrontier(mu, S, weight_bounds=(0.01, 0.12))
ef.nonconvex_objective(
deviation_risk_parity,
objective_args=S,
weights_sum_to_one=True,
constraints=[
{"type": "eq", "fun": lambda w: w[jd_index] - 0.10},
],
)
weights = ef.clean_weights()
weights
```
## More examples of nonconvex objectives
The scipy format is not intuitive and is hard to explain, so here are a bunch of examples (adapted from the tests). Some of these are actually convex, so you should use `convex_objective` instead.
```
# Another example of deviation risk parity
def deviation_risk_parity(w, cov_matrix):
n = cov_matrix.shape[0]
rp = (w * (cov_matrix @ w)) / cp.quad_form(w, cov_matrix)
return cp.sum_squares(rp - 1 / n).value
ef = EfficientFrontier(mu, S)
ef.nonconvex_objective(deviation_risk_parity, ef.cov_matrix)
weights = ef.clean_weights()
weights
# Deviation risk parity with weight bound on the first asset
ef = EfficientFrontier(mu, S)
ef.nonconvex_objective(deviation_risk_parity,
ef.cov_matrix,
constraints=[{"type":"eq", "fun":lambda w: w[0] - 0.1}])
weights = ef.clean_weights()
weights
# Market-neutral efficient risk.
# Please use ef.efficient_risk() for anything serious.
target_risk = 0.19
ef = EfficientFrontier(mu, S, weight_bounds=(None, None))
# Weights sum to zero
weight_constr = {"type": "eq", "fun": lambda w: np.sum(w)}
# Portfolio vol less than target vol
risk_constr = {
"type": "eq",
"fun": lambda w: target_risk ** 2 - np.dot(w.T, np.dot(ef.cov_matrix, w)),
}
constraints = [weight_constr, risk_constr]
ef.nonconvex_objective(
lambda w, mu: -w.T.dot(mu), # min negative return i.e max return
objective_args=(ef.expected_returns),
weights_sum_to_one=False,
constraints=constraints,
)
weights = ef.clean_weights()
weights
# Utility objective - you could actually use ef.max_quadratic_utility
ef = EfficientFrontier(mu, S)
def utility_obj(weights, mu, cov_matrix, k=1):
return -weights.dot(mu) + k * np.dot(weights.T, np.dot(cov_matrix, weights))
ef.nonconvex_objective(
utility_obj,
objective_args=(ef.expected_returns, ef.cov_matrix, 1)
# default is for weights to sum to 1
)
weights = ef.clean_weights()
weights
ef.weights.sum()
# Kelly objective with weight bounds on zeroth asset
def kelly_objective(w, e_returns, cov_matrix, k=3):
variance = np.dot(w.T, np.dot(cov_matrix, w))
objective = variance * 0.5 * k - np.dot(w, e_returns)
return objective
lower_bounds, upper_bounds = 0.01, 0.3
ef = EfficientFrontier(mu, S)
ef.nonconvex_objective(
kelly_objective,
objective_args=(ef.expected_returns, ef.cov_matrix, 1000),
constraints=[
{"type": "eq", "fun": lambda w: np.sum(w) - 1},
{"type": "ineq", "fun": lambda w: w[0] - lower_bounds},
{"type": "ineq", "fun": lambda w: upper_bounds - w[0]},
],
)
weights = ef.clean_weights()
weights
```
# Customizing Federated Computations
In this next part of the tutorial it will be up to you to customize the `BaseModelOwner` or the `BaseDataOwner` to implement a new way of computing the gradients and securely aggregating them.
### Boilerplate
First up is the boilerplate code from the previous part. This includes configuring TF Encrypted and importing all of the dependencies. We've removed the `default_model_fn` and `secure_mean` functions, as you'll write new versions of those.
```
import tensorflow as tf
import tf_encrypted as tfe
players = [
'server0',
'server1',
'crypto-producer',
'model-owner',
'data-owner-0',
'data-owner-1',
'data-owner-2',
]
config = tfe.EagerLocalConfig(players)
tfe.set_config(config)
tfe.set_protocol(tfe.protocol.Pond())
from players import BaseModelOwner, BaseDataOwner
from func_lib import default_model_fn, secure_mean, evaluate_classifier
from util import split_dataset
from download import download_mnist
NUM_DATA_OWNERS = 3
BATCH_SIZE = 256
DATA_ITEMS = 60000
BATCHES = DATA_ITEMS // NUM_DATA_OWNERS // BATCH_SIZE
LEARNING_RATE = 0.01
```
### Implementing Reptile Meta-Learning Algorithm
In this section you will use the information from the previous tutorial to help implement new functions for the `model_fn` and the `aggregator_fn`. We also recommend checking out the implementations of `default_model_fn` and `secure_mean` located in [func_lib.py](./func_lib.py) for some help figuring out where to start.
We've done this with Reptile and recommend following along with it, but if you have another idea, feel free to implement that!
TODO: might need to add details to the below paragraph
The Reptile meta-learning algorithm computes k steps of SGD. When paired with the `secure_mean` aggregator_fn, this model_fn corresponds to using g_k as the outer gradient update. See the Reptile paper for more: https://arxiv.org/abs/1803.02999
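Before the TF Encrypted version, here is a minimal plain-Python illustration of the Reptile update on a made-up 1-D task (purely illustrative; the loss, step sizes, and step count are all invented for the sketch):

```python
def inner_sgd(w, grad_fn, lr=0.1, k=3):
    # k steps of plain SGD on a single task
    for _ in range(k):
        w = w - lr * grad_fn(w)
    return w

grad = lambda w: 2.0 * (w - 3.0)   # gradient of the toy task loss (w - 3)^2
w = 0.0                            # current meta-parameters
w_k = inner_sgd(w, grad)           # adapted weights after k inner steps
epsilon = 0.5                      # outer (meta) step size
w_new = w + epsilon * (w_k - w)    # Reptile: move toward the adapted weights
```

The key point is that the outer update uses the *difference* between the adapted weights and the starting weights, rather than a single gradient.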
```
def reptile_model_fn(data_owner, iterations=3,
grad_fn=default_model_fn, **kwargs):
for _ in range(iterations):
grads_k = grad_fn(data_owner, **kwargs)
data_owner.optimizer.apply_gradients(
zip(grads_k, data_owner.model.trainable_variables),
)
return [var.read_value() for var in data_owner.model.trainable_variables]
def secure_reptile(collected_inputs, model):
aggr_weights = secure_mean(collected_inputs)
weights_deltas = [
weight - update for (weight, update) in zip(
model.trainable_variables, aggr_weights,
)
]
return weights_deltas
```
### Customize Base Classes
```
class ModelOwner(BaseModelOwner):
@classmethod
def model_fn(cls, data_owner):
return reptile_model_fn(data_owner)
@classmethod
def aggregator_fn(cls, model_gradients, model):
return secure_reptile(model_gradients, model)
@classmethod
def evaluator_fn(cls, model_owner):
return evaluate_classifier(model_owner)
# TODO its not super clear how DataOwner should come into the picture here when customizing ModelOwner is sufficient
class DataOwner(BaseDataOwner):
pass
```
### Continue Boilerplate
In this section we continue to fill in some of the boilerplate code from the previous tutorial.
```
download_mnist()
split_dataset("./data", NUM_DATA_OWNERS, DATA_ITEMS)
model = tf.keras.Sequential((
tf.keras.layers.Dense(512, input_shape=[None, 28 * 28],
activation='relu'),
tf.keras.layers.Dense(10),
))
model.build()
loss = tf.keras.losses.sparse_categorical_crossentropy
opt = tf.keras.optimizers.Adam(LEARNING_RATE)
model_owner = ModelOwner("model-owner",
"{}/train.tfrecord".format("./data"),
model, loss,
optimizer=opt)
```
In this next part consider how you might use another learning rate to customize the reptile training loop.
```
# Simplify this with a loop?
data_owners = [DataOwner("data-owner-{}".format(i),
"{}/train{}.tfrecord".format("./data", i),
model, loss,
optimizer=opt)
for i in range(NUM_DATA_OWNERS)]
```
Now train! Remember we're using TensorFlow 2.0, so it should be easy to explore the computations and see the actual values being passed around. You can use this to help debug any problems you run into while implementing the Reptile meta-learning algorithm.
```
model_owner.fit(data_owners, rounds=BATCHES, evaluate_every=10)
print("\nDone training!!")
```
# Implementing a Recommender System with SageMaker, MXNet, and Gluon
## _**Making Product - Shoes Recommendations Using Neural Networks and Embeddings**_
---
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Explore](#Explore)
1. [Clean](#Clean)
1. [Prepare](#Prepare)
1. [Train Locally](#Train-Locally)
1. [Define Network](#Define-Network)
1. [Set Parameters](#Set-Parameters)
1. [Execute](#Execute)
1. [Train with SageMaker](#Train-with-SageMaker)
1. [Wrap Code](#Wrap-Code)
1. [Move Data](#Move-Data)
1. [Submit](#Submit)
1. [Host](#Host)
1. [Evaluate](#Evaluate)
1. [Wrap-up](#Wrap-up)
---
## Background
#### In many ways, recommender systems were a catalyst for the current popularity of machine learning. One of Amazon's earliest successes was the "Customers who bought this, also bought..." feature, while the million dollar Netflix Prize spurred research, raised public awareness, and inspired numerous other data science competitions.
#### Recommender systems can utilize a multitude of data sources and ML algorithms, and most combine various unsupervised, supervised, and reinforcement learning techniques into a holistic framework. However, the core component is almost always a model which predicts a user's rating (or purchase) for a certain item based on that user's historical ratings of similar items as well as the behavior of other similar users. The minimal required dataset for this is a history of user item ratings. In our case, we'll use 1 to 5 star ratings from over 2M Amazon customers. More details on this dataset can be found at its [AWS Public Datasets page](https://s3.amazonaws.com/amazon-reviews-pds/readme.html).
#### Matrix factorization has been the cornerstone of most user-item prediction models. This method starts with the large, sparse, user-item ratings in a single matrix, where users index the rows, and items index the columns. It then seeks to find two lower-dimensional, dense matrices which, when multiplied together, preserve the information and relationships in the larger matrix.
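The factorization idea can be illustrated with a tiny numpy example (purely illustrative; the sizes and random factors below are made up, not the Amazon data). A matrix that truly has rank k is exactly recovered from two k-dimensional factor matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 2
U = rng.normal(size=(n_users, k))   # dense user factors
V = rng.normal(size=(n_items, k))   # dense item factors
R = U @ V.T                         # "ratings" matrix with rank k
# A rank-k truncated SVD recovers the low-rank structure exactly here:
u, s, vt = np.linalg.svd(R)
R_hat = (u[:, :k] * s[:k]) @ vt[:k]
```

With real ratings the matrix is sparse and only approximately low-rank, so the factors are instead fit by minimising reconstruction error over the observed entries.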

#### Matrix factorization has been extended and generalized with deep learning and embeddings. These techniques allow us to introduce non-linearities for enhanced performance and flexibility. This notebook will fit a neural network-based model to generate recommendations for the Amazon dataset. It will start by exploring our data in the notebook and even training a model on a sample of the data. Later we'll expand to the full dataset and fit our model using a SageMaker managed training cluster. We'll then deploy to an endpoint and evaluate our method.
---
## Setup
#### _This notebook was created and tested on an ml.p2.xlarge notebook instance._
#### Let's start by specifying:
#### - The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
#### - The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the `get_execution_role()` call with the appropriate full IAM role arn string(s).
```
!pip install ipython-autotime
#### To measure all running time
# https://github.com/cpcloud/ipython-autotime
%load_ext autotime
bucket = 'dse-cohort5-group1'
prefix = 'sagemaker/amazon_reviews_us_Shoes_v1_00'
import sagemaker
role = sagemaker.get_execution_role()
```
Now let's load the Python libraries we'll need for the remainder of this example notebook.
```
# Install a scikit-image package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install scikit-image==0.14.2
import os
import mxnet as mx
from mxnet import gluon, nd, ndarray
from mxnet.metric import MSE
import pandas as pd
import numpy as np
import sagemaker
from sagemaker.mxnet import MXNet
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
import random_tuner as rt
import boto3
import json
import matplotlib.pyplot as plt
# for basic visualizations
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
# for advanced visualizations
import plotly.offline as py
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
from plotly import tools
init_notebook_mode(connected = True)
import plotly.figure_factory as ff
```
---
## Data - https://s3.amazonaws.com/amazon-reviews-pds/tsv/index.txt
### Explore
Let's start by bringing in our dataset from an S3 public bucket.
More details on this dataset can be found at its [AWS Public Datasets page](https://s3.amazonaws.com/amazon-reviews-pds/readme.html).
_Note, because this dataset is over a half gigabyte, the load from S3 may take ~10 minutes. Also, since Amazon SageMaker Notebooks start with a 5GB persistent volume by default, and we don't need to keep this data on our instance for long, we'll bring it to the temporary volume (which has up to 20GB of storage)._
```
!rm -rf /tmp/recsys/
!aws s3 ls s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_df.csv.gz
!rm -rf /tmp/recsys/
!mkdir /tmp/recsys/
!aws s3 cp s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_df.csv.gz /tmp/recsys/
```
Let's read the data into a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) so that we can begin to understand it.
*Note, we'll set `error_bad_lines=False` when reading the file in, as there appear to be a very small number of records which would otherwise create a problem.*
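As an aside: in newer pandas releases (1.3 and later) `error_bad_lines` is deprecated in favour of the `on_bad_lines` parameter. A sketch with a toy in-memory file (not the churn data):

```python
import io

import pandas as pd

# Toy CSV; the second data row has an extra field and is malformed.
raw = io.StringIO("a,b\n1,2\n3,4,5\n6,7\n")
df_ok = pd.read_csv(raw, on_bad_lines="skip")  # pandas >= 1.3 spelling
```

The malformed row is silently dropped, matching the behaviour of the older `error_bad_lines=False`.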
```
!ls -alh /tmp/recsys/Apparel_Jewelry_Shoes_df.csv.gz
df = pd.read_csv('/tmp/recsys/Apparel_Jewelry_Shoes_df.csv.gz', error_bad_lines=False)
df.sample(n=10)
!ls -alh
```
# Amazon product dataset data analysis
We can see this dataset includes information like:
- `marketplace`: 2-letter country code (in this case all "US").
- `customer_id`: Random identifier that can be used to aggregate reviews written by a single author.
- `review_id`: A unique ID for the review.
- `product_id`: The Amazon Standard Identification Number (ASIN). `http://www.amazon.com/dp/<ASIN>` links to the product's detail page.
- `product_parent`: The parent of that ASIN. Multiple ASINs (color or format variations of the same product) can roll up into a single product parent.
- `product_title`: Title description of the product.
- `product_category`: Broad product category that can be used to group reviews (in this case, apparel, jewelry, and shoes).
- `star_rating`: The review's rating (1 to 5 stars).
- `helpful_votes`: Number of helpful votes for the review.
- `total_votes`: Number of total votes the review received.
- `vine`: Was the review written as part of the [Vine](https://www.amazon.com/gp/vine/help) program?
- `verified_purchase`: Was the review from a verified purchase?
- `review_headline`: The title of the review itself.
- `review_body`: The text of the review.
- `review_date`: The date the review was written.
- `catalog`: The catalog category.
For this example, let's limit ourselves to `customer_id`, `product_id`, and `star_rating`. Including additional features in our recommendation system could be beneficial, but would require substantial processing (particularly the text data) which would take us beyond the scope of this notebook.
*Note: we'll keep `product_title` on the dataset to help verify our recommendations later in the notebook, but it will not be used in algorithm training.*
### Because most customers haven't used most products, and people rate fewer products than they actually buy, we'd expect our data to be sparse. Our algorithm should work well with this sparse problem in general, but we may still want to clean out some of the long tail. Let's look at some basic percentiles to confirm.
```
# shape of data
df.shape
# Describing the data set
df.describe()
# checking if there is any null data or not
df.isnull().sum()
# remove null data
df = df.dropna()
# checking if there is any null data or not
df.isnull().sum()
# Describing the data according to the ratings
df.groupby('star_rating').describe()
df.columns
df = df[['customer_id', 'product_id', 'star_rating', 'product_parent', 'product_category', 'product_title', 'helpful_votes']]
df.shape
df.sample(n=10)
```
## Select voted review only
```
df.shape
df = df[df['helpful_votes'] > 0]
df.shape
12005951 - 3043992  # row count before minus after the helpful-votes filter
customers = df['customer_id'].value_counts()
products = df['product_id'].value_counts()
quantiles = [0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.96, 0.97, 0.98, 0.99, 1]
print('customers\n', customers.quantile(quantiles))
print('products\n', products.quantile(quantiles))
```
### Clean
#### As we can see, only about 1% of customers have rated 7 or more products, and only 1% of products have been rated by 11+ customers.
## Let's filter out this long tail.
```
customers = customers[customers >= 8]
products = products[products >= 12]
reduced_df = df.merge(pd.DataFrame({'customer_id': customers.index})).merge(pd.DataFrame({'product_id': products.index}))
reduced_df.shape
reduced_df.to_csv('Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv', index_label=False)
!ls -alh Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv
!rm -rf Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv.gz
!gzip Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv
!ls -alh
!aws s3 cp Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv.gz s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv.gz
```
Now, we'll recreate our customer and product lists, since some customers pass the review-count threshold but all of their reviews are on products that fall below the product threshold (and vice versa).
```
!aws s3 ls s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_help_voted_And_cut_lognTail.csv.gz
customers = reduced_df['customer_id'].value_counts()
products = reduced_df['product_id'].value_counts()
```
Next, we'll number each user and item, giving them their own sequential index. This will allow us to hold the information in a sparse format where the sequential indices indicate the row and column in our ratings matrix.
```
customer_index = pd.DataFrame({'customer_id': customers.index, 'user': np.arange(customers.shape[0])})
product_index = pd.DataFrame({'product_id': products.index,
'item': np.arange(products.shape[0])})
reduced_df = reduced_df.merge(customer_index).merge(product_index)
reduced_df.head()
reduced_df.shape
```
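With sequential `user` and `item` indices in place, each rating becomes a (row, column, value) triple of a sparse ratings matrix. A small illustration using SciPy's COO format with made-up indices (the model consumes the indices directly rather than this matrix; this only shows the representation):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Made-up sequential indices like those produced by the merges above
users = np.array([0, 0, 1, 2])                     # row index per rating
items = np.array([1, 3, 0, 2])                     # column index per rating
stars = np.array([5, 3, 4, 2], dtype=np.float32)   # the rating values

# Sparse ratings matrix: rows are users, columns are items
R = coo_matrix((stars, (users, items)), shape=(3, 4))
print(R.toarray())
```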
### Prepare
Let's start by splitting into training and test sets. This will allow us to estimate the model's accuracy on products our customers rated that weren't included in training.
```
test_df = reduced_df.groupby('customer_id').last().reset_index()
train_df = reduced_df.merge(test_df[['customer_id', 'product_id']],
on=['customer_id', 'product_id'],
how='outer',
indicator=True)
train_df = train_df[(train_df['_merge'] == 'left_only')]
```
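The `merge` with `how='outer'` and `indicator=True` above acts as an anti-join, dropping each customer's held-out rating from the training set. A minimal sketch on toy data (column names match ours, values are made up):

```python
import pandas as pd

# Toy ratings: three customers, five rows (values are made up)
ratings = pd.DataFrame({'customer_id': [1, 1, 2, 2, 3],
                        'product_id': ['A', 'B', 'A', 'C', 'D'],
                        'star_rating': [5, 3, 4, 2, 5]})

# Hold out each customer's last rating as the test set
test = ratings.groupby('customer_id').last().reset_index()

# Outer merge + indicator marks rows as left_only / right_only / both;
# keeping left_only rows is an anti-join against the test pairs
merged = ratings.merge(test[['customer_id', 'product_id']],
                       on=['customer_id', 'product_id'],
                       how='outer', indicator=True)
train = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(len(ratings), len(test), len(train))
```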
Now, we can convert our Pandas DataFrames into MXNet NDArrays, use those to create a member of the SparseMatrixDataset class, and add that to an MXNet Data Iterator. This process is the same for both the test and train sets.
```
batch_size = 1024
train = gluon.data.ArrayDataset(nd.array(train_df['user'].values, dtype=np.float32),
nd.array(train_df['item'].values, dtype=np.float32),
nd.array(train_df['star_rating'].values, dtype=np.float32))
test = gluon.data.ArrayDataset(nd.array(test_df['user'].values, dtype=np.float32),
nd.array(test_df['item'].values, dtype=np.float32),
nd.array(test_df['star_rating'].values, dtype=np.float32))
train_iter = gluon.data.DataLoader(train, shuffle=True, num_workers=4, batch_size=batch_size, last_batch='rollover')
test_iter = gluon.data.DataLoader(test, shuffle=True, num_workers=4, batch_size=batch_size, last_batch='rollover')
```
---
## Train Locally
### Define Network
Let's start by defining the neural network version of our matrix factorization task. In this case, our network is quite simple. The main components are:
- [Embeddings](https://mxnet.incubator.apache.org/api/python/gluon/nn.html#mxnet.gluon.nn.Embedding) which turn our indexes into dense vectors of fixed size. In this case, 64.
- [Dense layers](https://mxnet.incubator.apache.org/api/python/gluon.html#mxnet.gluon.nn.Dense) with ReLU activation. Each dense layer has the same number of units as our number of embeddings. Our ReLU activation here also adds some non-linearity to our matrix factorization.
- [Dropout layers](https://mxnet.incubator.apache.org/api/python/gluon.html#mxnet.gluon.nn.Dropout) which can be used to prevent over-fitting.
- Matrix multiplication of our user matrix and our item matrix to create an estimate of our rating matrix.
```
# Matrix factorization network
class MFBlock(gluon.HybridBlock):
    def __init__(self, max_users, max_items, num_emb, dropout_p=0.5):
        super(MFBlock, self).__init__()
        self.max_users = max_users
        self.max_items = max_items
        self.dropout_p = dropout_p
        self.num_emb = num_emb
        with self.name_scope():
            self.user_embeddings = gluon.nn.Embedding(max_users, num_emb)
            self.item_embeddings = gluon.nn.Embedding(max_items, num_emb)
            self.dropout_user = gluon.nn.Dropout(dropout_p)
            self.dropout_item = gluon.nn.Dropout(dropout_p)
            self.dense_user = gluon.nn.Dense(num_emb, activation='relu')
            self.dense_item = gluon.nn.Dense(num_emb, activation='relu')

    def hybrid_forward(self, F, users, items):
        a = self.user_embeddings(users)
        a = self.dense_user(a)
        b = self.item_embeddings(items)
        b = self.dense_item(b)
        predictions = self.dropout_user(a) * self.dropout_item(b)
        predictions = F.sum(predictions, axis=1)
        return predictions
```
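The forward pass above is, at its core, a per-pair inner product of transformed user and item embeddings. A NumPy sketch of just the scoring reduction (`F.sum(a * b, axis=1)`), using made-up random tables in place of the learned embeddings and omitting the dense and dropout layers:

```python
import numpy as np

rng = np.random.default_rng(0)
num_emb = 4

# Made-up embedding tables for 3 users and 5 items (the real ones are learned)
user_emb = rng.normal(size=(3, num_emb))
item_emb = rng.normal(size=(5, num_emb))

def predict(users, items):
    # Elementwise product summed over the embedding axis -- the same
    # reduction as F.sum(predictions, axis=1) in hybrid_forward
    return np.sum(user_emb[users] * item_emb[items], axis=1)

scores = predict(np.array([0, 0, 1]), np.array([2, 4, 2]))
print(scores.shape)
```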
## Plot model
```
# print(net.summary)
import mxnet as mx
user = mx.symbol.Variable('user')
item = mx.symbol.Variable('item')
score = mx.symbol.Variable('score')
# Set dummy dimensions
k = 64
max_user = 100
max_item = 50
# user feature lookup
user = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k)
user_drop = mx.symbol.Dropout(data = user)
_user = user * user_drop
# item feature lookup
item = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k)
item_drop = mx.symbol.Dropout(data = item)
_item = item * item_drop
# user = mx.symbol.Dropout()
# predict by the inner product, which is elementwise product and then sum
net = _user * _item
# net = mx.symbol.sum_axis(data = net, axis = 1)
net = mx.symbol.Dropout(data = net)
net = mx.symbol.Flatten(data = net)
# loss layer
# net = mx.symbol.LinearRegressionOutput(data = net, label = score)
# Visualize your network
mx.viz.plot_network(net)
num_embeddings = 64
net = MFBlock(max_users=customer_index.shape[0],
max_items=product_index.shape[0],
num_emb=num_embeddings,
dropout_p=0.5)
type(net)
```
### Set Parameters
Let's initialize network weights and set our optimization parameters.
## Set optimization parameters
```
# Set optimization parameters
opt = 'sgd'
lr = 0.02
momentum = 0.9
wd = 0.
# Initialize network parameters
ctx = mx.gpu()
print("mx.gpu(): ", ctx)
net.collect_params().initialize(mx.init.Xavier(magnitude=60),
ctx=ctx,
force_reinit=True)
net.hybridize()
trainer = gluon.Trainer(net.collect_params(),
opt,
{'learning_rate': lr,
'wd': wd,
'momentum': momentum})
```
### Execute
Let's define a function to carry out the training of our neural network.
```
def execute(train_iter, test_iter, net, epochs, ctx):
    loss_function = gluon.loss.L2Loss()
    for e in range(epochs):
        print("epoch: {}".format(e))
        for i, (user, item, label) in enumerate(train_iter):
            user = user.as_in_context(ctx)
            item = item.as_in_context(ctx)
            label = label.as_in_context(ctx)
            with mx.autograd.record():
                output = net(user, item)
                loss = loss_function(output, label)
            loss.backward()
            trainer.step(batch_size)
        print("EPOCH {}: MSE ON TRAINING and TEST: {}. {}".format(e,
              eval_net(train_iter, net, ctx, loss_function),
              eval_net(test_iter, net, ctx, loss_function)))
    print("end of training")
    return net
```
#### Let's also define a function which evaluates our network on a given dataset. This is called by our `execute` function above to provide mean squared error values on our training and test datasets.
```
from mxnet.metric import MSE  # metric class used below

def eval_net(data, net, ctx, loss_function):
    acc = MSE()
    for i, (user, item, label) in enumerate(data):
        user = user.as_in_context(ctx)
        item = item.as_in_context(ctx)
        label = label.as_in_context(ctx)
        predictions = net(user, item).reshape((batch_size, 1))
        acc.update(preds=[predictions], labels=[label])
    return acc.get()[1]
```
Now, let's train for a few epochs.
```
%%time
epochs = 3
trained_net = execute(train_iter, test_iter, net, epochs, ctx)
```
# train and prediction in Local
```
%%time
# Set optimization parameters
epochs = 100
opt = 'sgd'
lr = 0.02
momentum = 0.9
wd = 0.
trainer = gluon.Trainer(net.collect_params(),
opt,
{'learning_rate': lr,
'wd': wd,
'momentum': momentum})
trained_net = execute(train_iter, test_iter, net, epochs, ctx)
trained_net.summary
```
#### Early Validation
We can see our training error going down, but our validation accuracy bounces around a bit. Let's check how our model is predicting for an individual user. We could pick randomly, but for this case, let's try user #6.
```
product_index['u6_predictions'] = trained_net(nd.array([6] * product_index.shape[0]).as_in_context(ctx),
nd.array(product_index['item'].values).as_in_context(ctx)).asnumpy()
product_index.sort_values('u6_predictions', ascending=False)
```
Now let's compare this to the predictions for another user (we'll try user #7).
```
product_index['u7_predictions'] = trained_net(nd.array([7] * product_index.shape[0]).as_in_context(ctx),
nd.array(product_index['item'].values).as_in_context(ctx)).asnumpy()
product_index.sort_values('u7_predictions', ascending=False)
```
The predicted ratings are different between the two users, but the same top (and bottom) items for user #6 appear for #7 as well. Let's look at the correlation across the full set of 38K items to see if this relationship holds.
```
product_index[['u6_predictions', 'u7_predictions']].plot.scatter('u6_predictions', 'u7_predictions')
plt.show()
```
We can see that this correlation is nearly perfect. Essentially the average rating of items dominates across users and we'll recommend the same well-reviewed items to everyone. As it turns out, we can add more embeddings and this relationship will go away since we're better able to capture differential preferences across users.
However, with just a 64 dimensional embedding, it took 7 minutes to run just 3 epochs. If we ran this outside of our Notebook Instance, we could run larger jobs and move on to other work in the meantime, which would improve productivity.
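The near-perfect relationship in the scatter plot corresponds to a Pearson correlation close to 1. A quick illustration with made-up prediction vectors for two users:

```python
import numpy as np

# Made-up predicted ratings for the same five items from two users
u6 = np.array([4.1, 3.9, 4.5, 2.2, 3.0])
u7 = np.array([4.0, 3.8, 4.6, 2.1, 3.1])

# Off-diagonal entry of the correlation matrix is the Pearson correlation;
# values near 1.0 mean the two users share essentially the same ranking
r = np.corrcoef(u6, u7)[0, 1]
print(round(r, 3))
```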
# Predict for all users in local
```
%%time
# Set optimization parameters
epochs = 500
opt = 'sgd'
lr = 0.02
momentum = 0.9
wd = 0.
trainer = gluon.Trainer(net.collect_params(),
opt,
{'learning_rate': lr,
'wd': wd,
'momentum': momentum})
trained_net = execute(train_iter, test_iter, net, epochs, ctx)
products.head()
reduced_df.columns
reduced_df.head(n=2)
# customer_index_list=[1,2,3]
customer_index_list = customer_index['user'].tolist()
# # 0 to 1000 customer
# customer_index_list = [*range(0, 1000, 1)]
print("Total number of customers: ", len(customer_index_list))
product_index_local = pd.DataFrame({'product_id': products.index,
'product_url': 'https://www.amazon.com/dp/'+products.index,
'item': np.arange(products.shape[0])})
all_predictions_from_user = pd.DataFrame(columns=['customer_id', 'product_id', 'product_url', 'prediction', 'product_title'])
for user_index in customer_index_list:
print("test_customer_index:", user_index)
product_index_local['prediction'] = trained_net(nd.array([user_index] * product_index_local.shape[0]).as_in_context(ctx),
nd.array(product_index_local['item'].values).as_in_context(ctx)).asnumpy()
product_index_local['customer_id'] = customer_index[customer_index['user'] == user_index]['customer_id'].values.tolist()[0]
# titles
titles = reduced_df.groupby('product_id')['product_title'].last().reset_index()
predictions_titles_local = product_index_local.merge(titles)
# product_category
product_category = reduced_df.groupby('product_id')['product_category'].last().reset_index()
predictions_catalogs_local = predictions_titles_local.merge(product_category)
# product_parent
product_parent = reduced_df.groupby('product_id')['product_parent'].last().reset_index()
predictions_catalogs_local = predictions_catalogs_local.merge(product_parent)
predictions_titles_local = predictions_catalogs_local.sort_values(['prediction', 'product_id'], ascending=[False, True])
#combine all results (predictions_titles_local already carries the merged category/parent columns)
all_predictions_from_user = pd.concat([all_predictions_from_user, predictions_titles_local])
# select top 50 recommeded product
predictions_titles_local = predictions_titles_local.head(n=50)
#reset index
all_predictions_from_user = all_predictions_from_user.reset_index(drop=True)
all_predictions_from_user = all_predictions_from_user[['customer_id', 'product_id', 'product_url', 'prediction', 'product_title', 'product_category', 'product_parent']]
# #generate csv file
all_predictions_from_user.to_csv("./Apparel_Jewelry_Shoes_predictions_from_user.csv")
# #generate pickle file
all_predictions_from_user.to_pickle("./Apparel_Jewelry_Shoes_predictions_from_user.pickle")
all_predictions_from_user.head(n=10)
predictions_titles_local.columns
!ls -alrt Apparel_Jewelry_Shoes_predictions_from_user*
!aws s3 cp Apparel_Jewelry_Shoes_predictions_from_user.csv s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_predictions_from_user.csv
!aws s3 cp Apparel_Jewelry_Shoes_predictions_from_user.pickle s3://dse-cohort5-group1/1_Prediction_results/Apparel_Jewelry_Shoes/data/Apparel_Jewelry_Shoes_predictions_from_user.pickle
```
---
## Train with SageMaker
Now that we've trained on this smaller dataset, we can expand training in SageMaker's distributed, managed training environment.
### Wrap Code
To use SageMaker's pre-built MXNet container, we'll need to wrap our code from above into a Python script. There's a great deal of flexibility in using SageMaker's pre-built containers, and detailed documentation can be found [here](https://github.com/aws/sagemaker-python-sdk#mxnet-sagemaker-estimators), but for our example, it consisted of:
1. Wrapping all data preparation into a `prepare_train_data` function (we could name this whatever we like)
1. Copying and pasting classes and functions from above word-for-word
1. Defining a `train` function that:
1. Adds a bit of new code to pick up the input TSV dataset on the SageMaker Training cluster
1. Takes in a dict of hyperparameters (which we specified as globals above)
1. Creates the net and executes training
---
## !!! You have to apply all of the above code changes to recommender.py
- check recommender.py code before run
```
# !cat recommender.py
```
### Test Locally
Now we can test our train function locally. This helps ensure we don't have any bugs before submitting our code to SageMaker's pre-built MXNet container.
```
!ls -al /tmp/recsys/
%%time
import recommender
local_test_net, local_customer_index, local_product_index = recommender.train(
{'train': '/tmp/recsys/'},
{'num_embeddings': 64,
'opt': 'sgd',
'lr': 0.02,
'momentum': 0.9,
'wd': 0.,
'epochs': 2},
['local'],
1)
```
### Move Data
Holding our data in memory works fine when we're interactively exploring a sample of data, but for larger, longer running processes, we'd prefer to run them in the background with SageMaker Training. To do this, let's move the dataset to S3 so that it can be picked up by SageMaker training. This is perfect for use cases like periodic re-training, expanding to a larger dataset, or moving production workloads to larger hardware.
```
# change log level
import logging
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler()) # Writes to console
logger.setLevel(logging.DEBUG)
logging.getLogger('boto3').setLevel(logging.CRITICAL)
logging.getLogger('botocore').setLevel(logging.CRITICAL)
logging.getLogger('s3transfer').setLevel(logging.CRITICAL)
logging.getLogger('urllib3').setLevel(logging.CRITICAL)
bucket
prefix
# amazon_reviews_us_Shoes_v1_00
boto3.client('s3').copy({'Bucket': 'amazon-reviews-pds',
'Key': 'tsv/amazon_reviews_us_Shoes_v1_00.tsv.gz'},
bucket,
prefix + '/train/amazon_reviews_us_Shoes_v1_00.tsv.gz')
```
### Submit
Now, we can create an MXNet estimator from the SageMaker Python SDK. To do so, we need to pass in:
1. Instance type and count for our SageMaker Training cluster. SageMaker's MXNet containers support distributed GPU training, so we could easily set this to multiple ml.p2 or ml.p3 instances if we wanted.
- *Note, this would require some changes to our recommender.py script, as we would need to set up the context and key-value store properly, as well as determine if and how to distribute the training data.*
1. An S3 path for our model artifacts and a role with access to S3 input and output paths.
1. Hyperparameters for our neural network. Since with a 64 dimensional embedding, our recommendations reverted too closely to the mean, let's increase this by an order of magnitude when we train outside of our local instance. We'll also increase the epochs to see how our accuracy evolves over time. We'll leave all other hyperparameters the same.
Once we use `.fit()` this creates a SageMaker Training Job that spins up instances, loads the appropriate packages and data, runs our `train` function from `recommender.py`, wraps up and saves model artifacts to S3, and finishes by tearing down the cluster.
```
print( 's3://{}/{}/train/'.format(bucket, prefix))
!aws s3 ls s3://dse-cohort5-group1/sagemaker/amazon_reviews_us_Shoes_v1_00/train/
```
## Use ml.p3.8xlarge for training
```
2020-05-16 06:28:03 Completed - Training job completed
Training seconds: 203
Billable seconds: 203
```
```
# # Set optimization parameters
# opt = 'sgd'
# lr = 0.02
# momentum = 0.9
# wd = 0.
m = MXNet('recommender.py',
py_version='py3',
role=role,
train_instance_count=1,
train_instance_type="ml.p3.8xlarge",
output_path='s3://{}/{}/output'.format(bucket, prefix),
hyperparameters={'num_embeddings': 512,
'opt': opt,
'lr': lr,
'momentum': momentum,
'wd': wd,
'epochs': 50},
framework_version='1.1')
m.fit({'train': 's3://{}/{}/train/'.format(bucket, prefix)})
print(net.summary)
```
---
# 1. Hyperparameter Tune using Sagemaker Hyperparameter tune
### working log - JH
### [05/17/2020] Need to apply my metric for this model to use SageMaker hyperparameter tuning, but I don't know yet how to surface a custom metric in CloudWatch
Similar to training a single MXNet job in SageMaker, we define our MXNet estimator passing in the MXNet script, IAM role, (per job) hardware configuration, and any hyperparameters we're not tuning.
```
# estimator = MXNet('recommender.py',
# py_version='py3',
# role=role,
# train_instance_count=1,
# train_instance_type="ml.p3.8xlarge",
# output_path='s3://{}/{}/output'.format(bucket, prefix),
# base_job_name='Amazon-recomender-hpo-mxnet',
# hyperparameters={'num_embeddings': 512,
# 'opt': opt,
# 'lr': lr,
# 'momentum': momentum,
# 'wd': wd,
# 'epochs': 50},
# framework_version='1.4.1')
```
Once we've defined our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.
- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`
- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`
- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`
*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning `thresh` as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with possible values of 0.01, 0.1, 0.15, or 0.2.*
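For intuition, here is a minimal pure-Python sketch of how sampling from these three parameter types might look (the function names are hypothetical stand-ins, not the SageMaker classes):

```python
import random

random.seed(0)

# Hypothetical stand-ins for the three parameter types described above;
# these are NOT the SageMaker classes, just illustrations of their sampling
def continuous(lo, hi):
    return random.uniform(lo, hi)

def integer(lo, hi):
    return random.randint(lo, hi)

def categorical(values):
    return random.choice(values)

sample = {'optimizer': categorical(['sgd', 'Adam']),
          'learning_rate': continuous(0.01, 0.2),
          'num_epoch': integer(10, 50)}
print(sample)
```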
```
# hyperparameter_ranges = {'optimizer': CategoricalParameter(['sgd', 'Adam']),
# 'learning_rate': ContinuousParameter(0.01, 0.2),
# 'momentum': ContinuousParameter(0., 0.99),
# 'wd': ContinuousParameter(0., 0.001),
# 'num_epoch': IntegerParameter(10, 50)}
```
Next we'll specify the objective metric that we'd like to tune and its definition. This includes the regular expression (Regex) needed to extract that metric from the CloudWatch logs of our training job.
```
# objective_metric_name = 'MSE-ON-TEST'
# metric_definitions = [{'Name': 'MSE-ON-TEST',
# 'Regex': 'MSE-ON-TEST=([0-9\\.]+)'}]
# # # THE SCORING METRIC TO MAXIMIZE
# # objective_metric_name = 'Validation-accuracy'
# # metric_definitions = [{'Name': 'Validation-accuracy',
# # 'Regex': 'validation: accuracy=([0-9\\.]+)'}]
# # objective_metric_name = 'loss'
# # metric_definitions = [{'Name': 'loss',
# # 'Regex': 'Loss = (.*?);'}]
```
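To sanity-check a metric regex before handing it to SageMaker, you can run it against a sample log line locally (the log line here is made up; the real training script must actually print in this format for CloudWatch extraction to work):

```python
import re

# A made-up log line in the format the regex expects
log_line = "EPOCH 4: MSE-ON-TEST=1.2734"

# The same regex we would pass in metric_definitions
match = re.search(r'MSE-ON-TEST=([0-9\.]+)', log_line)
value = float(match.group(1))
print(value)
```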
Now we can create a `HyperparameterTuner` object and fit it by pointing to our data in S3. This kicks our tuning job off in the background.
Notice, we specify a much smaller number of total jobs, and a smaller number of parallel jobs. Since our model uses previous training job runs to predict where to test next, we get better results (although it takes longer) when setting this to a smaller value.
```
# tuner = HyperparameterTuner(estimator,
# objective_metric_name,
# hyperparameter_ranges,
# metric_definitions,
# max_jobs=2,
# max_parallel_jobs=1)
```
And finally, we can start our tuning job by calling `.fit()` and passing in the S3 paths to our train and test datasets.
```
# tuner.fit({'train': 's3://{}/{}/train/'.format(bucket, prefix)})
```
Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully and is `InProgress`.
_You will be unable to successfully run the following cells until the tuning job completes. This step may take up to 2 hours._
Once the tuning job finishes, we can bring in a table of metrics.
```
# bayes_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuner._current_job_name).dataframe()
# bayes_metrics.sort_values(['FinalObjectiveValue'], ascending=False)
```
Looking at our results, we can see that, with one fourth the total training jobs, SageMaker's Automatic Model Tuning has produced a model with better accuracy (74%) than our random search. In addition, there's no guarantee that the effectiveness of random search wouldn't change over subsequent runs.
Let's compare our hyperparameters' relationships to each other and the objective metric.
```
# pd.plotting.scatter_matrix(pd.concat([bayes_metrics[['FinalObjectiveValue',
# 'learning_rate',
# 'momentum',
# 'wd']],
# bayes_metrics['TrainingStartTime'].rank()],
# axis=1),
# figsize=(12, 12))
# plt.show()
```
As we can see, our accuracy is only about 53% on our validation dataset. CIFAR-10 can be challenging, but we'd want our accuracy much better than just over half if users are depending on an accurate prediction.
---
# 2. Tune: Random
One method of hyperparameter tuning that performs surprisingly well for how simple it is, is randomly trying a variety of hyperparameter values within set ranges. So, for this example, we've created a helper script `random_tuner.py` to help us do this.
We'll need to supply:
* A function that trains our MXNet model given a job name and list of hyperparameters. Note, `wait` is set to false in our `fit()` call so that we can train multiple jobs at once.
* A dictionary of hyperparameters where the ones we want to tune are defined as one of three types (`ContinuousParameter`, `IntegerParameter`, or `CategoricalParameter`) and appropriate minimum and maximum ranges or a list of possible values are provided.
```
# inputs = {'train': 's3://{}/{}/train/'.format(bucket, prefix)}
# inputs
# def fit_random(job_name, hyperparameters):
# m = MXNet('recommender.py',
# py_version='py3',
# sagemaker_session=sagemaker.Session(),
# role=role,
# train_instance_count=1,
# train_instance_type="ml.p2.8xlarge",
# framework_version='1.4.1',
# base_job_name='Amazon-hpo-mxnet-0516',
# hyperparameters=hyperparameters
# )
# inputs = {'train': 's3://{}/{}/train/'.format(bucket, prefix)}
# print("input for hyperparameter tuning: ", inputs)
# m.fit(inputs, wait=False, job_name=job_name)
# {'num_embeddings': [64, 128]
# 'opt': ['sgd', 'adam']
# 'lr': 0.02,
# 'momentum': 0.9,
# 'wd': 0.,
# 'epochs': 10}
# # for Test
# hyperparameters = {'batch_size': 1024,
# 'epochs': 2,
# 'learning_rate': rt.ContinuousParameter(0.001, 0.5),
# 'momentum': rt.ContinuousParameter(0., 0.99),
# 'wd': rt.ContinuousParameter(0., 0.001)}
# hyperparameters = {'batch_size': rt.CategoricalParameter([1024]),
# 'num_embeddings': rt.CategoricalParameter([64, 128]),
# 'opt': rt.CategoricalParameter(['sgd', 'Adam']),
# 'epochs': 50,
# 'learning_rate': rt.ContinuousParameter(0.001, 0.5),
# 'momentum': rt.ContinuousParameter(0., 0.99),
# 'wd': rt.ContinuousParameter(0., 0.001)}
```
Next, we can kick off our random search. We've defined the total number of training jobs to be 120. This is a large amount and drives most of the cost of this notebook. Also, we've specified up to 8 jobs to be run in parallel. This exceeds the default concurrent instance limit for ml.p3.8xlarge instances. If you're just testing this notebook out, decreasing both values will control costs and allow you to complete successfully without requiring a service limit increase.
_Note, this step may take up to 2 hours to complete. Even if you lose connection with the notebook in the middle, as long as the notebook instance continues to run, `jobs` should still be successfully created for future use._
```
# %%time
# '''
# Runs random search for hyperparameters.
# Takes in:
# train_fn: A function that kicks off a training job based on two positional arguments-
# job name and hyperparameter dictionary. Note, wait must be set to False if using .fit()
# hyperparameters: A dictionary of hyperparameters defined with hyperparameter classes.
# base_name: Base name for training jobs. Defaults to 'random-hp-<timestamp>'.
# max_jobs: Total number of training jobs to run.
# max_parallel_jobs: Most training jobs to run concurrently. This does not affect the quality
# of search, just helps stay under account service limits.
# Returns a dictionary of max_jobs job names with associated hyperparameter values.
# '''
# jobs = rt.random_search(fit_random,
# hyperparameters,
# max_jobs=120,
# max_parallel_jobs=1)
```
Once our random search completes, we'll want to compare our training jobs (which may take a few extra minutes to finish) in order to understand how our objective metric (% accuracy on our validation dataset) varies by hyperparameter values. In this case, our helper script includes two functions.
* `get_metrics()` scrapes the CloudWatch logs for our training jobs and uses a regex to return any reported values of our objective metric.
* `table_metrics()` joins on the hyperparameter values for each job, grabs the ending objective value, and converts the result to a Pandas DataFrame.
```
# random_metrics = rt.table_metrics(jobs, rt.get_metrics(jobs, 'validation: accuracy=([0-9\\.]+)'))
# random_metrics.sort_values(['objective'], ascending=False)
```
As we can see, there's a huge variation in percent accuracy. Had we initially (unknowingly) set our learning rate near 0.5, momentum at 0.15, and weight decay to 0.0004, we would have an accuracy just over 20% (this is particularly bad considering random guessing would produce 10% accuracy).
But, we also found many successful hyperparameter value combinations, and reached a peak validation accuracy of 73.7%. Note, this peak job occurs relatively early in our search but, due to randomness, our next best objective value occurred 89 jobs later. The actual peak could have occurred anywhere within the 120 jobs and will change across multiple runs. We can see that with hyperparameter tuning our accuracy is well above the default value baseline of 53%.
To get a rough understanding of how the hyperparameter values relate to one another and the objective metric, let's quickly plot them.
```
# pd.plotting.scatter_matrix(random_metrics[['objective',
# 'learning_rate',
# 'momentum',
# 'wd',
# 'job_number']],
# figsize=(12, 12))
# plt.show()
m
```
---
# Deploying to Sagemaker Endpoint - Host
### Now that we've trained our model, deploying it to a real-time, production endpoint is easy.
```
predictor = m.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
predictor.serializer = None
```
Now that we have an endpoint, let's test it out. We'll predict user #6's ratings for the top and bottom ASINs from our local model.
*This could be done by sending HTTP POST requests from a separate web service, but to keep things easy, we'll just use the `.predict()` method from the SageMaker Python SDK.*
```
predictor.predict(json.dumps({'customer_id': customer_index[customer_index['user'] == 6]['customer_id'].values.tolist(),
'product_id': ['B00HSJRT7I', 'B001FA1O1S']}))
```
*Note, some of our predictions are actually greater than 5, which is to be expected as we didn't do anything special to account for ratings being capped at that value. Since we are only looking to ranking by predicted rating, this won't create problems for our specific use case.*
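If we ever did need capped values (e.g. for display rather than ranking), a one-liner with `np.clip` would do; the scores below are made up:

```python
import numpy as np

# Hypothetical raw model outputs; two fall outside the 1-5 star range
raw = np.array([5.7, 4.2, 0.6, 3.3])

# Clip to the valid rating range for display; for ranking we keep raw scores,
# since clipping collapses everything above 5 (or below 1) into ties
clipped = np.clip(raw, 1.0, 5.0)
print(clipped)
```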
```
json.dumps({'customer_id': customer_index[customer_index['user'] == 6]['customer_id'].values.tolist(), 'product_id': ['B00HSJRT7I', 'B001FA1O1S']})
reduced_df.sample(n=2)
```
## - Checking the Endpoint Status
```
# !aws sagemaker describe-endpoint --endpoint-name sagemaker-mxnet-2020-05-18-03-55-49-266
type(predictor)
```
### Evaluate
Let's start by calculating a naive baseline to approximate how well our model is doing. The simplest estimate would be to assume every user item rating is just the average rating over all ratings.
*Note, we could do better by using each individual product's average, however, in this case it doesn't really matter as the same conclusions would hold.*
```
print('Naive MSE:', np.mean((test_df['star_rating'] - np.mean(train_df['star_rating'])) ** 2))
```
Now, we'll calculate predictions for our test dataset.
*Note, this will align closely to our CloudWatch output above, but may differ slightly due to skipping partial mini-batches in our eval_net function.*
```
test_preds = []
for array in np.array_split(test_df[['customer_id', 'product_id']].values, 40):
test_preds += predictor.predict(json.dumps({'customer_id': array[:, 0].tolist(),
'product_id': array[:, 1].tolist()}))
test_preds = np.array(test_preds)
print('MSE:', np.mean((test_df['star_rating'] - test_preds) ** 2))
```
We can see that our neural network and embedding model produces substantially better results (~1.27 vs. 1.65 mean squared error).
For recommender systems, subjective accuracy also matters. Let's get some recommendations for a random user to see if they make intuitive sense.
```
reduced_df[reduced_df['user'] == 6].sort_values(['star_rating', 'item'], ascending=[False, True])
reduced_df.to_csv("./amazon_reviews_us_Shoes_v1_00.csv")
```
As we can see, user #6 seems to like sprawling dramatic television series and sci-fi, but they dislike silly comedies.
Now we'll loop through and predict user #6's ratings for every common product in the catalog, to see which ones we'd recommend and which ones we wouldn't.
```
predictions = []
for array in np.array_split(product_index['product_id'].values, 40):
predictions += predictor.predict(json.dumps({'customer_id': customer_index[customer_index['user'] == 6]['customer_id'].values.tolist() * array.shape[0],
'product_id': array.tolist()}))
predictions = pd.DataFrame({'product_id': product_index['product_id'],
'prediction': predictions})
predictions_user6 = []
for array in np.array_split(product_index['product_id'].values, 40):
predictions_user6 += predictor.predict(json.dumps({'customer_id': customer_index[customer_index['user'] == 6]['customer_id'].values.tolist() * array.shape[0],
'product_id': array.tolist()}))
plt.scatter(predictions['prediction'], np.array(predictions_user6))
plt.show()
titles = reduced_df.groupby('product_id')['product_title'].last().reset_index()
predictions_titles = predictions.merge(titles)
predictions_titles = predictions_titles.sort_values(['prediction', 'product_id'], ascending=[False, True])
# pick up the top 100 recommended products only
predictions_titles = predictions_titles.head(n=100)
predictions_titles.head(n=10)
```
Indeed, our predicted highly rated shows have some well-reviewed TV dramas and some sci-fi. Meanwhile, our bottom rated shows include goofball comedies.
*Note, because of random initialization in the weights, results on subsequent runs may differ slightly.*
Let's confirm that we no longer have almost perfect correlation in recommendations with user #7.
```
predictions_titles.to_csv("./user_6_amazon_reviews_us_Shoes_v1_00.csv")
```
# Predict for all users
```
# find user_id from user index
user_index = 3
customer_index[customer_index['user'] == user_index]['customer_id'].values.tolist()
customer_index_list = customer_index['user'].tolist()
print("Total number of customers: ", len(customer_index_list))
# test
# customer_index_list=[1,2,3]
all_predictions_from_user = pd.DataFrame(columns=['customer_id', 'product_id', 'prediction', 'product_title'])
for user_index in customer_index_list:
print("test_customer_index:", user_index)
predictions = []
for array in np.array_split(product_index['product_id'].values, 40):
predictions += predictor.predict(json.dumps({'customer_id': customer_index[customer_index['user'] == user_index]['customer_id'].values.tolist() * array.shape[0],
'product_id': array.tolist()}))
customer_id = customer_index[customer_index['user'] == user_index]['customer_id'].values.tolist()
print("customer_id: ",customer_id[0])
predictions = pd.DataFrame({
'product_id': product_index['product_id'],
'prediction': predictions})
predictions['customer_id'] = customer_id[0]
predictions = predictions[['customer_id', 'product_id', 'prediction']]
# print(predictions.head(n=2))
titles = reduced_df.groupby('product_id')['product_title'].last().reset_index()
predictions_titles = predictions.merge(titles)
predictions_titles = predictions_titles.sort_values(['prediction', 'product_id'], ascending=[False, True])
    # pick up the top 100 recommended products only
predictions_titles = predictions_titles.head(n=100)
print(predictions_titles.head(n=1))
#combine all results
all_predictions_from_user = pd.concat([all_predictions_from_user, predictions_titles])
#reset index
all_predictions_from_user = all_predictions_from_user.reset_index(drop=True)
#generate csv file
all_predictions_from_user.to_csv("Shoes_all_predictions_from_user.csv")
#generate pickle file
all_predictions_from_user.to_pickle("Shoes_all_predictions_from_user.pickle")
all_predictions_from_user.head(n=10)
```
---
## Wrap-up
In this example, we developed a deep learning model to predict customer ratings. This could serve as the foundation of a recommender system in a variety of use cases. However, there are many ways in which it could be improved. For example, we did very little with:
- hyperparameter tuning
- controlling for overfitting (early stopping, dropout, etc.)
- testing whether binarizing our target variable would improve results
- including other information sources (historical ratings, time of review)
- adjusting our threshold for user and item inclusion
In addition to improving the model, we could improve the engineering by:
- Setting the context and key value store up for distributed training
- Fine tuning our data ingestion (e.g. num_workers on our data iterators) to ensure we're fully utilizing our GPU
- Thinking about how pre-processing would need to change as datasets scale beyond a single machine
Beyond that, recommenders are a very active area of research and techniques from active learning, reinforcement learning, segmentation, ensembling, and more should be investigated to deliver well-rounded recommendations.
### Clean-up (optional)
Let's finish by deleting our endpoint to avoid stray hosting charges.
```
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
## Reference APIs
- MXNet Estimator
- https://sagemaker.readthedocs.io/en/stable/sagemaker.mxnet.html#mxnet-estimator
# Python Keywords and Identifier
You will learn about keywords (reserved words in Python) and identifiers (names given to variables, functions, etc.).
Table of Contents
1. Python Keywords
2. Python Identifiers
3. Rules for writing identifiers
4. Things to care about
# Python Keywords
Keywords are the reserved words in Python.
We cannot use a keyword as a variable name, function name, or any other identifier. Keywords are used to define the syntax and structure of the Python language.
In Python, keywords are case sensitive.
There are 33 keywords in Python 3.3. This number can vary slightly over time.
All the keywords except True, False and None are in lowercase, and they must be written exactly as shown. The list of all the keywords is given below.
```
from IPython.display import Image
Image(filename='img/keywords.jpg')
```
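Rather than memorising the table above, the keyword list for the running interpreter can be retrieved with the standard-library `keyword` module:

```python
import keyword

# kwlist holds every reserved word for the running Python version.
print(len(keyword.kwlist))   # 33 in Python 3.3; 35 in Python 3.10, for example
print(keyword.kwlist[:5])

# iskeyword() checks a candidate identifier; keywords are case sensitive.
print(keyword.iskeyword('global'))  # True
print(keyword.iskeyword('Global'))  # False
```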
Looking at all the keywords at once and trying to figure out what they mean might be overwhelming.
If you want to have an overview, here is the complete list of all the keywords with examples.
# Python Identifiers
An identifier is the name given to entities like classes, functions, and variables in Python. It helps differentiate one entity from another.
# Rules for writing identifiers
1. Identifiers can be a combination of lowercase letters (a to z), uppercase letters (A to Z), digits (0 to 9), and underscores (_). Names like myClass, var_1 and print_this_to_screen are all valid examples.
2. An identifier cannot start with a digit. 1variable is invalid, but variable1 is perfectly fine.
3. Keywords cannot be used as identifiers.
```
global = 1   # SyntaxError: 'global' is a keyword, so it cannot be used as an identifier
a@ = 0       # SyntaxError: identifiers cannot contain special symbols like @
```
4. We cannot use special symbols like !, @, #, $, % etc. in our identifier.
5. Identifier can be of any length.
# Things to care about
Python is a case-sensitive language. This means Variable and variable are not the same. Always choose identifier names that make sense.
While c = 10 is valid, writing count = 10 makes more sense, and it is easier to figure out what the code does even when you look at it after a long gap.
Multiple words can be separated using an underscore, e.g. this_is_a_long_variable.
We can also use the camel-case style of writing, i.e., capitalize the first letter of every word except the first, without any spaces. For example: camelCaseExample.
<img src="images/strathsdr_banner.png" align="left">
# RFSoC QPSK Transceiver with Voila
----
<div class="alert alert-box alert-info">
Please use Jupyter Labs http://board_ip_address/lab for this notebook.
</div>
The RFSoC QPSK demonstrator was developed by the [University of Strathclyde](https://github.com/strath-sdr/rfsoc_sam). This notebook is specifically for Voila dashboards. If you would like to see an overview of the QPSK demonstrator, see this [notebook](rfsoc_qpsk_demonstrator.ipynb) instead.
## Table of Contents
* [Introduction](#introduction)
* [Running this Demonstration](#running-this-demonstration)
* [The Voila Procedure](#the-voila-procedure)
* [Import Libraries](#import-libraries)
* [Initialise Overlay](#initialise-overlay)
* [Dashboard Display](#dashboard-display)
* [Conclusion](#conclusion)
## References
* [Xilinx, Inc, "USP RF Data Converter: LogiCORE IP Product Guide", PG269, v2.3, June 2020](https://www.xilinx.com/support/documentation/ip_documentation/usp_rf_data_converter/v2_3/pg269-rf-data-converter.pdf)
## Revision History
* **v1.0** | 16/02/2021 | Voila RFSoC QPSK demonstrator
----
## Introduction <a class="anchor" id="introduction"></a>
The ZCU111 platform and XM500 development board can be configured as a simple QPSK transceiver. The RFSoC QPSK demonstrator uses the RFSoC's RF Data Converters (RF DCs) to transmit and receive QPSK modulated waveforms. This notebook is specifically for running the QPSK demonstrator using Voila dashboards. Follow the instructions outlined in [Running this Demonstration](#running-this-demonstration) to learn more.
### Hardware Setup <a class="anchor" id="hardware-setup"></a>
Your RFSoC2x2 development board can be configured to host one QPSK transceiver channel. To setup your board for this demonstration, you can connect a channel in loopback as shown in [Figure 1](#fig-1).
The default loopback configuration is connected as follows:
* Channel 0: DAC2 to ADC2
Use the image below for further guidance.
<a class="anchor" id="fig-1"></a>
<figure>
<img src='images/rfsoc2x2_setup.jpg' height='50%' width='50%'/>
<figcaption><b>Figure 1: RFSoC2x2 development board setup in loopback mode.</b></figcaption>
</figure>
**Do not** attach an antenna to any SMA interfaces labelled DAC.
<div class="alert alert-box alert-danger">
<b>Caution:</b>
In this demonstration, we generate tones using the RFSoC development board. Your device should be set up in loopback mode. You should understand that the RFSoC platform can also transmit RF signals wirelessly. Remember that unlicensed wireless transmission of RF signals may be illegal in your geographical location. Radio signals may also interfere with nearby devices, such as pacemakers and emergency radio equipment. Note that it is also illegal to intercept and decode particular RF signals. If you are unsure, please seek professional support.
</div>
----
## Running this Demonstration <a class="anchor" id="running-this-demonstration"></a>
Voila can be used to execute the QPSK demonstrator and ignore all of the markdown and code cells typically found in a normal Jupyter notebook. The Voila dashboard can be launched following the instructions below:
* Open the Jupyter Quick Launch window as below:
<figure>
<img src='images/open_jupyter_launcher.jpg' height='25%' width='25%'/>
</figure>
* Open a terminal window.
<figure>
<img src='images/open_terminal_window.jpg' height='30%' width='30%'/>
</figure>
* Start a Voila session by running the command below in the terminal (just copy and paste it into the terminal):
```bash
voila /home/xilinx/jupyter_notebooks/qpsk-demonstrator/voila_rfsoc_qpsk_demonstrator.ipynb --ExecutePreprocessor.timeout=180 --theme=dark --port=8866 --TagRemovePreprocessor.remove_cell_tags='{"ignore_me"}'
```
* You can now open a new browser tab and enter the following into the address bar: http://board_ip_address:8866
After you open the new tab at the address above, the kernel will start and the notebook will run. Only the QPSK demonstrator will be displayed. The initialisation process takes around 2 minutes.
## The Voila Procedure <a class="anchor" id="the-voila-procedure"></a>
Below are the code cells that will be run when Voila is called. The procedure is fairly straightforward: load the rfsoc-qpsk library, initialise the overlay, and display the QPSK demonstrator. All you have to ensure is that the above command is executed in the terminal and that you have launched a browser tab using the given address. You do not need to run these code cells individually to create the Voila dashboard.
### Import Libraries
```
from rfsoc_qpsk.qpsk_overlay import QpskOverlay
```
### Initialise Overlay
```
qpsk = QpskOverlay(dark_theme=True)
```
### Dashboard Display
```
qpsk.qpsk_demonstrator_application()
```
## Conclusion
This notebook has presented a QPSK demonstrator for the ZCU111 development board. The demonstration used Voila to enable rapid dashboarding for visualisation and control.
# 3. Train-Predict-LogLoss
**Tensorboard**
- At the command line: `tensorboard --logdir=./log`
- In the browser: http://127.0.0.1:6006
```
import time
import os
import pandas as pd
project_name = 'Dog_Breed_Identification'
step_name = 'Train'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
cwd = os.getcwd()
model_path = os.path.join(cwd, 'model')
print('model_path: ' + model_path)
import h5py
import numpy as np
from sklearn.utils import shuffle
np.random.seed(2017)
x_train = []
y_train = {}
x_val = []
y_val = {}
x_test = []
cwd = os.getcwd()
feature_cgg16 = os.path.join(cwd, 'model', 'feature_VGG16_{}.h5'.format(171023))
feature_cgg19 = os.path.join(cwd, 'model', 'feature_VGG19_{}.h5'.format(171023))
feature_resnet50 = os.path.join(cwd, 'model', 'feature_ResNet50_{}.h5'.format(171023))
feature_xception = os.path.join(cwd, 'model', 'feature_Xception_{}.h5'.format(171023))
feature_inception = os.path.join(cwd, 'model', 'feature_InceptionV3_{}.h5'.format(171023))
for filename in [feature_cgg16, feature_cgg19, feature_resnet50, feature_xception, feature_inception]:
with h5py.File(filename, 'r') as h:
x_train.append(np.array(h['train']))
y_train = np.array(h['train_label'])
x_val.append(np.array(h['val']))
y_val = np.array(h['val_label'])
x_test.append(np.array(h['test']))
# print(x_train[0].shape)
x_train = np.concatenate(x_train, axis=-1)
# y_train = np.concatenate(y_train, axis=0)
x_val = np.concatenate(x_val, axis=-1)
# y_val = np.concatenate(y_val, axis=0)
x_test = np.concatenate(x_test, axis=-1)
print(x_train.shape)
print(x_train.shape[1:])
print(len(y_train))
print(x_val.shape)
print(len(y_val))
print(x_test.shape)
from sklearn.utils import shuffle
(x_train, y_train) = shuffle(x_train, y_train)
# from keras.utils.np_utils import to_categorical
# y_train = to_categorical(y_train)
# y_val = to_categorical(y_val)
# print(y_train.shape)
# print(y_val.shape)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(multi_class='multinomial', solver='lbfgs', random_state=2017)
logreg.fit(x_train, y_train)
val_proba = logreg.predict_proba(x_val)
val_preds = logreg.predict(x_val)
print(val_proba.shape)
print(val_preds.shape)
print(val_proba[:,1].shape)
print(y_val.shape)
from keras.utils.np_utils import to_categorical
print(val_proba[0])
print(y_val[0])
log_loss_y_val = to_categorical(y_val)
print(log_loss_y_val[0])
from sklearn.metrics import log_loss, accuracy_score
print('Val log_loss: {}'.format(log_loss(log_loss_y_val, val_proba)))
val_proba_limit = np.clip(val_proba, 0.005, 0.995)
print('Val log_loss: {}'.format(log_loss(log_loss_y_val, val_proba_limit)))
print('Val accuracy_score: {}'.format(accuracy_score(y_val, val_preds)))
```
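The clipping step above bounds the per-sample penalty, since log loss diverges as a confidently wrong prediction approaches probability 0 or 1. A small pure-Python illustration (the `binary_log_loss` and `clip` helpers are defined here just for this demo; the 0.005/0.995 limits mirror the ones used above):

```python
import math

def binary_log_loss(y_true, p):
    """Per-sample binary cross-entropy for a single prediction."""
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def clip(p, lo=0.005, hi=0.995):
    """Keep a probability away from the 0/1 extremes."""
    return max(lo, min(hi, p))

# A confidently wrong prediction is penalised enormously...
print(binary_log_loss(1, 1e-9))          # ~20.7
# ...but clipping caps the penalty at a known worst case.
print(binary_log_loss(1, clip(1e-9)))    # ~5.3
```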
## Predict
```
# Used to load model directly and skip train
# import os
# from keras.models import load_model
# cwd = os.getcwd()
# model = load_model(os.path.join(cwd, 'model', 'Dog_Breed_Identification_Train_20171024_155154.h5'))
y_pred = logreg.predict_proba(x_test)
print(y_pred.shape)
print(y_pred[:10])
y_pred = np.clip(y_pred, 0.005, 0.995)
print(y_pred[:10])
files = os.listdir(os.path.join(cwd, 'input', 'data_test', 'test'))
print(files[:10])
cwd = os.getcwd()
df = pd.read_csv(os.path.join(cwd, 'input', 'labels.csv'))
print('labels amount: %d' % len(df))
df.head()
n = len(df)
breed = set(df['breed'])
n_class = len(breed)
class_to_num = dict(zip(breed, range(n_class)))
num_to_class = dict(zip(range(n_class), breed))
print(breed)
df2 = pd.read_csv('.\\input\\sample_submission.csv')
n_test = len(df2)
print(df2.shape)
for i in range(0, 120):
df2.iloc[:,[i+1]] = y_pred[:,i]
df2.to_csv('.\\output\\pred.csv', index=None)
print('Done !')
```
# Inheritance
Inheritance models what is called an **is a** relationship. This means that when you have a Derived class that inherits from a Base class, you create a relationship where Derived is a specialized version of Base.
+ Single Inheritance
+ Multiple Inheritance
+ Multilevel Inheritance
+ Hierarchical Inheritance
+ Hybrid Inheritance
### Single Inheritance
Single class inherits from a parent class.
```
class Person:
def __init__(self,name):
self.name = name
def sayName(self):
print(self.name)
def sayProfession(self):
print(self.profession)
```
#### Super Function
Used to call a method from parent class
```
class Engineer(Person):
def __init__(self,name):
super().__init__(name)
self.profession ='Engineer'
class Doctor(Person):
def __init__(self,name):
super().__init__(name)
self.profession ='Doctor'
engineer = Engineer('Ansu')
engineer.sayName()
engineer.sayProfession()
doctor = Doctor('Jane')
doctor.sayName()
doctor.sayProfession()
```
#### dir
dir() returns a list of all the members in the specified object.
If you list all the members of the newly created object and compare them against the members of the object class, you can see that the two lists are nearly identical. There are some additional members in Engineer, such as `__dict__` and `__weakref__`, but every single member of the object class is also present in the Engineer class.
This is because every class you create in Python implicitly derives from object. You could be more explicit and write class Person(object):, but it’s redundant and unnecessary.
```
dir(engineer)
o = object()
dir(o)
```
Python has two built-in functions that work with inheritance:
+ Use **isinstance()** to check an instance’s type: isinstance(obj, int) will be True only if obj.__class__ is int or some class derived from int.
+ Use **issubclass()** to check class inheritance: issubclass(bool, int) is True since bool is a subclass of int. However, issubclass(float, int) is False since float is not a subclass of int.
```
print(isinstance(doctor,Person))
print(isinstance(Doctor,Person))
print(issubclass(Engineer,Person))
print(issubclass(doctor,Person))
```
### Multiple Inheritance
Single class inherits from multiple parent classes.
```
class A:
def printA(self):
print("From A")
class B:
def printB(self):
print("From B")
class C(A, B):
def printC(self):
print("From C")
obj = C()
obj.printC()
obj.printB()
obj.printA()
```
### Multilevel Inheritance
One class inherits from a parent classes, which will inherit from another class.
```
class Base(object):
# Constructor
def __init__(self, name):
self.name = name
# To get name
def getName(self):
return self.name
# Inherited or Sub class (Note Person in bracket)
class Child(Base):
# Constructor
def __init__(self, name, age):
Base.__init__(self, name)
self.age = age
# To get name
def getAge(self):
return self.age
# Inherited or Sub class (Note Person in bracket)
class GrandChild(Child):
# Constructor
def __init__(self, name, age, address):
Child.__init__(self, name, age)
self.address = address
# To get address
def getAddress(self):
return self.address
# Driver code
g = GrandChild("Ansu", 38, "Singapore")
print(g.getName(), g.getAge(), g.getAddress())
```
#### Method Resolution Order (MRO)
Method Resolution Order (MRO) is the order in which Python looks for a method in a hierarchy of classes. It plays an especially vital role in the context of multiple inheritance, as a single method may be found in multiple superclasses.
```
print(GrandChild.mro())
```
### Hierarchical Inheritance
More than one derived class is created from a single base class.
```
class A:
pass
class B(A):
pass
class C(A):
pass
print(issubclass(B,A))
print(issubclass(C,A))
print(issubclass(C,B))
```
### Hybrid Inheritance
This form combines more than one form of inheritance. Basically, it is a blend of more than one type of inheritance.
```
class A:
def process(self):
print('A process()')
class B:
def process(self):
print('B process()')
class C(A, B):
def process(self):
print('C process()')
class D(C):
def process(self):
print('D process()')
class E(D,B):
pass
print(E.mro())
obj = E()
obj.process()
```
## Overloading
**Overloading, in the context of programming, refers to the ability of a function or an operator to behave in different ways depending on the parameters that are passed to the function, or the operands that the operator acts on.**
Depending on how the function has been defined, we can call it with zero, one, two, or even many parameters. This is referred to as "function overloading".
Function overloading is further divided into two types: **overloading built-in functions** and **overloading custom functions**.
To overload a user-defined function in Python, we need to write the function logic in such a way that depending upon the parameters passed, a different piece of code executes inside the function.
### Overloading user-defined function
```
class Student:
def hello(self,name = None):
if name is not None:
print('Hey ' + name)
else:
print('Hey')
std = Student()
std.hello()
std.hello('Nicholas')
class Base:
def add(self, *args):
result = 0
for x in args:
result += x
return result
base = Base()
print(base.add(1,2))
print(base.add(1, 2, 3, 4, 5))
```
### Overloading built-in function
It is possible for us to change the default behavior of Python's built-in functions. We only have to define the corresponding special method in our class.
To change how Python's built-in len() function behaves, we define a special method named `__len__()` in our class. Anytime we pass an object of our class to len(), the result is obtained by calling our custom-defined `__len__()` method.
```
class Purchase:
def __init__(self, basket, buyer):
self.basket = list(basket)
self.buyer = buyer
print(len(basket)) # Python's len() function
def __len__(self):
        return 10
purchase = Purchase(['pen', 'book', 'pencil'], 'Python')
print(len(purchase)) # Overloaded len() function
class Point:
def __init__(self, x = 0, y = 0):
self.x = x
self.y = y
def __sub__(self, other):
        x = self.x - other.x
        y = self.y - other.y
return Point(x,y)
p1 = Point(3, 4)
p2 = Point(1, 2)
result = p1-p2 # Overloaded - function
print(result.x, result.y)
```
## Overriding
Method overriding is an example of run-time polymorphism. It is used to change the behavior of existing methods, and it requires at least two classes. In method overriding, inheritance is always required, as it happens between a parent-class (superclass) method and a child-class (subclass) method.
```
class Base:
def add(self, a, b):
return a + b
class Derived(Base):
def add(self, a, b):
return a + b + 5
base = Base()
derived = Derived()
print(base.add(1,2))
print(derived.add(1,2))
```
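An overriding method often wants to reuse the parent's logic instead of duplicating it, which is what `super()` is for. A small variation on the example above:

```python
class Base:
    def add(self, a, b):
        return a + b

class Derived(Base):
    def add(self, a, b):
        # Delegate to the parent implementation, then adjust the result.
        return super().add(a, b) + 5

print(Base().add(1, 2))     # 3
print(Derived().add(1, 2))  # 8
```

If Base.add later changes, Derived picks up the change automatically instead of drifting out of sync.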
# t-SNE Grid for California Data
## Qualitative Evaluation of SimCLR Pretraining
## Irrigation Capstone - Fall 2020
### TP Goter
This notebook is used to first determine the latent space vectors for a set of California images. These latent space vectors are then transformed to two-dimensional space using the t-SNE method. We then plot the images in these two dimensions, which shows how different images are arranged in latent space. If the pretraining worked, similar geographic images will cluster in certain regions.
```
import pandas as pd
%matplotlib inline
from matplotlib import pyplot as plt
import os
import tensorflow as tf
import numpy as np
from pprint import pprint
from tqdm import tqdm
import sklearn
from sklearn.manifold import TSNE
import sys
from PIL import Image
from bokeh.plotting import figure, show
from bokeh.io import output_notebook, output_file
import bokeh
import skimage
sys.path.append('/Users/tom/Desktop/MIDS_TPG/W210/capstone_fall20_irrigation/')
import utils
print(f'Pandas version: {pd.__version__}')
print(f'Numpy version: {np.__version__}')
print(f'sci-kit learn version: {sklearn.__version__}')
print(f'Tensorflow version: {tf.__version__}')
output_notebook()
model_path = '../BigEarthData/models'
# List the final Big Earth Net pretrained models
pprint([file for file in os.listdir(model_path) if 'simclr_100' in file])
# List the final CA pretrained models
pprint([file for file in os.listdir(model_path) if 'ca_simclr_' in file])
BAND_STATS = {
'mean': {
'B01': 340.76769064,
'B02': 429.9430203,
'B03': 614.21682446,
'B04': 590.23569706,
'B05': 950.68368468,
'B06': 1792.46290469,
'B07': 2075.46795189,
'B08': 2218.94553375,
'B8A': 2266.46036911,
'B09': 2246.0605464,
'B11': 1594.42694882,
'B12': 1009.32729131
},
'std': {
'B01': 554.81258967,
'B02': 572.41639287,
'B03': 582.87945694,
'B04': 675.88746967,
'B05': 729.89827633,
'B06': 1096.01480586,
'B07': 1273.45393088,
'B08': 1365.45589904,
'B8A': 1356.13789355,
'B09': 1302.3292881,
'B11': 1079.19066363,
'B12': 818.86747235
}
}
BAND_STATS_CA = {'mean': {'B02': 725.193505986188,
'B03': 1028.5459669514032,
'B04': 1258.9655400619445,
'B05': 1597.8028399130633,
'B06': 2170.0459291641573,
'B07': 2434.1251301748134,
'B08': 2613.2817721668257,
'B8A': 2672.539516996118,
'B11': 2833.482510348869,
'B12': 2104.7903924463503},
'std': {'B02': 416.6137845190807,
'B03': 499.6087245377614,
'B04': 693.5558604814064,
'B05': 640.6865473157832,
'B06': 676.3993986790316,
'B07': 795.1209667456519,
'B08': 839.6670833859841,
'B8A': 821.8303575104553,
'B11': 975.7944412326585,
'B12': 928.1875779697522}}
def generate_tsne_grid(model_path, saved_model, files, num_images, batch_size, ca_flag, bokeh_flag, label, output):
SCALE_FACTOR = 3000
def get_training_dataset(files, batch_size, ca_flag):
return utils.get_batched_dataset(files, batch_size, ca=ca_flag)
# Get the data
data = get_training_dataset(files, batch_size, ca_flag)
loaded_model = tf.keras.models.load_model(os.path.join(model_path, saved_model))
loaded_model.summary()
def denorm_img(img):
if ca_flag:
band_stats = BAND_STATS_CA
else:
band_stats = BAND_STATS
return np.stack([(img[:,:,0]* band_stats['std']['B04']+ band_stats['mean']['B04'])/ SCALE_FACTOR,
(img[:,:,1]* band_stats['std']['B03']+ band_stats['mean']['B03'])/ SCALE_FACTOR,
(img[:,:,2]* band_stats['std']['B02']+ band_stats['mean']['B02'])/ SCALE_FACTOR], axis=2)
def rgb_to_rgba32(img):
"""
Convert an RGB image to a 32 bit-encoded RGBA image.
"""
img = denorm_img(img)
# Ensure it has three channels
if len(img.shape) != 3 or img.shape[2] !=3:
raise RuntimeError('Input image is not RGB.')
# Get image shape
n, m, _ = img.shape
# Convert to 8-bit, which is expected for viewing
im_8 = np.uint8(img*255)
# Add the alpha channel, which is expected by Bokeh
im_rgba = np.dstack((im_8, 255*np.ones_like(im_8[:,:,0])))
# Reshape into 32 bit. Must flip up/down for proper orientation
return np.flipud(im_rgba.view(dtype=np.int32).reshape(n, m))
# Loop over the batches and grab the latent vectors and image vectors
count = 0
preds = []
images = []
for image_batch, label_batch in data:
count += batch_size
preds.append(loaded_model.predict(image_batch))
images.append(image_batch)
if count >= num_images:
break
X = np.concatenate(preds)
print(f'Activation Vector Shape: {X.shape}')
images = np.concatenate(images)
print(f'Image Vector Shape: {images.shape}')
tsne = TSNE(n_components=2, learning_rate=150, perplexity=30, angle=0.2, verbose=2).fit_transform(X)
tx, ty = tsne[:,0], tsne[:,1]
tx = (tx-np.min(tx)) / (np.max(tx) - np.min(tx))
ty = (ty-np.min(ty)) / (np.max(ty) - np.min(ty))
if bokeh_flag:
p_width = 800
p_height = 600
p = figure(plot_height=p_height, plot_width = p_width,
x_range =[0,p_width], y_range=[0,p_height],
tools='pan,box_zoom,wheel_zoom,reset')
for img, x, y in zip(images, tx, ty):
im_disp = rgb_to_rgba32(img)
n, m = im_disp.shape
p.image_rgba(image=[im_disp],
x=int((p_width-m)*x),
y=int(p_height-n-(p_height-n)*y),
dw=m/5,dh=n/5)
output_file(f"../images/{output}.html", title=label)
bokeh.io.show(p)
else:
width = 4000
height = 3000
max_dim = 100
full_image = Image.new('RGBA', (width, height))
for img, x, y in zip(images, tx, ty):
tile = Image.fromarray(np.uint8(denorm_img(img)*255))
rs = max(1, tile.width/max_dim, tile.height/max_dim)
tile = tile.resize((int(tile.width/rs), int(tile.height/rs)), Image.ANTIALIAS)
full_image.paste(tile, (int((width-max_dim)*x), int((height-max_dim)*y)), mask=tile.convert('RGBA'))
plt.figure(figsize = (16,12))
plt.imshow(full_image)
```
## Run for BigEarthNet
```
file = 'simclr_100_t3_s50_10.h5'
train_files = '../BigEarthData/tfrecords/train-part-*'
generate_tsne_grid(model_path, file, train_files,
num_images=1024, batch_size=32,
ca_flag=False, bokeh_flag=True,
label='BigEarthNet 1024 - SimCLR 10 Epochs',
output='bigearthnet_simclr_e10')
```
## Run for California
```
file = 'ca_simclr_s50_t1_50.h5'
train_files = '../CaliforniaData/tfrecords/train_ca*'
generate_tsne_grid(model_path, file, train_files,
num_images=1024, batch_size=32,
ca_flag=True, bokeh_flag=True,
label='California - SimCLR 50 Epochs',
output='california_simclr_e50_t1')
```
<h1 id="tocheading">Introduction to linear regression</h1>
Chamkor
<div id="toc"></div>
```
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
```
# PART-I "Single predictor" regression
### Introduction
- Regression is to do with the "relationship between variables".
- For example, is humidity ($H$) related to the temperature ($T$) in this room?
- Assume we do experiments and, from the data, propose a linear hypothesis: $H\propto T$.
- Of course, other models can be considered, such as approximating by a Fourier series, a higher-degree polynomial, and so on.
- Regression also involves determining the "strength" or "significance" of a relationship.
### Steps
- Propose a regression model (assumptions).
- Interpret how "significant" the regression model is. Define a quantitative criterion.
- Check the assumptions, and go back to step one.
Thus linear regression analysis also introduces us to the framework of hypothesis testing.
### Terminology
- "Predictor" or "independent" variable or "stimulus" $(T)$.
- "Dependent" or "response" variable $(H)$.
- "Positive" and "negative" relationship.
- Regression-line/plane.
- Residual ($H=mT+c\pm\epsilon$). (In general, there can be more than one independent variable.)
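The residual form $H = mT + c \pm \epsilon$ suggests the classic least-squares fit: the slope is $m = \mathrm{cov}(T,H)/\mathrm{var}(T)$ and the intercept is $c = \bar{H} - m\bar{T}$. A minimal pure-Python sketch on synthetic data (the `fit_line` helper is introduced here for illustration):

```python
def fit_line(t, h):
    """Least-squares slope and intercept for a single predictor."""
    n = len(t)
    t_bar = sum(t) / n
    h_bar = sum(h) / n
    cov = sum((ti - t_bar) * (hi - h_bar) for ti, hi in zip(t, h))
    var = sum((ti - t_bar) ** 2 for ti in t)
    m = cov / var
    c = h_bar - m * t_bar
    return m, c

# Synthetic example: points that lie exactly on H = 2T + 1.
t = [0, 1, 2, 3]
h = [1, 3, 5, 7]
m, c = fit_line(t, h)
print(m, c)                                   # 2.0 1.0
residuals = [hi - (m * ti + c) for ti, hi in zip(t, h)]
print(residuals)                              # all ~0 here
```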
### Exercise
- ... data sets.
```
import numpy as np
import matplotlib.pyplot as plt
#random dataset
N=100
x=np.linspace(0,1,N)
#print(x)
y=-(np.random.rand(N)+0.5)*x + 2.0
#print(y)
import pandas as pd
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/forest-fires/forestfires.csv', index_col=0)
print(data)
y=data['temp']
x=data['RH']
N=len(x)
#iris dataset
#data = pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv")
#print(data)
#data.set_index('species')
#setosa=data[data['species'].str.contains("setosa")]
#x=setosa['sepal_length']
#y=setosa['petal_length']
#N=len(x)
import matplotlib.pyplot as plt
def myplot(x,y,style):
plt.plot(x,y,style);
plt.xlabel('predictor $x$');
plt.ylabel('response variable $y$');
myplot(x,y,'o')
```
# Standard steps on the data
### Empirical Mean
$$\bar{x} = \frac{1}{N}\sum_i{x_i}$$
$$\bar{y} = \frac{1}{N}\sum_i{y_i}$$
### Empirical Variance
$$ S^2 = \frac{1}{n-1} \sum_{i=1}^{n}{ \left( y_i - \bar{y} \right)^2 } $$
### Empirical Standard Deviation
$$ S = \sqrt{S^2}$$
Notice that standard deviation is expressed in the same units as $y_i$.
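Note that the $\frac{1}{n-1}$ factor above gives the *sample* variance, whereas `np.std`/`np.var` default to the $\frac{1}{n}$ (population) version; `ddof=1` is needed to reproduce $S^2$ exactly. A quick check with made-up numbers:

```python
import numpy as np

y = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Empirical variance with the 1/(n-1) factor, as in the formula above
S2_manual = np.sum((y - y.mean())**2) / (len(y) - 1)

print(np.var(y))          # NumPy default: 1/n factor -> 4.0 here
print(np.var(y, ddof=1))  # ddof=1: 1/(n-1) factor, matches S2_manual
```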
```
#def subPlots(var):
#fig1 = plt.figure()
#ax1 = fig1.add_subplot(221)
#ax1.plot(x)
#ax2 = fig1.add_subplot(222)
#ax2.plot(x)
def scanVariable(var):
#basic
print("Min = ",np.min(var))
print("Max = ",np.max(var))
print("Mean = ",np.mean(var))
print("SD = ",np.std(var))
print("Var = ",np.std(var)**2)
print("Median = ",np.median(var))
print("Length = ",len(var))
#visualize
nbins = int(np.sqrt(len(var)))
fig1 = plt.figure()
ax1 = fig1.add_subplot(131)
ax1.hist(var, bins=nbins, ec='black', color='purple')
ax1.set_xlabel('variable')
ax1.set_ylabel('count')
ax1.set_title("Original")
ax1 = fig1.add_subplot(132)
    n=ax1.hist(var/np.std(var), bins=nbins, ec='black', color='purple', density=True)
ax1.set_xlabel('variable')
#ax1.set_ylabel('count')
ax1.set_title("Normalized/Density")
#print(-np.sum(n[0]*np.log(n[0])))
ax1 = fig1.add_subplot(133)
    ax1.hist(var/np.std(var), bins=nbins, ec='black', color='purple', density=True, cumulative=True)
ax1.set_xlabel('variable')
#ax1.set_ylabel('count')
ax1.set_title("Cumulative")
print("Predictor variable:");
scanVariable(x);
print("Response variable:");
scanVariable(y);
m=1.0
c=2.0
Y1=0.1*x+c
Y2=0.3*x+c
Y3=0.6*x+c
myplot(x,y,'o');
myplot(x,Y1,'-');
#myplot(x,Y2,'-');
#myplot(x,Y3,'-');
x_mean = sum(x)/len(x);
y_mean = sum(y)/len(y);
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
```
### Centering
$x_i = x_i-\bar{x}$
$y_i = y_i-\bar{y}$
```
x=x-x_mean;
y=y-y_mean;
x_mean = sum(x)/len(x);
y_mean = sum(y)/len(y);
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
```
### Normalization/rescaling
This is done to have SD = 1, and mean = 0.
$x_i = \frac{x_i}{SD}$
$y_i = \frac{y_i}{SD}$
```
x=x/np.std(x);
y=y/np.std(y);
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
print(np.std(x))
print(np.std(y))
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
plt.axvline(x_mean);
plt.axhline(y_mean);
```
# Ordinary Least Squares
In this method, the sum of squares of the residuals, $\sum_i (Y_i-y_i)^2$, is minimised.
### Spread of the residual
```
d=Y1-y
def myhist(var,xlabel,ylabel):
plt.hist(var)
plt.xlabel(xlabel);
plt.ylabel(ylabel)
myhist(d,'$\epsilon$', 'Count')
```
We would like to identify the **slope** $m$ that minimises
$$ \sum_i (Y_i-y_i)^2 $$
i.e.
$$
\frac{\partial}{\partial m} \sum_i (Y_i-y_i)^2 = 0
$$
$$
\frac{\partial}{\partial m} \sum_i (mx_i-y_i)^2 = 0
$$
$$
\frac{\partial}{\partial m} \sum_i (mx_i-y_i)(mx_i-y_i) = 0
$$
$$
\frac{\partial}{\partial m} \sum_i (m^2 x_i^2 - 2 m x_i y_i + y_i^2) = 0
$$
$$
\sum_i (2 m x_i^2 - 2 x_i y_i) = 0
$$
$$
m=\frac{ \sum_i x_i y_i }
{ \sum_i x_i^2 }
$$
Notice that
$$
\frac{\partial^2}{\partial m^2} \sum_i (Y_i-y_i)^2 = \frac{\partial}{\partial m} \sum_i (2 m x_i^2 - 2 x_i y_i) = \sum_i 2 x_i^2 > 0
$$
therefore a minimum.
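The closed-form slope derived above, $m = \sum_i x_i y_i / \sum_i x_i^2$, can be sanity-checked against a generic least-squares solver; a small sketch on synthetic data (the data and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.7 * x + rng.normal(scale=0.1, size=200)

# Closed-form through-origin slope, as derived above
m_closed = np.sum(x * y) / np.sum(x**2)

# Same fit via least squares on a single-column design matrix
m_lstsq, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

print(m_closed, m_lstsq[0])  # the two estimates agree
```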
```
a = sum( (x-x_mean)*(y-y_mean) )
b = sum( (x-x_mean)**2 )
m = a/b
print(m)
c=0
Y=m*x+c
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
plt.axvline(x_mean);
plt.axhline(y_mean);
myplot(x,Y,'-');
c=y_mean - m*x_mean;
Y=m*x+c
myplot(x,y,'o');
myplot(x_mean,y_mean,'or');
plt.axvline(x_mean);
plt.axhline(y_mean);
myplot(x,Y,'-');
```
# Exercise 1. Ground cricket chirps
* **Ground cricket chirps**
The following data shows the relationship between chirps per second of a striped ground cricket and the corresponding ground temperature (in ºF)
* Determine a linear regression model equation to represent this data.
* Plot the data and the obtained equation.
* Decide on the goodness of fit of the model.
* Extrapolate: If the ground temperature reached 95º, then at what approximate rate would you expect the crickets to be chirping?
* Interpolate: With a listening device, you discovered that on a particular morning the crickets were chirping at a rate of 18 chirps per second. What was the approximate ground temperature that morning?
* If the ground temperature should drop to freezing (32º F), what happens to the cricket's chirping rate?
```
chirps_per_second = [20., 16., 19.8, 18.4, 17.1,15.5 ,14.7 ,15.7,15.4 ,16.3 ,15.0 ,17.2,
16.0 ,17.0 ,14.4 ]
temperature = [88.6,71.6,93.3,84.3,80.6,75.2,69.7,71.6,69.4,83.3,79.6,82.6,80.6,83.5,76.3]
```
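One possible starting point for this exercise, using `scipy.stats.linregress` (introduced later in this notebook); the variable names mirror the cell above:

```python
import numpy as np
from scipy import stats

chirps_per_second = [20., 16., 19.8, 18.4, 17.1, 15.5, 14.7, 15.7, 15.4, 16.3, 15.0, 17.2,
                     16.0, 17.0, 14.4]
temperature = [88.6, 71.6, 93.3, 84.3, 80.6, 75.2, 69.7, 71.6, 69.4, 83.3, 79.6, 82.6,
               80.6, 83.5, 76.3]

# Regress chirp rate on ground temperature
m, c, r, p, se = stats.linregress(temperature, chirps_per_second)
print("chirps ~ %.3f * temp + %.3f, r = %.3f" % (m, c, r))

# Extrapolate to 95 F, and invert the model for 18 chirps/s
print(m * 95 + c)     # predicted chirp rate at 95 F
print((18 - c) / m)   # temperature implied by 18 chirps/s
```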
# Significance Tests: Assessing the strength of the regression model
<a id='Spread of residuals'></a>
### Spread of residuals
* Residual spread (the term $\pm\epsilon$ in the regression model $Y=mX+c\pm\epsilon$) is an indicator.
```
d=Y-y;
myhist(d,'$\epsilon$','Count');
print(np.mean(d))
print(np.std(d))
```
### Residual plot
```
#m, c, r_value, p_value, std_err = stats.linregress(x,y);
prediction=m*x+c;
residual = prediction - y;
plt.plot(x,residual,'r+');
plt.xlabel("predictor x");
plt.ylabel("residual");
#myplot(x,m*x+c,'-r');
scanVariable(d)
```
### $R^2$ (R-squared)
$R^2$ (also called the 'coefficient of determination') is the ratio of the regression sum of squares to the total sum of squares: a statistical measure of how close the data are to the fitted regression line.
It is the **percentage of the response-variable variation that is explained by the linear model**.
If $y_i$ are the actual "outcomes"
and $Y_i$ are the estimated "outcomes"
then
$$
R^2 = \frac{\sum(Y_i-\bar{y})^2} {\sum(y_i-\bar{y})^2} = \frac{\text{Regression sum of squares}}{\text{Total/actual sum of squares}} = \frac{\text{Explained variance}}{\text{Total/actual variance}}
$$
If $R^2\rightarrow 1$, implies strong relationship.
If $R^2\rightarrow 0$, implies weak relationship.
```
num = sum( (Y - y_mean)**2 );
den = sum( (y - y_mean)**2 );
R2 = num/den;
print(R2)
```
### Mean squared error of the estimate (MSE)
$$\text{MSE} = {\frac{1}{N-2}\sum(Y_i-{y_i})^2}$$
where $$\sum_i{(Y_i-{y_i})^2}$$ is the $\textbf{residual sum of squares}$.
```
err = ( np.sum( (Y - y)**2 )/(N-2));
print(err)
```
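For simple linear regression with an intercept, $R^2$ equals the squared correlation coefficient $r^2$; a numeric sanity check on synthetic data (seed and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(size=300)

# Fit y = m*x + c by least squares
m, c = np.polyfit(x, y, 1)
Y = m * x + c

# R^2 as defined above, and the Pearson correlation r
R2 = np.sum((Y - y.mean())**2) / np.sum((y - y.mean())**2)
r = np.corrcoef(x, y)[0, 1]

print(R2, r**2)  # identical up to floating-point error
```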
# Covariance and correlation
$$Covar(x,y) = \frac{1}{n-1}\sum_i{ (x_i-\bar{x})(y_i-\bar{y})}$$
$$Corr(x,y) = \frac{Covar(x,y)}{SD_x SD_y} = \textbf{Correlation coefficient.}$$
where $SD_x$ and $SD_y$ are standard deviations.
- $-1 \leq Corr(x,y) \leq 1$
- $Corr(x,y)$ measures the "strength" of the linear relationship between $x,y$ data, with stronger relationships as $Corr(x,y) \rightarrow$-1 or 1.
- $Corr(x,y) = 0$ implies no linear relationship.
```
covar = sum((x-x_mean)*(y-y_mean))/(N-1);
corr = covar/(np.std(x)*np.std(y));
print(corr)
```
# Regression using scipy stats
```
from scipy import stats
m, c, r_value, p_value, std_err = stats.linregress(x,y);
myplot(x,y,'o');
myplot(x,m*x+c,'-');
print(r_value)
print(std_err)
```
# Regression using statsmodels
```
from statsmodels.formula.api import ols
import pandas as pd
data = pd.DataFrame({'x': x, 'y': y});
model = ols("y ~ x", data).fit();
print(model.summary());
```
# Exercise 2. Sales data
* Obtain the dataset using pandas from the following link:
'http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv'
For the "Advertising.csv" dataset:
* Set variable 'TV' and the corresponding 'sales'.
* Perform a regression 'through the origin'.
* Plot the data and the fitted lines.
* Calculate the Standard Error between the predicted model and the fitted data.
* Obtain $R^2$ value.
* Now, perform the same for the two further columns ('radio', 'newspaper') vs 'sales'.
* Compare the slopes between the three cases.
```
#import pandas as pd
#data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
#print(data)
#x=data['TV']
#y=data['sales']
#N=len(x)
```
# Brrrrrrrrrrrrreeeeeeeeaaaaaaaaaaaaaaaaaaakkkkkkkkkkkkkkkkkkkkkkkkkkk
# Outliers and Influential Observations
* Once a regression line is computed, a point which lies far from the line (large residual) is known as an **outlier**.
* If a point lies far from the other data in the horizontal direction, it is known as an **influential observation**. It has a considerable impact on slope.
## Theil-Sen regression
```
#outlier dataset
from sklearn import linear_model, datasets
n_samples = 100
n_outliers = 10
x, y, coef = datasets.make_regression(n_samples=n_samples, n_features=1,n_informative=1, noise=10,coef=True, random_state=0)
# Add outlier data
np.random.seed(0)
x[:n_outliers] = 3 + 0.5 * np.random.normal(size=(n_outliers, 1))
y[:n_outliers] = -3 + 10 * np.random.normal(size=n_outliers)
x=[row[0] for row in x]
#y=[row[0] for row in y]
N=len(x);
#print(y)
from numpy import array
x = array( x )
m, c, r_value, p_value, std_err = stats.linregress(x,y);
myplot(x,y,'o');
myplot(x,m*x+c,'-r');
# Theil-Sen estimator: median of ALL pairwise slopes
slopes = []
for j in range(0, len(x)):
    for i in range(j+1, len(x)):
        if x[j] != x[i]:
            slopes.append((y[j]-y[i])/(x[j]-x[i]))
m = np.median(slopes)
c = np.median(y - m*x)  # robust intercept: median of the residual offsets
print(m)
myplot(x,y,'o');
myplot(x,m*x+c,'-g');
```
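SciPy ships a reference implementation, `scipy.stats.theilslopes`, which takes the median of all pairwise slopes; a quick comparison against ordinary least squares on outlier-contaminated data (the synthetic data below are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)
y[:10] += 30  # contaminate the left end with outliers

ls_slope = np.polyfit(x, y, 1)[0]                       # OLS, pulled by the outliers
ts_slope, ts_intercept, lo, hi = stats.theilslopes(y, x)  # robust Theil-Sen fit

print(ls_slope, ts_slope)  # Theil-Sen stays close to the true slope 2
```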
## Residual plot : amplify the presence of outliers
```
#m, c, r_value, p_value, std_err = stats.linregress(x,y);
prediction=m*x+c;
residual = prediction - y;
plt.plot(x,residual,'r+');
plt.xlabel("predictor x");
plt.ylabel("residual");
#myplot(x,m*x+c,'-r');
```
## Outliers using sklearn
```
from sklearn import linear_model, datasets
n_samples = 100
n_outliers = 10
x, y, coef = datasets.make_regression(n_samples=n_samples, n_features=1,n_informative=1, noise=10,coef=True, random_state=0)
# Add outlier data
np.random.seed(0)
x[:n_outliers] = 3 + 0.5 * np.random.normal(size=(n_outliers, 1))
y[:n_outliers] = -3 + 10 * np.random.normal(size=n_outliers)
# Fit line using all data
lr = linear_model.LinearRegression()
lr.fit(x, y)
# Robustly fit linear model with RANSAC algorithm
ransac = linear_model.RANSACRegressor()
ransac.fit(x, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
# Predict data of estimated models
line_x = np.arange(x.min(), x.max())[:, np.newaxis]
line_y = lr.predict(line_x)
line_y_ransac = ransac.predict(line_x)
#plots
plt.scatter(x[inlier_mask], y[inlier_mask])
plt.scatter(x[outlier_mask], y[outlier_mask])
```
## Remark on extrapolation
* Range of validity of the regression model must be made explicit.
# PART-II Multiple Regression : Going back to start
- The humidity can depend on other parameters as well, then
$H_i = m_1 T_i + m_2 X_{2i} + m_3 X_{3i} + ...$
(changing the notation)
$Y_i = m_1 X_{1i} + m_2 X_{2i} + m_3 X_{3i} + \ldots$
or
$Y_i=\sum_{k=1}^p m_k X_{ki}$ for $p$ number of predictors.
or
$\mathbf{Y} = \mathbf{X}\mathbf{m}$ in matrix notation, with $\mathbf{X}$ the matrix of predictors.
- The residual sum of squares will be
$$\sum_i( Y_i - \sum_{k=1}^p m_k X_{ki} )^2$$
To minimize the residual,
$$
\frac{\partial}{\partial m_j} \sum_i \Big( Y_i - \sum_{k=1}^p m_k X_{ki} \Big)^2 = 0, \qquad j = 1, \ldots, p
$$
For example, for two predictors ($p=2$):
$$
\frac{\partial}{\partial m_1} \sum_i( Y_i - m_1 X_{1i} - m_2 X_{2i})^2 = 0 \\
\implies \sum_i ( X_{1i} Y_i - m_1 X_{1i} X_{1i} - m_2 X_{1i} X_{2i}) = 0 \\
\implies m_1 = \frac { \sum_i ( X_{1i} Y_i - m_2 X_{1i} X_{2i}) }{\sum_i X_{1i} X_{1i}}
$$
$$
\frac{\partial}{\partial m_2} \sum_i( Y_i - m_1 X_{1i} - m_2 X_{2i})^2 = 0 \\
\implies \sum_i ( X_{2i} Y_i - m_1 X_{2i} X_{1i} - m_2 X_{2i} X_{2i}) = 0 \\
\implies m_2 = \frac { \sum_i ( X_{2i} Y_i - m_1 X_{2i} X_{1i}) }{\sum_i X_{2i} X_{2i}}
$$
and solve for $m_1$ and $m_2$.
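Instead of solving the coupled equations by hand, the same minimization can be written as the normal equations $\mathbf{m} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$ and solved numerically; a sketch with two synthetic predictors (coefficients and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                     # two predictors X1, X2
true_m = np.array([1.5, -0.7])
Y = X @ true_m + rng.normal(scale=0.1, size=n)

# Normal equations: solve (X^T X) m = X^T Y
m_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(m_hat)  # close to [1.5, -0.7]
```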
```
import pandas as pd
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
#X = df_adv[['TV', 'Radio']]
#y = df_adv['Sales']
#df_adv.head()
print(data)
x1=data['TV']
x2=data['radio']
x3=data['newspaper']
y=data['sales']
#print(x1,y)
plt.plot(x1,y,'o');
plt.xlabel('TV')
plt.ylabel('sales')
myplot(x2,y,'o');
plt.xlabel('radio')
plt.ylabel('sales')
plt.plot(x3,y,'o')
plt.xlabel('newspaper');
plt.ylabel('sales');
```
## Proposed regression model
$sales = m_0 + m_1 \times TV + m_2 \times radio + m_3 \times newspaper$
```
import statsmodels.api as sm
X = data[['TV', 'radio']];
y = data['sales'];
## fit a OLS model with intercept on TV and Radio
X = sm.add_constant(X);
model = sm.OLS(y, X).fit();#
print('m=',model.params)
## Create the 3d plot
# TV/Radio grid for 3d plot
xx1, xx2 = np.meshgrid(np.linspace(X.TV.min(), X.TV.max(), 100),
np.linspace(X.radio.min(), X.radio.max(), 100));
# plot the hyperplane by evaluating the parameters on the grid
Z = model.params[0] + model.params[1] * xx1 + model.params[2] * xx2;
# create matplotlib 3d axes
fig = plt.figure(figsize=(12, 8));
#ax = Axes3D(fig, azim=-115, elev=15)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure();
ax = fig.add_subplot(111, projection='3d');
# plot hyperplane
surf = ax.plot_surface(xx1, xx2, Z, cmap=plt.cm.Blues_r, alpha=0.6, linewidth=0);
# plot data points - points over the HP are white, points below are black
resid = y - model.predict(X);
ax.scatter(X[resid >= 0].TV, X[resid >= 0].radio, y[resid >= 0], color='black', alpha=1.0, facecolor='white');
ax.scatter(X[resid < 0].TV, X[resid < 0].radio, y[resid < 0], color='black', alpha=1.0);
# set axis labels
ax.set_xlabel('TV');
ax.set_ylabel('radio');
ax.set_zlabel('sales');
```
## See model summary
```
print(model.summary());
```
## Exercise (Multiple regression)
* **Iris dataset**
- Load the 'iris' dataset from the seaborn library.
- You will notice that there are different species with their sepal and petal dimensions. For the 'setosa' species, create and store the data in a separate dataframe.
- Now run a multiple regression model with 'sepal_length' as the output variable and 'sepal_width' and 'petal_width' as predictors.
- Print the intercept and the slopes of the predicted model.
- Calculate the $R^2$ of the model.
- Interpret whether there is a correlation.
```
#iris dataset for multiple regression
#data = pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv")
#print(data)
#data.set_index('species')
#setosa=data[data['species'].str.contains("setosa")]
#x1=setosa['sepal_length']
#x2=setosa['petal_length']
#x3=setosa['petal_length']
#y = data['sepal_length'];
#X = data[['sepal_width', 'petal_width']];
```
## Appendix
| Element | Description|
|----|----|
| Dep. Variable | Which variable is the response in the model|
|Model | What model you are using in the fit|
|Method | How the parameters of the model were calculated
|No. Observations |The number of observations (examples)
|DF Residuals | Degrees of freedom of the residuals. Number of observations – number of parameters
|DF Model | Number of parameters in the model (not including the constant term if present)
The right part of the first table shows the goodness of fit
|Element | Description|
|----|----|
|R-squared |The coefficient of determination. A statistical measure of how well the regression line approximates the real data points|
|Adj. R-squared | The above value adjusted based on the number of observations and the degrees-of-freedom of the residuals|
|F-statistic | A measure how significant the fit is. The mean squared error of the model divided by the mean squared error of the residuals|
|Prob (F-statistic) | The probability that you would get the above statistic, given the null hypothesis that they are unrelated|
|Log-likelihood | The log of the likelihood function.|
|AIC |The Akaike Information Criterion. Adjusts the log-likelihood based on the number of observations and the complexity of the model.
|BIC | The Bayesian Information Criterion. Similar to the AIC, but has a higher penalty for models with more parameters.|
The left part of the table reports for each of the coefficients
|Description| Name of the term in the model|
|-----------|---------------------|
|coef | The estimated value of the coefficient|
|std err | The basic standard error of the estimate of the coefficient. More sophisticated errors are also available.|
|t |The t-statistic value. This is a measure of how statistically significant the coefficient is.|
|P > t | P-value that the null-hypothesis that the coefficient = 0 is true. If it is less than the confidence level, often 0.05, it indicates that there is a statistically significant relationship between the term and the response.|
|Element |Description|
|-----------|---------------------|
|Skewness | A measure of the symmetry of the data about the mean. Normally-distributed errors should be symmetrically distributed about the mean (equal amounts above and below the line).|
|Kurtosis | A measure of the shape of the distribution. Compares the amount of data close to the mean with those far away from the mean (in the tails).|
|Omnibus | D’Agostino’s test. It provides a combined statistical test for the presence of skewness and kurtosis.|
|Prob(Omnibus) | The above statistic turned into a probability|
|Jarque-Bera | A different test of the skewness and kurtosis|
|Prob (JB) |The above statistic turned into a probability|
|Durbin-Watson | A test for the presence of autocorrelation (that the errors are not independent.) Often important in time-series analysis|
|Cond. No | A test for multicollinearity (if in a fit with multiple parameters, the parameters are related with each other).|
| github_jupyter |
# 2 - Autoencoder
```
import pandas as pd
import numpy as np
import glob
import matplotlib.pyplot as plt
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg16 import preprocess_input
from livelossplot import PlotLossesKeras
from tensorflow.keras.preprocessing import image
from tensorflow.keras.callbacks import ModelCheckpoint
```
### Load VGG16 embeddings
```
im_vecs = pd.read_pickle('../data/image_vecs_vgg16.pkl')
im_vecs
```
### Train basic autoencoder
```
im_vec_len = 4096
encoding_dim = 256
im_input=Input(shape=(im_vec_len,), name="Image-Input")
x = GaussianNoise(stddev=0.1)(im_input)
x = Dense(1056, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(528, activation='relu')(x)
encoded = Dense(encoding_dim, activation='relu')(x)
y = Dense(528, activation='relu')(encoded)
y = Dropout(0.2)(y)
y = Dense(1056, activation='relu')(y)
decoded = Dense(im_vec_len, activation='linear')(y)
autoencoder = Model(im_input, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
best = ModelCheckpoint('../models/ae.h5',
monitor='val_loss',
verbose=0,
save_best_only=True,
mode='auto')
autoencoder.fit(np.vstack(im_vecs.image_features), np.vstack(im_vecs.image_features),
epochs=100,
batch_size=32,
shuffle=True,
validation_split=0.2,
callbacks=[PlotLossesKeras(),best])
```
### Encode image features
```
encoder = Model(im_input, encoded)
encoded_ims = encoder.predict(np.vstack(im_vecs.image_features))
im_vecs['image_features_encoded'] = pd.Series(encoded_ims.tolist()).to_frame()
im_vecs.to_pickle('../data/image_vecs_encoded.pkl')
encoder.save('../models/encoder.h5')
```
## Convolutional Autoencoder - experimentation
```
filenames = glob.glob('../data/metadata/images/*jpg')
image_size = (256,256)
im_data = []
for i, filename in enumerate(filenames):
try:
img = image.load_img(filename, target_size=image_size)
x = image.img_to_array(img)/255.
im_data.append(x)
except:
continue
im_data = np.array(im_data)
input_img = Input(shape=(256, 256, 3)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (32, 32, 8), i.e. 8192-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.summary()
best = ModelCheckpoint('../models/cae.h5',
                       monitor='val_loss',
                       verbose=0,
                       save_best_only=True,
                       mode='auto')
autoencoder.fit(im_data, im_data,
epochs=50,
batch_size=256,
shuffle=True,
validation_split=0.2,
callbacks=[best, PlotLossesKeras()])
x_test = im_data
decoded_imgs = autoencoder.predict(x_test)
decoded_imgs.shape
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(256, 256, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i+n+1)
plt.imshow(decoded_imgs[i].reshape(256, 256, 3))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
| github_jupyter |
# Stroke Key-word extraction
```
import os
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
raw_dir = '../../data/raw'
readings = pd.read_csv(os.path.join(raw_dir, 'brain_mr_ct_result.csv'), header=None)
df = pd.read_csv(os.path.join(raw_dir, "lab.csv"), parse_dates=['event_time'])
df.head()
cutoff_start = pd.Timestamp(year=2100, month=1, day=1, hour=0, minute=0)
cutoff_end = pd.Timestamp(year=2100, month=4, day=1, hour=0, minute=0)
is_postop = (df.event_time > cutoff_start) & (df.event_time <= cutoff_end)
is_img = df.event_type == 'IMG'
is_stroke_related = df.event_name.apply(lambda x: 'CT' in x or 'MR' in x)
df_valid = df[is_postop & is_img & is_stroke_related]
del df
df_valid.head()
df_valid.loc[:, 'event_result'] = df_valid['event_result'].apply(lambda x: x.lower())
```
# Parse results
```
s = """ischemi
infarc
high-intensity lesions on DWI-low intensity lesion on ADC
diffusion restriction lesion
stroke
T2 high signal intensity
low attenuation-GRE
hemorrhagic transformation
ICH
hemorrhage"""
s = s.lower().split('\n')
s
negating_words = ['no', 'old', 'known', 'previous']
keywords = s
def contains_which_keyword(doc):
has = []
for kw in keywords:
if kw in doc:
if kw == 'stroke':
if 'brain mri acute stroke' not in doc:
has.append(kw)
else:
has.append(kw)
if len(has) == 0:
return None
else:
return has
def contains_which_keyword_not_negated(doc):
doc = doc.split('\n')
for i, line in enumerate(doc):
for kw in keywords:
if kw in line:
has_any_negated_kw = False
for nw in negating_words:
match = re.search(
r'(?<={negation} ).*{keyword}'.format(negation=nw, keyword=kw),
line
)
has_any_negated_kw |= match is not None
if not has_any_negated_kw and 'brain mri acute stroke' not in line:
return True
return False
def contains_keyword_at_line_number(kw):
def func(doc):
doc = doc.split('\n')
for i, line in enumerate(doc):
if kw in line:
out_start = re.search(r'(?<!^no) .*{}'.format(kw), line)
not_negated_at_start = out_start is not None
out_middle = re.search(r'(?<! no) .*{}'.format(kw), line)
not_negated_mid_sentence = out_middle is not None
if not_negated_at_start and not_negated_mid_sentence:
return True, i
return False, None
return func
nw='no'
kw='infarc'
print(re.search(r'(?<={negation} ).*{keyword}'.format(negation=nw, keyword=kw), 'no evidence of large territorial infarction or hemorrhage').group())
# print(re.search(r'(?<='' ).*{keyword}'.format(negation=nw, keyword=kw), 'no evidence of large territorial infarction or hemorrhage').group())
line = 'no evidence of large territorial infarction or hemorrhage'
for kw in keywords:
has_any_negated_kw = False
for nw in negating_words:
match = re.search(
r'(?<={negation} ).*{keyword}'.format(negation=nw, keyword=kw),
line
)
has_any_negated_kw |= match is not None
has_any_negated_kw
print(df_valid.event_result.iloc[0])
contains_which_keyword_not_negated(df_valid.event_result.iloc[0])
df_valid['contains_which_kw'] = df_valid['event_result'].apply(contains_which_keyword)
df_valid['contains_kw_not_negated'] = df_valid['event_result'].apply(contains_which_keyword_not_negated)
df_valid[df_valid.case_id == 1515]
df_valid['contains_kw'] = df_valid['contains_which_kw'].apply(lambda x: x is not None)
df_valid.event_time.max()
df_valid.to_csv("../../data/interim/filtered_readings.csv", index=False)
df_valid['contains_which_kw'].apply(lambda x: x is not None).sum()
df_valid['contains_kw'].sum(), len(df_valid[df_valid['contains_kw']].case_id.unique())
df_valid['contains_kw_not_negated'].sum()
len(df_valid[df_valid['contains_kw_not_negated']].case_id.unique())
d = df_valid[['case_id', 'contains_kw_not_negated']].groupby('case_id').sum().reset_index().rename(columns={'case_id': 'CaseID'})
clinical = pd.read_csv("../data/raw/clinical.csv")
clinical.columns
clinical.loc[:, 'CaseID'] = clinical['CaseID'].astype(int)
clinical[['CaseID', 'Department']]
clinical.Department
case_ids = clinical.CaseID.dropna()
case_ids = case_ids.astype(int)
labels = case_ids.to_frame()
labels = labels.merge(d, how='outer', on='CaseID').rename(columns={'contains_kw_not_negated': 'label'})
labels = labels.fillna(0)
labels.label = labels.label > 0
labels.label = labels.label.astype(int)
labels.head()
labels.to_csv("../../data/processed/labels.csv", index=False)
clinical.merge(d, how='outer', on='CaseID').head(20)
```
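The label-building steps in the cell above (group the per-reading flags by case, merge onto the case list, fill missing cases with 0, binarize) can be seen on a toy frame; the column names follow the code above but the data are made up:

```python
import pandas as pd

# Toy stand-in for the per-reading flags (hypothetical data)
readings = pd.DataFrame({'case_id': [1, 1, 2, 3],
                         'contains_kw_not_negated': [True, False, False, True]})
cases = pd.DataFrame({'CaseID': [1, 2, 3, 4]})  # case 4 has no readings at all

d = (readings.groupby('case_id').sum().reset_index()
     .rename(columns={'case_id': 'CaseID'}))
labels = cases.merge(d, how='outer', on='CaseID').fillna(0)
labels['label'] = (labels['contains_kw_not_negated'] > 0).astype(int)
print(labels[['CaseID', 'label']])  # cases 1 and 3 get label 1; cases 2 and 4 get 0
```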
In ascending order: case_id, exam date.
## Loose criterion
* Rows for patients with at least one keyword match (the loose criterion) are moved to the top.
* But keep every row for those case ids together, even the ones labeled 0 (move them to the top as well).
* Within that group, sort ascending by case id and exam date.
## Tail: everyone else
* Sort by case id and exam date.
```
df_valid = df_valid.sort_values(['case_id', 'event_time'])
head = df_valid[df_valid.contains_kw]
tail = df_valid[~df_valid.contains_kw]
head_case_ids = head.case_id.unique()
bring_to_head = tail.case_id.apply(lambda x: x in head_case_ids)
len(head.case_id.unique()), len(pd.concat([head, tail[bring_to_head]], axis=0).case_id.unique())
head = pd.concat([head, tail[bring_to_head]]).sort_values(['case_id', 'event_time'])
tail = tail[~bring_to_head]
tail.shape
head.shape
df_valid.shape
tail.shape[0] + head.shape[0]
pd.concat([head, tail]).to_csv("../data/interim/mri_readings_sorted.csv", index=False)
head.columns
sorted_df = pd.read_csv("../data/interim/mri_readings_sorted.csv")
sorted_df.head()
clinical = clinical.rename(columns={'CaseID': 'case_id'})
sorted_df.merge(clinical[['case_id', 'Department']], how='inner', on='case_id').to_csv("../data/interim/mri_readings_sorted_with_dept.csv", index=False)
```
# Code cleanup
```
raw_dir = '../data/raw'
keywords = """ischemi
infarc
high-intensity lesions on DWI-low intensity lesion on ADC
diffusion restriction lesion
stroke
T2 high signal intensity
low attenuation-GRE
hemorrhagic transformation
ICH
hemorrhage""".lower().split('\n')
negating_words = ['no', 'old', 'known', 'previous']
df = pd.read_csv(os.path.join(raw_dir, "lab.csv"), parse_dates=['event_time'], nrows=100)
df_valid.event_result.iloc[:30].tolist()
df = pd.read_csv(os.path.join(raw_dir, "lab.csv"), parse_dates=['event_time'])
# Time range
cutoff_start = pd.Timestamp(year=2100, month=1, day=1, hour=0, minute=0)
cutoff_end = pd.Timestamp(year=2100, month=4, day=1, hour=0, minute=0)
# Conditions
is_postop = (df.event_time > cutoff_start) & (df.event_time <= cutoff_end)
is_img = df.event_type == 'IMG'
is_stroke_related = df.event_name.apply(lambda x: 'CT' in x or 'MR' in x)
# Filter data
df_valid = df[is_postop & is_img & is_stroke_related]
# Delete original df (too big)
del df
df_valid.loc[:, 'event_result'] = df_valid['event_result'].apply(lambda x: x.lower())
def contains_which_keyword(doc):
has = []
for kw in keywords:
if kw in doc:
if kw == 'stroke':
if 'brain mri acute stroke' not in doc:
has.append(kw)
else:
has.append(kw)
if len(has) == 0:
return None
else:
return has
def contains_which_keyword_not_negated(doc):
doc = doc.split('\n')
for i, line in enumerate(doc):
for kw in keywords:
if kw in line:
has_any_negated_kw = False
for nw in negating_words:
match = re.search(
r'(?<={negation} ).*{keyword}'.format(negation=nw, keyword=kw),
line
)
has_any_negated_kw |= match is not None
if not has_any_negated_kw and 'brain mri acute stroke' not in line:
return True
return False
df_valid.loc[:, 'contains_which_kw'] = df_valid['event_result'].apply(contains_which_keyword)
df_valid.loc[:, 'contains_kw_not_negated'] = df_valid['event_result'].apply(contains_which_keyword_not_negated)
df_valid['contains_kw'] = df_valid['contains_which_kw'].apply(lambda x: x is not None)
```
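The negation test above relies on a fixed-width lookbehind, `(?<={negation} ).*{keyword}`, which matches only when the keyword appears somewhere after a negating word on the same line; a minimal illustration:

```python
import re

line = 'no evidence of large territorial infarction or hemorrhage'

# Lookbehind: match text preceded by "no " that eventually contains the keyword
pattern = r'(?<={negation} ).*{keyword}'.format(negation='no', keyword='infarc')
print(re.search(pattern, line).group())  # matches, so "infarc" is treated as negated

# Without a negating word before it, the same keyword is NOT flagged as negated
print(re.search(pattern, 'acute infarction in the left MCA territory'))  # -> None
```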
## Save readings with filters
```
df_valid.to_csv("../../data/interim/filtered_readings.csv", index=False)
```
## Convert to binary label
Using string
```
labels = df_valid[['case_id', 'contains_kw_not_negated']].groupby('case_id').sum().reset_index()
labels = labels.rename(columns={'case_id': 'CaseID', 'contains_kw_not_negated': 'label'})
labels['label'] = (labels['label'] > 0).astype(int)
```
| github_jupyter |
# Dirichlet Distribution
The Dirichlet distribution can be viewed as an extension of the beta distribution. The beta distribution is used in Bayesian models of a single (univariate) random variable taking values between 0 and 1, while the Dirichlet distribution is used in Bayesian models of multivariate random variables whose values lie between 0 and 1. In addition, the Dirichlet distribution imposes the constraint that the variables must sum to 1.
For example, a random variable following a Dirichlet distribution with $K=3$ can take samples such as:
$$(1, 0, 0)$$
$$(0.5, 0.5, 0)$$
$$(0.2, 0.3, 0.5)$$
The probability density function of the Dirichlet distribution is:
$$ f(x_1, x_2, \cdots, x_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} $$
where
$$ \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} $$
subject to the constraint
$$ \sum_{i=1}^{K} x_i = 1 $$
In this formula, $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is the parameter vector of the Dirichlet distribution.
## Relationship between the beta and Dirichlet distributions
The beta distribution can be seen as a Dirichlet distribution with $K=2$.
That is, setting $x_1 = x$, $x_2 = 1 - x$, $\alpha_1 = a$, $\alpha_2 = b$ gives
$$
\begin{eqnarray}
\text{Beta}(x;a,b)
&=& \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \\
&=& \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \\
&=& \frac{1}{\mathrm{B}(\alpha_1, \alpha_2)} \prod_{i=1}^2 x_i^{\alpha_i - 1}
\end{eqnarray}
$$
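This identity can be verified numerically, assuming `scipy.stats` is available:

```python
import numpy as np
import scipy.stats as stats

a, b = 2.0, 3.0
xs = np.linspace(0.05, 0.95, 10)

beta_pdf = stats.beta(a, b).pdf(xs)
# A K=2 Dirichlet evaluated at (x, 1-x) gives the same density
diri_pdf = np.array([stats.dirichlet([a, b]).pdf([x, 1 - x]) for x in xs])

print(np.allclose(beta_pdf, diri_pdf))  # -> True
```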
## Moments of the Dirichlet distribution
The mean, mode, and variance of the Dirichlet distribution are as follows.
* Mean
$$E[x_k] = \dfrac{\alpha_k}{\alpha}$$
where
$$\alpha=\sum\alpha_k$$
* Mode
$$ \dfrac{\alpha_k - 1}{\alpha - K}$$
* Variance
$$\text{Var}[x_k] =\dfrac{\alpha_k(\alpha - \alpha_k)}{\alpha^2(\alpha + 1)}$$
The mean formula shows that the parameter vector $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is a shape factor that determines which of $(x_1, x_2, \ldots, x_K)$ is likely to come out larger. If all $\alpha_i$ are equal, all $x_i$ have the same distribution.
The variance formula also shows that the larger the magnitude of $\boldsymbol\alpha$, the smaller the variance, i.e. the more tightly the samples concentrate around particular values.
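The mean and variance formulas can be checked numerically against `scipy.stats.dirichlet` (assuming SciPy is available):

```python
import numpy as np
import scipy.stats as stats

alpha = np.array([2.0, 3.0, 5.0])
a0 = alpha.sum()

# Mean and variance formulas from above
mean_formula = alpha / a0
var_formula = alpha * (a0 - alpha) / (a0**2 * (a0 + 1))

d = stats.dirichlet(alpha)
print(np.allclose(d.mean(), mean_formula))  # -> True
print(np.allclose(d.var(), var_formula))    # -> True
```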
## Applications of the Dirichlet distribution
Consider the following problem, which is a special case of the Dirichlet distribution with $K=3$ and $\alpha_1 = \alpha_2 = \alpha_3$.
<img src="https://datascienceschool.net/upfiles/d0acaf490aaa41389b975e20c58ac1ee.png" style="width:90%; margin: 0 auto 0 auto;">
The three-dimensional Dirichlet problem can be viewed as generating points on the equilateral triangle that connects (1,0,0), (0,1,0), and (0,0,1) in 3D space, as in the following figure.
```
import numpy as np
import scipy as sp
import scipy.stats  # makes sp.stats available below
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
x = [1,0,0]
y = [0,1,0]
z = [0,0,1]
verts = [list(zip(x, y, z))]  # zip is a lazy iterator in Python 3; Poly3DCollection needs concrete vertex lists
ax.add_collection3d(Poly3DCollection(verts, edgecolor="k", lw=5, alpha=0.4))
ax.text(1, 0, 0, "(1,0,0)", position=(0.7,0.1))
ax.text(0, 1, 0, "(0,1,0)", position=(0,1.04))
ax.text(0, 0, 1, "(0,0,1)", position=(-0.2,0))
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
ax.view_init(30, -20)
tmp_planes = ax.zaxis._PLANES
# set origin ( http://stackoverflow.com/questions/15042129/changing-position-of-vertical-z-axis-of-3d-plot-matplotlib )
ax.yaxis._PLANES = (tmp_planes[2], tmp_planes[3],
tmp_planes[0], tmp_planes[1],
tmp_planes[4], tmp_planes[5])
ax.zaxis._PLANES = (tmp_planes[2], tmp_planes[3],
tmp_planes[0], tmp_planes[1],
tmp_planes[4], tmp_planes[5])
plt.show()
```
The following function plots the generated points on a 2-dimensional triangle so that they can be inspected.
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

def plot_triangle(X, kind):
    # orthonormal basis of the plane x1 + x2 + x3 = 1
    n1 = np.array([1, 0, 0])
    n2 = np.array([0, 1, 0])
    n3 = np.array([0, 0, 1])
    n12 = (n1 + n2) / 2
    m1 = np.array([1, -1, 0])
    m2 = n3 - n12
    m1 = m1 / np.linalg.norm(m1)
    m2 = m2 / np.linalg.norm(m2)
    # project the 3-d samples onto the 2-d triangle plane
    X1 = (X - n12).dot(m1)
    X2 = (X - n12).dot(m2)
    g = sns.jointplot(X1, X2, kind=kind, xlim=(-0.8, 0.8), ylim=(-0.45, 0.9))
    g.ax_joint.axis("equal")
    plt.show()
```
If we approach this problem naively by generating three independent uniform random variables between 0 and 1 and normalizing them so that their sum is 1, the probability mass concentrates near the center of the triangle, as the following figure shows. That is, the samples are not spread evenly.
```
X1 = np.random.rand(1000, 3)
X1 = X1/X1.sum(axis=1)[:, np.newaxis]
plot_triangle(X1, kind="scatter")
plot_triangle(X1, kind="hex")
```
However, a Dirichlet distribution with $\alpha=(1,1,1)$ generates evenly spread samples, as follows.
```
import scipy as sp
import scipy.stats

X2 = sp.stats.dirichlet((1, 1, 1)).rvs(1000)
plot_triangle(X2, kind="scatter")
plot_triangle(X2, kind="hex")
```
When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at a particular location, as follows. This property can be applied to Bayesian estimation problems for the parameters of a multinomial distribution.
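As a small sketch of that Bayesian use (the counts below are made-up illustration data): the Dirichlet is the conjugate prior of the multinomial, so observing counts $\mathbf{n} = (n_1, n_2, n_3)$ simply updates a $\text{Dir}(\boldsymbol\alpha)$ prior to a $\text{Dir}(\boldsymbol\alpha + \mathbf{n})$ posterior.

```python
import numpy as np
from scipy import stats

alpha_prior = np.array([1.0, 1.0, 1.0])   # uniform prior over the simplex
counts = np.array([2, 5, 3])              # made-up multinomial observations

# Conjugate update: posterior parameters are prior plus observed counts
alpha_post = alpha_prior + counts
posterior = stats.dirichlet(alpha_post)

# Posterior mean estimate of the multinomial probabilities
assert np.allclose(posterior.mean(), alpha_post / alpha_post.sum())
```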
```
def project(x):
    n1 = np.array([1, 0, 0])
    n2 = np.array([0, 1, 0])
    n3 = np.array([0, 0, 1])
    n12 = (n1 + n2) / 2
    m1 = np.array([1, -1, 0])
    m2 = n3 - n12
    m1 = m1 / np.linalg.norm(m1)
    m2 = m2 / np.linalg.norm(m2)
    return np.dstack([(x - n12).dot(m1), (x - n12).dot(m2)])[0]

def project_reverse(x):
    n1 = np.array([1, 0, 0])
    n2 = np.array([0, 1, 0])
    n3 = np.array([0, 0, 1])
    n12 = (n1 + n2) / 2
    m1 = np.array([1, -1, 0])
    m2 = n3 - n12
    m1 = m1 / np.linalg.norm(m1)
    m2 = m2 / np.linalg.norm(m2)
    return x[:, 0][:, np.newaxis] * m1 + x[:, 1][:, np.newaxis] * m2 + n12
eps = np.finfo(float).eps * 10
X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])
import matplotlib.tri as mtri
triang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])
refiner = mtri.UniformTriRefiner(triang)
triang2 = refiner.refine_triangulation(subdiv=6)
XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])
pdf = sp.stats.dirichlet((1,1,1)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((3,4,2)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((16,24,14)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
```
# Facies classification using Machine Learning
#### Bird Team: PG+AC
```
%matplotlib inline
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.multiclass import OneVsOneClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, cross_val_predict, LeaveOneGroupOut, LeavePGroupsOut
from sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel, RFECV
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.base import clone
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from stacking_classifiers import *
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
pd.options.mode.chained_assignment = None
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
print(set(training_data["Well Name"]))
training_data.head()
well_data = pd.read_csv('./../validation_data_nofacies.csv')
print(set(well_data["Well Name"]))
print(well_data.shape)
well_data.head()
# concat train and test for processing
well_data["origin"] = 'test'
training_data["origin"] = 'train'
df = pd.concat([well_data,training_data],axis=0,ignore_index=True)[list(training_data.columns)]
df['Well Name'] = df['Well Name'].astype('category')
df.head(10)
# add some features based on the well data.
# nb points : can be correlated with how soft soil is ?
print("session")
sessionsize = df.groupby(["Well Name",'Formation']).size().reset_index()
sessionsize.columns = ["Well Name",'Formation','formation_size']
df = pd.merge(df,sessionsize,how='left',on = ["Well Name",'Formation'])
# depth :
print("depth")
sessionsize = df.groupby(["Well Name",'Formation'])["Depth"].min().reset_index()
sessionsize.columns = ["Well Name",'Formation','minimum_depth']
df = pd.merge(df,sessionsize,how='left',on = ["Well Name",'Formation'])
sessionsize = df.groupby(["Well Name",'Formation'])["Depth"].max().reset_index()
sessionsize.columns = ["Well Name",'Formation','maximum_depth']
df = pd.merge(df,sessionsize,how='left',on = ["Well Name",'Formation'])
df['formation_depth'] = df["maximum_depth"] - df["minimum_depth"]
df["soft_indic"] = df['formation_depth'] / df["formation_size"]
# add avgs of feat
print("add avgs of feat")
list_to_avg = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
for val in list_to_avg:
    df[val + "_min"] = df.groupby(["Well Name",'Formation'])[val].transform(np.min)
    df[val + "_max"] = df.groupby(["Well Name",'Formation'])[val].transform(np.max)
    df[val + "_mean"] = df.groupby(["Well Name",'Formation'])[val].transform(np.mean)
    df[val + "_var"] = df.groupby(["Well Name",'Formation'])[val].transform(np.var)
# add distance features = an attempt at regularization.
print("add distances feat.")
for val in list_to_avg:
    df[val + "_min_dist"] = df[val] - df[val + "_min"]
    df[val + "_max_dist"] = df[val] - df[val + "_max"]
    df[val + "_mean_dist"] = df[val] - df[val + "_mean"]
# add lag and lead !
print("lag lead")
list_to_lag = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
for val in list_to_lag:
    for lag in range(1, 11):
        df[val+'_lag_'+str(lag)] = df[val] - df.groupby("Well Name")[val].shift(periods=lag)
        df[val+'_lead_'+str(lag)] = df[val] - df.groupby("Well Name")[val].shift(periods=-lag)
# adding some Formation lag and lead.
for lag in range(1, 3):
    df['Formation'+'_lag_'+str(lag)] = df.groupby("Well Name")['Formation'].shift(periods=lag)
    df['Formation'+'_lead_'+str(lag)] = df.groupby("Well Name")['Formation'].shift(periods=-lag)
    df['Formation'+'_lag_'+str(lag) + 'equal'] = (df['Formation'+'_lag_'+str(lag)] == df["Formation"]).astype(int)
    df['Formation'+'_lead_'+str(lag) + 'equal'] = (df['Formation'+'_lead_'+str(lag)] == df["Formation"]).astype(int)
print("rolling")
#Add rolling features
list_to_roll = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M','RELPOS']
window_size = [5,10,15,20,50]
for w in window_size:
    for val in list_to_roll:
        df[val+'_rollingmean_'+str(w)] = df.groupby("Well Name")[val].apply(
            lambda x: x.rolling(window=w, center=True).mean())
        df[val+'_rollingmax_'+str(w)] = df.groupby("Well Name")[val].apply(
            lambda x: x.rolling(window=w, center=True).max())
        df[val+'_rollingmin_'+str(w)] = df.groupby("Well Name")[val].apply(
            lambda x: x.rolling(window=w, center=True).min())
        df[val+'_rollingstd_'+str(w)] = df.groupby("Well Name")[val].apply(
            lambda x: x.rolling(window=w, center=True).std())
print("special window features for NM_M")
def NM_M_distance(x, how, target):
    # distance (in rows) from each sample to the nearest occurrence of the target NM_M value
    length = len(x)
    rank = np.empty(length)
    count = -1
    NM_M = x["NM_M"].values
    if how == "up":
        order = range(length)
    elif how == "down":
        order = range(length - 1, -1, -1)
    for i in order:
        if (NM_M[i] != target) & (count > -1):
            count += 1
            rank[i] = count  # assign (not +=): np.empty leaves uninitialized values
        elif NM_M[i] == target:
            count = 0
            rank[i] = 0  # distance is zero at the target itself
        else:
            rank[i] = count
    rank = pd.DataFrame(rank.astype(int), columns=["NM_M_Rank_Target_" + str(target) + "_" + how], index=x.index)
    return rank
df["NM_M_Rank_Target_1_up"]=df.groupby(["Well Name"]).apply(NM_M_distance,how="up",target=1)
df["NM_M_Rank_Target_2_up"]=df.groupby(["Well Name"]).apply(NM_M_distance,how="up",target=2)
df["NM_M_Rank_Target_1_down"]=df.groupby(["Well Name"]).apply(NM_M_distance,how="down",target=1)
df["NM_M_Rank_Target_2_down"]=df.groupby(["Well Name"]).apply(NM_M_distance,how="down",target=2)
print("filling na")
df = df.groupby(["Well Name"], as_index=False).apply(lambda group: group.bfill())
df = df.groupby(["Well Name"], as_index=False).apply(lambda group: group.ffill())
df = df.fillna(df.mean())
print("Vectorizing Formation text data")
from sklearn.feature_extraction.text import CountVectorizer
list_formation = ['Formation',
'Formation_lag_1',
'Formation_lead_1',
'Formation_lag_2',
'Formation_lead_2']
for l in list_formation:
    cv = CountVectorizer()
    counts = cv.fit_transform(df[l].values)
    cols = [c + "_" + l for c in cv.get_feature_names()]
    counts = pd.DataFrame(counts.toarray(), columns=cols)
    df = df.drop(l, axis=1)
    df = pd.concat([df, counts], axis=1)
print("Finished preparing data. Now ready for ML ignition!")
```
## Fitting
```
# this time let's use all the training set
groups = df[(df['origin']=='train')]["Well Name"]
ytrain = df[(df['origin']=='train')]['Facies']
yvalid = df[(df['origin']=='test')]['Facies']
xtrain = df[(df['origin']=='train')].drop(['Well Name','origin','Facies'],axis=1)
xvalid = df[(df['origin']=='test')].drop(['Well Name','origin','Facies'],axis=1)
custom_cv = LeavePGroupsOut(n_groups=2)
set(yvalid.values)
clf_rfe = RandomForestClassifier(
n_estimators=100,
criterion="entropy",
class_weight='balanced',
min_samples_leaf=5,
min_samples_split=25,
)
custom_cv_1 = custom_cv.split(xtrain, ytrain, groups)
fs = RFECV(clf_rfe,cv=custom_cv_1,scoring="f1_micro",step=0.1,verbose=2,n_jobs=4)
fs.fit(xtrain, ytrain)
support = fs.support_
feature = pd.Series(xtrain.columns.values)
selected_features = list(feature[support])
print(len(selected_features))
xtrain_fs = xtrain[selected_features].copy()
xvalid_fs = xvalid[selected_features].copy()
rf = RandomForestClassifier(
n_estimators=100,
criterion="entropy",
class_weight='balanced',
min_samples_leaf=5,
min_samples_split=25,
max_features=10,
random_state=42
)
xtc = ExtraTreesClassifier(
n_estimators=100,
criterion="entropy",
class_weight='balanced',
min_samples_leaf=5,
min_samples_split=25,
max_features=10,
random_state=42
)
gbt = GradientBoostingClassifier(
loss='deviance',
n_estimators = 100,
learning_rate = 0.1,
max_depth = 3,
max_features = 10,
min_samples_leaf = 5,
min_samples_split = 25,
random_state = 42,
max_leaf_nodes = None
)
xgb = XGBClassifier(
learning_rate = 0.1,
max_depth = 3,
min_child_weight = 10,
n_estimators = 150,
colsample_bytree = 0.9,
seed = 42
)
custom_cv_2 = list(LeavePGroupsOut(n_groups=2).split(xtrain, ytrain, groups))
stacked = StackedClassifier(clfs = [rf, xtc, gbt, xgb],
level2_learner= LogisticRegression(),
skf = custom_cv_2
)
stacked.fit(xtrain_fs.values, ytrain.values)
```
## Apply to test
```
well_name_valid = df.loc[(df['origin']=='test'),"Well Name"]
preds = stacked.predict_proba(xvalid_fs)
classes = list(set(ytrain))
preds_hard = [classes[i] for i in np.argmax(preds, axis=1)]
well = "CRAWFORD"
depth = xvalid.loc[well_name_valid== well ,"Depth"]
predictions = pd.Series(preds_hard).loc[well_name_valid==well]
plt.plot(depth,predictions)
plt.axis([2950,3175, 1, 9])
plt.grid(b=True, which='major', color='r', linestyle='--')
plt.show()
well = "STUART"
depth = xvalid.loc[well_name_valid== well ,"Depth"]
predictions = pd.Series(preds_hard).loc[well_name_valid==well]
plt.plot(depth,predictions)
plt.axis([2800,3050, 1, 9])
plt.grid(b=True, which='major', color='r', linestyle='--')
plt.show()
xvalid['Facies']=preds_hard
xvalid.to_csv('XmasPreds_6.csv')
```
```
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
from WiSARD import WiSARD
import numpy as np
import pandas as pd
import math
import sys
import random
import matplotlib.pyplot as plt
import itertools
from Utils import thermometer,one_hot
pd.options.mode.chained_assignment = None # default='warn'
import seaborn as sns
color = sns.color_palette()
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
qtd_splits = 10
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn import svm
from time import time
from collections import defaultdict
import itertools
#import xgboost as xgb
model = KeyedVectors.load_word2vec_format('../glove.6B.50d.word2vec.txt')
df = {
"cooking": pd.read_csv('../dataset/processed/cooking.csv'),
"crypto": pd.read_csv('../dataset/processed/crypto.csv'),
"robotics": pd.read_csv('../dataset/processed/robotics.csv'),
"biology": pd.read_csv('../dataset/processed/biology.csv'),
"travel": pd.read_csv('../dataset/processed/travel.csv'),
"diy": pd.read_csv('../dataset/processed/diy.csv'),
#"physics": pd.read_csv('physics.csv'),
}
X = []
y = []
X_thermometer = []
X_onehot = []
for file in df:
    for i in range(df[file].shape[0]):
        doc = ''
        if type(df[file].iloc[i]['title']) is str:
            doc += df[file].iloc[i]['title'] + ' '
        if type(df[file].iloc[i]['content']) is str:
            doc += df[file].iloc[i]['content'] + ' '
        v = np.array([0] * 50)
        w = doc.split(' ')
        for j in w:
            if j in model:
                v = np.add(v, model[j])
        X.append(v)
        v = v / np.linalg.norm(v)  # normalized
        X_thermometer.append(thermometer(v, n=50))
        X_onehot.append(one_hot(v, n=50))
        y.append(file)
    #f = pd.DataFrame(X)
    #f.to_csv("../dataset/word2vec/" + file + ".csv", index=False)
l_enc = LabelEncoder()
y_enc = l_enc.fit_transform(y)
print('Encoded labels: ', list([(i, l_enc.classes_[i]) for i in range(0, len(l_enc.classes_))]))
```
## Functions
```
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.figure(figsize=(12, 6))
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        plt.title(title + " normalized confusion matrix")
    else:
        plt.title(title + ' confusion matrix, without normalization')
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    #print(cm)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True class')
    plt.xlabel('Predicted class')
    plt.show()
def benchmark(clf, X_train, y_train, X_test, y_test):
    print("Training: ")
    print(clf)
    t0 = time()
    clf.fit(X_train, y_train)
    train_time = time() - t0
    print("train time: %0.3fs" % train_time)
    t0 = time()
    pred = clf.predict(X_test)
    test_time = time() - t0
    print("test time: %0.3fs" % test_time)
    score = accuracy_score(y_test, pred)
    print("accuracy: %0.3f" % score)
    cm = confusion_matrix(y_test, pred)
    clf_descr = str(clf).split('(')[0]
    print("Done with " + clf_descr)
    print('_' * 80)
    return clf_descr, score, train_time, test_time, cm
def show_results(cf, name):
    print(name)
    print(cf)
    print('\n')
    print('Mean accuracy: ' + str((np.array(cf['score'])).mean()) + ' +/- ' + str((np.array(cf['score'])).std()))
    print('Mean train time: ' + str((np.array(cf['train_time'])).mean()) + ' +/- ' + str((np.array(cf['train_time'])).std()))
    print('Mean test time: ' + str((np.array(cf['test_time'])).mean()) + ' +/- ' + str((np.array(cf['test_time'])).std()))
    print('Mean confusion matrix:')
    t = [np.matrix(x) for x in cf['confusion_matrix']]
    su = np.matrix(t[0])
    for i in range(1, len(t)):
        su += t[i]
    plot_confusion_matrix(np.squeeze(np.asarray(su)), l_enc.classes_, title=name, normalize=True)
    print('_' * 80)
```
## Kfold
```
X_train = np.array(X)
y_train = np.array(y_enc)
kf = StratifiedKFold(n_splits=10)
#results = defaultdict(list)
resultsGaussianNB = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
resultsBernoulliNB = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
resultsRandomForest = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
resultsSVM = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
resultsWisardThermometer = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
resultsWisardOneHot = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
for train, test in kf.split(X_train, y_train):
    clfr = benchmark(GaussianNB(), X_train[train], y_train[train], X_train[test], y_train[test])
    resultsGaussianNB['score'].append(clfr[1])
    resultsGaussianNB['train_time'].append(clfr[2])
    resultsGaussianNB['test_time'].append(clfr[3])
    resultsGaussianNB['confusion_matrix'].append(clfr[4])

for train, test in kf.split(X_train, y_train):
    clfr = benchmark(BernoulliNB(), X_train[train], y_train[train], X_train[test], y_train[test])
    resultsBernoulliNB['score'].append(clfr[1])
    resultsBernoulliNB['train_time'].append(clfr[2])
    resultsBernoulliNB['test_time'].append(clfr[3])
    resultsBernoulliNB['confusion_matrix'].append(clfr[4])

for train, test in kf.split(X_train, y_train):
    clfr = benchmark(RandomForestClassifier(), X_train[train], y_train[train], X_train[test], y_train[test])
    resultsRandomForest['score'].append(clfr[1])
    resultsRandomForest['train_time'].append(clfr[2])
    resultsRandomForest['test_time'].append(clfr[3])
    resultsRandomForest['confusion_matrix'].append(clfr[4])

for train, test in kf.split(X_train, y_train):
    clfr = benchmark(svm.LinearSVC(), X_train[train], y_train[train], X_train[test], y_train[test])
    resultsSVM['score'].append(clfr[1])
    resultsSVM['train_time'].append(clfr[2])
    resultsSVM['test_time'].append(clfr[3])
    resultsSVM['confusion_matrix'].append(clfr[4])
print('All results')
print('\n')
show_results(resultsGaussianNB, 'Naïve-Bayes Gaussian')
show_results(resultsBernoulliNB, 'Naïve-Bayes Bernoulli')
show_results(resultsRandomForest, 'Random Forest')
show_results(resultsSVM, 'SVM')
X_train = np.array(X)
y_train = np.array(y_enc)
kf = StratifiedKFold(n_splits=10)
resultsWisardOneHot = { 'score': [], 'train_time': [], 'test_time': [], 'confusion_matrix': [] }
X_train = np.array(X_onehot)
for train, test in kf.split(X_train, y_train):
    clfr = benchmark(WiSARD(4, seed=random.randint(0, 2**32-1), ignore_zero_addr=True), X_train[train], y_train[train], X_train[test], y_train[test])
    resultsWisardOneHot['score'].append(clfr[1])
    resultsWisardOneHot['train_time'].append(clfr[2])
    resultsWisardOneHot['test_time'].append(clfr[3])
    resultsWisardOneHot['confusion_matrix'].append(clfr[4])
#show_results(resultsWisardThermometer, 'WiSARD with Thermometer')
show_results(resultsWisardOneHot, 'WiSARD with One Hot')
```
<h1>Hi, welcome to Introduction to Python. This notebook is not a tutorial but a sort of reference book: it mostly points to correct and official resources you should refer to when learning.</h1>
In part 1 of this tutorial we will go through:
1. Basics of python programming (operators, print, how to use collab/jupyter, etc)
2. Variable & Datatypes
3. Conditional statements, loops
4. Functions, Classes
```
print("welcome to BCS")
```
So, Colab is fairly simple. [Colab](https://colab.research.google.com/notebooks/basic_features_overview.ipynb) allows anybody to write and execute Python code through the browser itself. There is no need to install any library, create an environment, etc.
---
If you want to use your local machine to learn python, there are multiple ways to do so.
1. PyCharm
* Install the latest version of [Python](https://www.python.org/downloads/) and add it to the [Windows path](https://www.geeksforgeeks.org/how-to-add-python-to-windows-path/), if it is not added by default. If you get stuck somewhere, Google it, or contact our secys on the BCS [discord](https://discord.gg/HvgePBAT) server.
* Check whether Python is properly installed. To do this, open cmd and type python; you should see something like this:

* If your installation is proper, you can download the [PyCharm](https://www.jetbrains.com/pycharm/download/#section=windows) community version. Follow these [instructions](https://www.jetbrains.com/help/pycharm/installation-guide.html) for installation and you will be good to go. VS Code works just fine too, but since we will be working with notebooks mostly, I recommend using Jupyter.
2. Anaconda
* Download [Anaconda](https://www.anaconda.com/products/individual) and follow the [instructions](https://docs.anaconda.com/anaconda/install/windows/).
* Always create [ENVIRONMENTS](https://towardsdatascience.com/a-guide-to-conda-environments-bc6180fc533). Use pip to install libraries in that environment.
* Anaconda is not an IDE; it is a distribution. It ships with its own IDE, Spyder, which is fairly good.
* If you get stuck or can't follow, watch [this](https://www.youtube.com/watch?v=5mDYijMfSzs) and [this](https://www.youtube.com/watch?v=mIB7IZFCE_k)
```
2+2
```
Okay, so let's try basic arithmetic first: create a cell, type 4+5, and run the cell. You will see 9 in the output. With this, we commence our first Python topic: ***Operators***
Operators are special symbols in Python that carry out arithmetic or logical computation. In the cell above, '+' is the operator for addition; similarly, '-' is the operator for subtraction. [Here](https://www.w3schools.com/python/python_operators.asp) is the list of all operators. You don't have to remember them; Google them whenever needed. Now let's write a program for adding two numbers, and along the way I will introduce you to the syntax, the print statement, and much more.
```
num1 = 2
num2 = 3
#create new variable to store the sum of number1 & number2
sum = num1 + num2
#print the sum
print('The sum of {0} and {1} is {2}'.format(num1, num2, sum))
```
num1, num2 & sum are the variables that store the values 2, 3 & 2+3 respectively, so if we want to find the sum of 4 & 5 we only have to change:
num1 = 4
num2 = 5
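As a side note, newer Python versions (3.6+) also support f-strings, which many find more readable than `.format` (a small sketch of the same program; `total` is just a variable name chosen here to avoid shadowing the built-in `sum`):

```python
num1 = 4
num2 = 5
total = num1 + num2  # 'total' avoids shadowing the built-in sum()
print(f'The sum of {num1} and {num2} is {total}')
```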
---
# What is the need of variable?
Variables are used to store information to be referenced and manipulated in a computer program. They also provide a way of labeling data with a descriptive name, so our programs can be understood more clearly by the reader and by ourselves. When you write bigger programs, you have the luxury of changing a single variable to change the entire behavior of the code; you don't have to go and change the value everywhere.
---
# Print statement
print allows you to show output on the screen. Follow the next code block for more on this.
```
2+3
#running this cell will give you 5
2+3
print("Python is ezz")
#but running this will not give you 5; it will only read 'Python is ezz'. That is because a notebook
#displays only the value of the last expression in a cell, and here the last statement is a print call.
#but
print(2+3)
print("Python is ezz")
```
<h1> Variables, Data Types, Conditional statements, Loops, Functions & Classes</h1>
---
---
This tutorial is targeted at those who are not familiar with Python or with the use of Python for scientific computing.
## Hello World Example
```
# This is a comment
print("Hello World")
```
## Variables and basic data types
Like other programming languages, Python has basic data types such as integers, floats, strings, etc. Unlike in C/C++, you don't need any declaration of variables or of their type; you can start using them straight away. The examples given below will make this clear.
### Numbers
```
# Integers
x = 2 # This will create new integer variable 'x' with value as 2
print(type(x))
print("x is:",x,"and x + 1 is:", x+1) # multiple print statements and addition of two integers
print("x multiply by 3 is:",x*3, "and x^3 is:", x**3) # multiplication and power of
x += 1 # Increment statement
print(x)
# Floats
x = 2.0 # This how you create a new float variable
print(type(x))
print("x is:",x,"and x + 1 is:", x+1)
print("x multiply by 3 is:",x*3, "and x^3 is:", x**3)
x += 1
print(x)
# Type Conversion
x = 2
print(x, type(x))
x += 1.0 # This will convert the integer data type to float data type
print(x, type(x))
```
### Strings
```
x = "bcs_iitk"
print(type(x))
print(x)
print(len(x))
y = "workshop april 2020"
z = x + " " + y # This is how you concatenate strings
print(z)
```
### Booleans
```
x = True
y = False
print(type(x))
print(x and y)
print(x or y)
print(not x)
```
## Lists and Dictionaries
### Lists
Lists are containers that can hold several elements or values under one variable name. They are quite similar to arrays in C/C++, but with additional features.
#### Creating lists
```
x = [5, 1, 8] # Creating a list of 3 elements
print(x)
# Calling individual element
print("1st element:",x[0],"and 1st last element:", x[-1])
# change value
x[1] = 2
print("Change Value:",x)
# Add a new element
x.append(4)
print("new element:",x)
# Removing any element at specified index
print("Removed element:",x.pop(3), "| Left out list:",x)
y = [] # Initialising an empty list
print(y)
y.append(x.pop(0))
print("y:",y, "| x:",x)
```
#### Slicing
In python you can easily get the sublists using slicing.
```
x = [*range(10)] # short trick to create a list containing integers from 0 to 9
y = [*range(2,10,2)]
print(x)
print(y)
print(x[:5]) # from starting index to 4
print(x[5:]) # from 5 to last index
print(x[3:8]) # from 3 to 7
x[:3] = [*range(3,6)] # changing value of a slice
print(x)
```
### Dictionaries
Dictionaries are variables that store key-value pairs.
```
# 'Name', 'Branch' and 'Roll' are keys || 'Shashi', 'Electrical' and 160645 are their respective values
student_1 = {'Name': 'Shashi', 'Branch': 'Electrical', 'Roll': 160645}
print(student_1)
print(student_1['Name'])
print(student_1['Roll'])
# Adding new key-value pair
student_1['CPI'] = 'NA' # :-P
print(student_1)
```
## Conditional Statements (if-else)
Conditional statements are used for decision making. Unlike C/C++, Python has no switch-case statement (although Python 3.10 later added a similar `match` statement).
```
x = 1
if x == 1:
    print("x is 1")
else:
    print("x is not 1")

x = 2
if x == 1:
    print("x is 1")
else:
    print("x is not 1")

x = 2
if x == 1:
    print("x is 1")
elif x == 2:
    print("x is 2")
else:
    print("x is neither 1 nor 2")

x = 3
if x == 1:
    print("x is 1")
elif x == 2:
    print("x is 2")
else:
    print("x is neither 1 nor 2")
```
## Loops
Loops are used to run the same block of code multiple times.
```
# general loop
for i in range(5):
    print(i)
print('------')
for i in range(1, 8, 2):
    print(i)

# Looping over a list
x = [2, 5, 6]
for i in x:
    print(i)

# list comprehension
x = [*range(5)]
x_sq = [i ** 2 for i in x]
print(x)
print(x_sq)

# while loop
x = 1
while x < 5:
    print(x)
    x += 1
```
## Functions
Defined simply using the ```def``` keyword. A function can return as many values as you want, and can take optional keyword arguments as well.
```
# simple python function
def poly(poly_idx, x):
    # evaluate a polynomial with coefficients poly_idx (highest power first) at x
    value = 0
    l = len(poly_idx) - 1
    for i in range(l + 1):
        value += poly_idx[i] * (x ** (l - i))
    return value

print("x^2 + x + 1 at x = 0 is:", poly([1, 1, 1], 0))
print("x^2 + x + 1 at x = 1 is:", poly([1, 1, 1], 1))
print("x^2 + x + 1 at x = 2 is:", poly([1, 1, 1], 2))
print("2x^2 + x + 1 at x = 1 is:", poly([2, 1, 1], 1))

# function which can return multiple values
def get_data():
    return 1, 2, 3

x, y, z = get_data()
print(x, y, z)

# function with an optional keyword argument
def greet(to_whom, greet_type="Hi"):
    print(greet_type, to_whom + "!")

greet("Shashi")
greet("Shashi", greet_type="Bye")
```
## Classes
```
class Person():
    def __init__(self, name, roll):
        self.name = name
        self.roll = roll

    def get_roll(self):
        return self.roll

    def greet(self, say_hi_bye="Hi"):
        print(say_hi_bye, self.name)

p1 = Person(name="Shashi", roll=160645)
print(p1.get_roll())
p1.greet()
p1.greet(say_hi_bye="Bye")
```
<h1>In part 2 we will study Matplotlib, Pandas, Numpy and Scipy. We will refer to some awesome online resources, as they are used by everyone and are well trusted. You can also go and find some YouTube videos yourself and maybe share them with the rest of the attendees on our Discord, but the ones mentioned here are my personal favourites.</h1>
Numpy - [1](https://numpy.org/doc/stable/user/whatisnumpy.html) & [2](https://cs231n.github.io/python-numpy-tutorial/)
<br>Pandas - [1](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html)(this is a community guide, choose one you feel you are comfortable with)
<br>Matplotlib - [1](https://matplotlib.org/stable/tutorials/index.html#introductory) [2](https://scipy-lectures.org/intro/matplotlib/index.html)
<br>Scipy - [1](https://docs.scipy.org/doc/scipy/tutorial/index.html)
<br>some other good material that covers everything briefly - [Scipy Lecture Notes](http://scipy-lectures.org/index.html) & [6.86x - Introduction to ML Packages](https://github.com/Varal7/ml-tutorial/blob/master/Part1.ipynb)
# scikit-learn Cookbook
This cookbook contains recipes for some common applications of machine learning. You'll need a working knowledge of [pandas](http://pandas.pydata.org/), [matplotlib](http://matplotlib.org/), [numpy](http://www.numpy.org/), and, of course, [scikit-learn](http://scikit-learn.org/stable/) to benefit from it.
```
# <help:cookbook_setup>
%matplotlib inline
```
## Training with k-Fold Cross-Validation
This recipe repeatedly trains a [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) classifier over different subsets (folds) of sample data. It attempts to match the percentage of each class in every fold to its percentage in the overall dataset ([stratification](http://en.wikipedia.org/wiki/Stratified_sampling)). It evaluates each model against its test fold and collects the per-fold confusion matrices, which are then summed into a single overall view.
This recipe defaults to using the [Iris data set](http://en.wikipedia.org/wiki/Iris_flower_data_set). To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to the instance classes as human readable names.
```
# <help:scikit_cross_validation>
import warnings
warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them
import pandas
import sklearn
import sklearn.datasets
import sklearn.metrics as metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold  # sklearn.cross_validation was removed in modern scikit-learn

# load the iris dataset
dataset = sklearn.datasets.load_iris()

# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names
labels

# <help:scikit_cross_validation>
# use log reg classifier
clf = LogisticRegression()

cms = {}
scores = []
cv = StratifiedKFold(n_splits=10)
for i, (train, test) in enumerate(cv.split(X, y)):
    # train then immediately predict the test set
    y_pred = clf.fit(X[train], y[train]).predict(X[test])
    # compute the confusion matrix on this fold, convert it to a DataFrame and stash it for later
    cms[i] = pandas.DataFrame(metrics.confusion_matrix(y[test], y_pred), columns=labels, index=labels)
    # stash the overall accuracy on the test set for the fold too
    scores.append(metrics.accuracy_score(y[test], y_pred))

# sum the per-fold confusion matrices to get one view of how well the classifier performs
# (pandas.Panel has been removed from pandas, so we sum the DataFrames directly)
cm = sum(cms.values())
cm
# <help:scikit_cross_validation>
# accuracy predicting the test set for each fold
scores
```
## Principal Component Analysis Plots
This recipe performs a [PCA](http://en.wikipedia.org/wiki/Principal_component_analysis) and plots the data against the first two principal components in a scatter plot. It then prints the [eigenvalues and eigenvectors of the covariance matrix](http://www.quora.com/What-is-an-eigenvector-of-a-covariance-matrix) and finally prints the precentage of total variance explained by each component.
This recipe defaults to using the [Iris data set](http://en.wikipedia.org/wiki/Iris_flower_data_set). To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to human-readable names of the classes.
```
# <help:scikit_pca>
import warnings
warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them
from __future__ import division
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.metrics as metrics
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# load the iris dataset
dataset = sklearn.datasets.load_iris()
# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names
# <help:scikit_pca>
# define the number of components to compute; recommend n_components < n_features
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
# plot the first two principal components
fig, ax = plt.subplots()
plt.scatter(X_pca[:,0], X_pca[:,1])
plt.grid()
plt.title('PCA of the dataset')
ax.set_xlabel('Component #1')
ax.set_ylabel('Component #2')
plt.show()
# <help:scikit_pca>
# eigendecomposition on the covariance matrix
cov_mat = np.cov(X_pca.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
# <help:scikit_pca>
# prints the percentage of overall variance explained by each component
print(pca.explained_variance_ratio_)
```
## K-Means Clustering Plots
This recipe performs a [K-means clustering](http://en.wikipedia.org/wiki/K-means_clustering) `k=1..n` times. It prints and plots the within-cluster sum of squares error for each `k` (i.e., inertia) as an indicator of which value of `k` might be appropriate for the given dataset.
This recipe defaults to using the [Iris data set](http://en.wikipedia.org/wiki/Iris_flower_data_set). To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to human-readable names of the classes. To change the number of clusters tried, modify `n`.
```
# <help:scikit_k_means_cluster>
import warnings
warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them
from time import time
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
from sklearn.cluster import KMeans
# load datasets and assign data and features
dataset = sklearn.datasets.load_iris()
# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
# set the (exclusive) upper bound on the number of clusters to try
n = 6
inertia = [np.NaN]
# perform k-means clustering for k = 1..n-1
for k in range(1,n):
k_means_ = KMeans(n_clusters=k)
k_means_.fit(X)
print('k = %d, inertia= %f' % (k, k_means_.inertia_ ))
inertia.append(k_means_.inertia_)
# plot the SSE (inertia) of the clusters for each value of k
ax = plt.subplot(111)
ax.plot(inertia, '-o')
plt.xticks(range(n))
plt.title("Inertia")
ax.set_ylabel('Inertia')
ax.set_xlabel('# Clusters')
plt.show()
```
## SVM Classifier Hyperparameter Tuning with Grid Search
This recipe performs a [grid search](http://en.wikipedia.org/wiki/Hyperparameter_optimization) for the best settings for a [support vector machine](http://en.wikipedia.org/wiki/Support_vector_machine), predicting the class of each flower in the dataset. It splits the dataset into training and test instances once.
This recipe defaults to using the [Iris data set](http://en.wikipedia.org/wiki/Iris_flower_data_set). To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to human-readable names of the classes. Modify `parameters` to change the grid search space or the `scoring='accuracy'` value to optimize a different metric for the classifier (e.g., precision, recall).
```
#<help_scikit_grid_search>
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.metrics as metrics
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
# load datasets and features
dataset = sklearn.datasets.load_iris()
# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names
# separate datasets into training and test datasets once, no folding
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#<help_scikit_grid_search>
#define the parameter dictionary with the kernels of SVCs
parameters = [
{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4, 1e-2], 'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]},
{'kernel': ['poly'], 'degree': [1, 3, 5], 'C': [1, 10, 100, 1000]}
]
# find the best parameters to optimize accuracy
svc_clf = SVC(C=1, probability= True)
clf = GridSearchCV(svc_clf, parameters, cv=5, scoring='accuracy') #5 folds
clf.fit(X_train, y_train) #train the model
print("Best parameters found from SVM's:")
print(clf.best_params_)
print("Best score found from SVM's:")
print(clf.best_score_)
```
## Plot ROC Curves
This recipe plots the [receiver operating characteristic (ROC) curve](http://en.wikipedia.org/wiki/Receiver_operating_characteristic) for an [SVM classifier](http://en.wikipedia.org/wiki/Support_vector_machine) trained over the given dataset.
This recipe defaults to using the [Iris data set](http://en.wikipedia.org/wiki/Iris_flower_data_set) which has three classes. The recipe uses a [one-vs-the-rest strategy](http://scikit-learn.org/stable/modules/multiclass.html#one-vs-the-rest) to create the [binary classifications](http://en.wikipedia.org/wiki/Binary_classification) appropriate for ROC plotting. To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to human-readable names of the classes.
Note that the recipe adds noise to the iris features to make the ROC plots more realistic. Otherwise, the classification is nearly perfect and the plot hard to study. **Remove the noise generator if you use your own data!**
```
# <help:scikit_roc>
import warnings
warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.metrics as metrics
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
# load the iris dataset
dataset = sklearn.datasets.load_iris()
X = dataset.data
# binarize the output for binary classification
y = label_binarize(dataset.target, classes=[0, 1, 2])
labels = dataset.target_names
# <help:scikit_roc>
# add noise to the features so the plot is less ideal
# REMOVE ME if you use your own dataset!
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# <help:scikit_roc>
# split data for cross-validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# classify instances into more than two classes, one vs rest
# add param to create probabilities to determine Y or N as the classification
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True))
# fit estimators and return the distance of each sample from the decision boundary
y_score = clf.fit(X_train, y_train).decision_function(X_test)
# <help:scikit_roc>
# plot the ROC curve, best for it to be in top left corner
plt.figure(figsize=(10,5))
plt.plot([0, 1], [0, 1], 'k--') # add a straight line representing a random model
for i, label in enumerate(labels):
# false positive and true positive rate for each class
fpr, tpr, _ = metrics.roc_curve(y_test[:, i], y_score[:, i])
# area under the curve (auc) for each class
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, label='ROC curve of {0} (area = {1:0.2f})'.format(label, roc_auc))
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.title('Receiver Operating Characteristic for Iris data set')
plt.xlabel('False Positive Rate') # 1- specificity
plt.ylabel('True Positive Rate') # sensitivity
plt.legend(loc="lower right")
plt.show()
```
## Build a Transformation and Classification Pipeline
This recipe builds a [transformation and training pipeline](http://scikit-learn.org/stable/modules/pipeline.html) for a model that can classify a snippet of text as belonging to one of 20 [USENET](http://en.wikipedia.org/wiki/Usenet) [newgroups](http://en.wikipedia.org/wiki/Usenet_newsgroup). It then prints the [precision, recall, and F1-score](http://en.wikipedia.org/wiki/Precision_and_recall) for predictions over a held-out test set as well as the confusion matrix.
This recipe defaults to using the [20 USENET newsgroup](http://kdd.ics.uci.edu/databases/20newsgroups/20newsgroups.html) dataset. To use your own data, set `X` to your instance feature vectors, `y` to the instance classes as a factor, and `labels` to human-readable names of the classes. Then modify the pipeline components to perform appropriate transformations for your data.
<div class="alert alert-block alert-warning" style="margin-top: 20px">**Warning:** Running this recipe with the sample data may consume a significant amount of memory.</div>
```
# <help:scikit_pipeline>
import pandas
import sklearn.metrics as metrics
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_20newsgroups
# download the newsgroup dataset
dataset = fetch_20newsgroups(subset='all')
# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names
labels
# <help:scikit_pipeline>
# split data holding out 30% for testing the classifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# pipelines concatenate functions serially, output of 1 becomes input of 2
clf = Pipeline([
('vect', HashingVectorizer(analyzer='word', ngram_range=(1,3))), # count frequency of words, using hashing trick
('tfidf', TfidfTransformer()), # transform counts to tf-idf values,
    ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, max_iter=5))
])
# <help:scikit_pipeline>
# train the model and predict the test set
y_pred = clf.fit(X_train, y_train).predict(X_test)
# standard information retrieval metrics
print(metrics.classification_report(y_test, y_pred, target_names=labels))
# <help:scikit_pipeline>
# show the confusion matrix in a labeled dataframe for ease of viewing
index_labels = ['{} {}'.format(i, l) for i, l in enumerate(labels)]
pandas.DataFrame(metrics.confusion_matrix(y_test,y_pred), index=index_labels)
```
<div class="alert" style="border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);">
<div class="row">
<div class="col-sm-1"><img src="https://knowledgeanyhow.org/static/images/favicon_32x32.png" style="margin-top: -6px"/></div>
<div class="col-sm-11">This notebook was created using <a href="https://knowledgeanyhow.org">IBM Knowledge Anyhow Workbench</a>. To learn more, visit us at <a href="https://knowledgeanyhow.org">https://knowledgeanyhow.org</a>.</div>
</div>
</div>
## Write Your Own Frontend Parser for `utensor_cgen`
- Goal: write a parser which can parse a txt file
- the format of the txt file
```
<op_name> <value>
```
- To simplify the tutorial, we only support the `Const` operator
- the `value` field is a python expression, such as `[1, 2, 3]`
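As a quick illustration, here is a minimal sketch that writes such a txt model file (the `models/` directory and the two constant names are illustrative choices; the file name matches the one parsed later in this tutorial) and splits one line the same way the parser will:

```python
import os

# a minimal example model file in the "<op_name> <value>" format described above
os.makedirs('models', exist_ok=True)
lines = [
    'x [1, 2, 3]',
    'w [[1.0, 2.0], [3.0, 4.0]]',
]
with open('models/consts_model.txt', 'w') as fid:
    fid.write('\n'.join(lines) + '\n')

# each line splits into an op name and a python expression for its value
op_name, value = lines[0].split(' ', maxsplit=1)
print(op_name, eval(value))  # x [1, 2, 3]
```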
```
import os
import numpy as np
from utensor_cgen.frontend import FrontendSelector, Parser
from utensor_cgen.ir import OperationInfo, TensorInfo, uTensorGraph
from utensor_cgen.ir.converter import (AttrValueConverter,
GenericTensorConverterMixin)
from utensor_cgen.utils import topologic_order_graph
```
## The `Parser` Interface
- must override the `parse` method with signature `parse(model_file, output_nodes, *args, **kwargs)`
    - that is, the first argument must be the model file and the second argument must be the output nodes
    - in some parsers, `output_nodes` can be optional; it is recommended to default it to `None`
- Register the parser with `FrontendSelector.register`
- the first argument of `FrontendSelector.register` should be a list of file extensions, such as `['.pb', '.pbtxt']`.
- In this tutorial, our target file is txt so the target extensions list is `['.txt']`
- If you register a parser for a file extension that has already been registered with another parser, an error will be raised. To disable this check, explicitly pass `overwrite=True` to the register decorator
```
@FrontendSelector.register(['.txt'], overwrite=True)
class TxtParser(Parser):
def parse(self, txt_file, output_nodes=None):
graph_name, _ = os.path.splitext(
os.path.basename(txt_file)
)
if output_nodes is None:
output_nodes = []
add_all_nodes = not output_nodes
ugraph = uTensorGraph(name=graph_name, output_nodes=output_nodes, lib_name='txtlib')
with open(txt_file, 'r') as fid:
# read lines
for line in fid:
try:
op_name, value = line.split(' ', maxsplit=1)
except Exception:
raise ValueError('invalid line: {}'.format(line))
value = np.array(eval(value))
# construct tensors
out_tensor = TensorInfo(
'{}:0'.format(op_name),
op_name,
dtype=value.dtype,
shape=list(value.shape),
ugraph=ugraph
)
# construct ops
op_info = OperationInfo(
name=op_name,
lib_name='txtlib',
ugraph=ugraph,
input_tensors=[],
output_tensors=[out_tensor],
op_type='Const',
op_attr={
"value": AttrValueConverter.GenericType(
value_name="tensor",
value=GenericTensorConverterMixin.GenericType(
np_array=value
),
)
}
)
if add_all_nodes:
ugraph.output_nodes.append(op_name)
# topologically sort the graph
# this will update `ugraph.topo_order`
topologic_order_graph(ugraph)
return ugraph
parser = TxtParser({})
ugraph = parser.parse('models/consts_model.txt')
ugraph
# simple visualization
from utensor_cgen.ir.misc.graph_viz import viz_graph
viz_graph(ugraph)
```
# Freezing parameters
In this example, we demonstrate how the `filter_spec` argument of `equinox.filter_value_and_grad` can be used -- in this case, to only train some parameters and freeze the rest.
```
import functools as ft
import jax
import jax.numpy as jnp
import jax.random as jrandom
import optax # https://github.com/deepmind/optax
import equinox as eqx
# Toy data
def get_data(dataset_size, *, key):
x = jrandom.normal(key, (dataset_size, 1))
y = 5 * x - 2
return x, y
# Toy dataloader
def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
end = batch_size
while end < dataset_size:
batch_perm = perm[start:end]
yield tuple(array[batch_perm] for array in arrays)
start = end
end = start + batch_size
```
Here, we:
1. Set up a model. In this case, an MLP.
2. Set up a `filter_spec`. This will be a PyTree of the same structure as the model, with `False` on every leaf -- except for the leaves corresponding to the final layer, which we set to `True`.
3. Specify how to make a step. In this case we'll specify that we're still going to JIT with respect to every array, but we're only going to differentiate the ones specified by `filter_spec`.
```
def main(
dataset_size=10000,
batch_size=256,
learning_rate=3e-3,
steps=1000,
width_size=8,
depth=1,
seed=5678,
):
data_key, loader_key, model_key = jrandom.split(jrandom.PRNGKey(seed), 3)
data = get_data(dataset_size, key=data_key)
data_iter = dataloader(data, batch_size, key=loader_key)
# Step 1
model = eqx.nn.MLP(
in_size=1, out_size=1, width_size=width_size, depth=depth, key=model_key
)
# Step 2
filter_spec = jax.tree_map(lambda _: False, model)
filter_spec = eqx.tree_at(
lambda tree: (tree.layers[-1].weight, tree.layers[-1].bias),
filter_spec,
replace=(True, True),
)
# Step 3
@eqx.filter_jit
@ft.partial(eqx.filter_value_and_grad, filter_spec=filter_spec)
def make_step(model, x, y):
pred_y = jax.vmap(model)(x)
return jnp.mean((y - pred_y) ** 2)
# And now let's train for a short while -- in exactly the usual way -- and see what
# happens. We keep the original model around to compare to later.
original_model = model
optim = optax.sgd(learning_rate)
opt_state = optim.init(model)
for step, (x, y) in zip(range(steps), data_iter):
value, grads = make_step(model, x, y)
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
print(
f"Parameters of first layer at initialisation:\n{jax.tree_leaves(original_model.layers[0])}\n"
)
print(
f"Parameters of first layer at end of training:\n{jax.tree_leaves(model.layers[0])}\n"
)
print(
f"Parameters of last layer at initialisation:\n{jax.tree_leaves(original_model.layers[-1])}\n"
)
print(
f"Parameters of last layer at end of training:\n{jax.tree_leaves(model.layers[-1])}\n"
)
```
As we'll see, the parameters of the first layer remain unchanged throughout training. Just the parameters of the last layer are trained.
```
main()
```
```
%load_ext lab_black
# nb_black if running in jupyter
# hide
# from your_lib.core import *
# from ml-project-template.data import *
```
# Your project name
> One-sentence description of your project.
## About
Describe your project in a general level. What problem does it solve?
## Contents
Briefly describe the contents of your repository
## How to Install
Describe how to install your code. Be very thorough and include every single step of the process.
## How to Use / API
Describe how to use your code. Give code examples.
## Update Plan
How is your model and data kept up to date?
## Ethical Aspects
Can you recognize ethical issues with your ML project?
Is there a risk for bias, discrimination, violation of privacy or conflict with the local or global laws?
Could your results or algorithms be misused for malicious acts?
Can data or model updates include bias in your model?
How have you tackled these issues in your implementation?
## Contributing
> NOTE: Edit the hyperlink below to point to the CONTRIBUTING.md file of your repository
See [here](https://github.com/City-of-Helsinki/ml_project_template/blob/master/CONTRIBUTING.md) on how to contribute to this project.
## How to Cite this Work (optional)
If you are doing a research project, you can add bibtex and other citation templates here.
You can also get a doi for your code by adding it to a code archive,
so your code can be cited directly!
To cite this work, use:
@misc{authoryearfirstwordinheader,
title = ...
...
}
## Copyright
> NOTE: Edit the year and author below according to your project!
Copyright 2021 City-of-Helsinki. Licensed under the Apache License, Version 2.0 (the "License");
you may not use this project's files except in compliance with the License.
A copy of the License is provided in the LICENSE file in this repository.
> NOTE: If you are using this template for other than city of Helsinki projects, remove the Helsinki logo files `favicon.ico` and `company_logo.png` from `docs/assets/images/`.
# to remove the Helsinki logo and favicon:
git rm docs/assets/images/favicon.ico docs/assets/images/company_logo.png
git commit -m "removed Helsinki logo and favicon"
The Helsinki logo is a registered trademark, and may only be used by the city of Helsinki.
This project was built using [nbdev](https://nbdev.fast.ai/) on top of the city of Helsinki [ml_project_template](https://github.com/City-of-Helsinki/ml_project_template).
<!--HEADER-->
[*Lecture guide for the Mathematical Modeling course*](https://github.com/rmsrosa/modelagem_matematica) *at* [*IM-UFRJ*](https://www.im.ufrj.br).
<!--NAVIGATOR-->
<a href="https://colab.research.google.com/github/rmsrosa/modelagem_matematica/blob/modmat2019p1/aulas/01.00-Aula1.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
<a href="https://mybinder.org/v2/gh/rmsrosa/modelagem_matematica/modmat2019p1?filepath=aulas/01.00-Aula1.ipynb"><img align="left" src="https://mybinder.org/badge.svg" alt="Open in binder" title="Open and Execute in Binder"></a>
<a href="https://nbviewer.jupyter.org/github/rmsrosa/modelagem_matematica/blob/modmat2019p1/aulas/01.00-Aula1.slides.html"><img align="left" src="https://rmsrosa.github.io/jupyterbookmaker/badges/slides_badge.svg" alt="Open slides" title="Open and View Slides"></a>
[<- Home Page](00.00-Pagina_Inicial.ipynb) | [Home Page](00.00-Pagina_Inicial.ipynb) | [Lecture 2: Analyzing the oscillation of a pendulum ->](02.00-Aula2.ipynb)
---
# Lecture 1: Reviewing Python, Jupyter, least squares, linear regression, and parameter fitting
**Date:** ---
**Location:** Google Classroom
**Instructors:**
- Adriano Côrtes
    - *email:* <adriano.cortes@matematica.ufrj.br>
- Alejandro Cabrera
    - *office:* C–125A
    - *email:* <alejandro@matematica.ufrj.br>
    - *homepage:* http://www.dma.im.ufrj.br/~acabrera/
- Ricardo Rosa
    - *office:* C-113B
    - *email:* <rrosa@im.ufrj.br>
    - *homepage:* http://www.dma.im.ufrj.br/~rrosa
## Course material on GitHub
Available in the repository [rmsrosa/modelagem_matematica](https://www.github.com/rmsrosa/modelagem_matematica)
- ~"Download" the entire repository, with all the Jupyter notebooks;~
- ~"Download" each notebook individually;~
- Interact with the notebooks locally via `jupyter lab` or `jupyter notebook`;
- ~View the notebooks directly on GitHub;~
- View and interact with the notebooks in the cloud:
    - [Binder](https://mybinder.org); or
    - [Google Colab](https://colab.research.google.com/notebooks/welcome.ipynb).
- Other references:
    - G. Ledder, "Mathematics for the Life Sciences", Springer, 2013. Chapter 2
## Assessment
- Problem sets
- Exercises in the notebooks
- Final project
## The Jupyter environment
There are several similar environments:
- [Jupyter lab](https://jupyterlab.readthedocs.io/en/stable/): runs locally and depends on the Python environment installed on the machine and on the packages available in that environment
- [Jupyter notebook](https://jupyter.org): similar to Jupyter lab
- [Binder](https://mybinder.org): creates a customized environment each time it is launched, with only the packages listed in the repository's `requirements.txt` file
- [Google Colab](https://colab.research.google.com/notebooks/welcome.ipynb): has its own Python environment, with several pre-installed packages, and loads faster than Binder
### Jupyter installed locally
In a terminal, in a directory containing the notes, run the command
```bash
jupyter lab
```
or
```bash
jupyter notebook
```
A window, or tab, will open in the browser with the Jupyter environment.
To use a browser other than the default, pass the `--browser` argument, e.g.:
```bash
jupyter lab --browser chrome
```
### Jupyter in the cloud via Binder or Colab
The repository's `README.md` and each notebook contain a link to open the page in the cloud. For example, the links below open this page in one of these cloud computing environments:
<a href="https://colab.research.google.com/github/rmsrosa/modelagem_matematica/blob/master/notebooks/01.00-Aula1.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
<a href="https://mybinder.org/v2/gh/rmsrosa/modelagem_matematica/master?filepath=notebooks/01.00-Aula1.ipynb"><img align="left" src="https://mybinder.org/badge.svg" alt="Open in binder" title="Open and Execute in Binder"></a>
**Note:** If you are viewing a Jupyter notebook on *GitHub*, **right-click** with the *mouse*
---
## Command cheat sheets
#### For the examples that follow, and throughout the course, it is good to keep these files at hand:
- [Numpy Python Cheat Sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf)
- [Scipy Python Cheat Sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_SciPy_Cheat_Sheet_Linear_Algebra.pdf)
- [Matplotlib Cheat Sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Matplotlib_Cheat_Sheet.pdf)
---
## Some Linear Algebra Tools
### 1. Norm, Root Mean Square, and other measures
Given a vector $\mathbf{x} \in \mathbb{R}^n$, we denote its Euclidean norm (also called the 2-norm) by $||\mathbf{x}||$. The norm, however, is not a good measure when we want to compare vectors of different dimensions. For example, denoting by $\mathbf{1}$ the vector of *ones* in $\mathbb{R}^n$, that is, $\mathbf{1} = (1,1,\ldots,1) \in \mathbb{R}^n$, we have $||\mathbf{1}|| = \sqrt{n}$, i.e., its norm depends on the dimension. A measure related to the norm is the **root mean square** (*RMS*) of $\mathbf{x}$, defined by
$$
\textbf{rms}(\mathbf{x}) = \sqrt{\dfrac{x_1^2 + \cdots + x_n^2}{n}} = \dfrac{||\mathbf{x}||}{\sqrt{n}}.
$$
Observe now that $\textbf{rms}(\mathbf{1}) = 1$, independent of the dimension, so the $\textbf{rms}$ of a vector $\mathbf{x}$ is a better measure when comparing vectors of different dimensions. Indeed, the root mean square tells us what the absolute value of the vector's entries, $|x_i|$, "typically" is.
In `numpy` the norm is a function in the `linalg` submodule, while the root mean square needs to be implemented as a function. Let us now run some numerical experiments (code below) to get a feel for the information that the $\textbf{rms}$ of a vector provides. Consider a random vector $\mathbf{x} \in \mathbb{R}^{100}$ sampled from the normal distribution $\mathcal{N}(10,50)$. We compute the $\textbf{rms}$ of this vector and plot a histogram of the absolute values of its entries. The histogram bins are defined as $[0,\textbf{rms}(\mathbf{x}))$, $[\textbf{rms}(\mathbf{x}), 2\textbf{rms}(\mathbf{x}))$, $[2\textbf{rms}(\mathbf{x}), 3\textbf{rms}(\mathbf{x}))$, $[3\textbf{rms}(\mathbf{x}), 4\textbf{rms}(\mathbf{x}))$, $[4\textbf{rms}(\mathbf{x}), 5\textbf{rms}(\mathbf{x}))$. Note that most values concentrate in the first and second bins, and this is explained by *Chebyshev's inequality*.
Suppose the vector $\mathbf{x} \in \mathbb{R}^n$ has $k$ entries satisfying $|x_i| \geq a$ with $a > 0$. Then $x_i^2 \geq a^2$ for those entries, and it follows that $||\mathbf{x}||^2 = x_1^2 + \cdots + x_n^2 \geq ka^2$, since $k$ terms in the sum are greater than or equal to $a^2$ and the remaining $n-k$ are nonnegative. We thus conclude that
$$
k \leq \dfrac{||\mathbf{x}||^2}{a^2},
$$
which is called *Chebyshev's inequality*. The inequality is most easily interpreted in terms of the RMS of the vector; indeed, with $k$ as before, we can write
$$
\dfrac{k}{n} \leq \left( \dfrac{\textbf{rms}(\mathbf{x})}{a} \right)^2.
$$
The left-hand side of the inequality above is the fraction of entries of the vector that are greater than or equal to $a$ in absolute value. For example, if we take $a = 2 \cdot \textbf{rms}(\mathbf{x})$, then
$$
\dfrac{k}{n} \leq \left( \dfrac{\textbf{rms}(\mathbf{x})}{2 \cdot \textbf{rms}(\mathbf{x})} \right)^2 = \left( \dfrac{1}{2} \right)^2 = \dfrac{1}{4} = 25\%.
$$
That is, no more than $25\%$ of the entries (in absolute value) can exceed twice the RMS value. Likewise, for $a = 4 \cdot \textbf{rms}(\mathbf{x})$, no more than $6.25\%$ of the entries (in absolute value) can exceed the RMS value by a factor of 4.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.random.normal(10,50,100); #print(x,'\n')
norm = np.linalg.norm(x); #print(norm)
def rms(x):
norm = np.linalg.norm(x)
scale = np.sqrt(1/len(x))
return scale*norm
rms_x = rms(x); print('The rms of the vector x is {0:4.4f}.\n'.format(rms_x))  # avoid shadowing the function
rms_bins = rms_x*np.arange(0,6); #print(rms_bins)
count, bins, ignored = plt.hist(abs(x), rms_bins, density=False)
print(bins,'\n')
print(count,'\n')
```
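We can also check the bound numerically. A minimal sketch (the seed and sizes below are arbitrary choices): count the fraction of entries exceeding $2 \cdot \textbf{rms}(\mathbf{x})$ in absolute value, which Chebyshev's inequality guarantees is at most $25\%$.

```python
import numpy as np

def rms(v):
    # root mean square: ||v|| / sqrt(n)
    return np.linalg.norm(v) / np.sqrt(len(v))

rng = np.random.default_rng(0)
x = rng.normal(10, 50, 100)

# fraction of entries with |x_i| >= 2*rms(x); Chebyshev guarantees <= 1/4
fraction = np.mean(np.abs(x) >= 2 * rms(x))
print(fraction)
```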
**Note:** In the book by Boyd and Vandenberghe
---
### 2. Systems of linear algebraic equations
We can solve systems of linear equations with the `numpy` package:
$$ \begin{cases}
x + y - z = 1, \\
x - 2y + 3z = 3, \\
x + z = 2,
\end{cases}
$$
```
mat = np.array([ [1.0, 1.0, -1.0], [1.0, -2.0, 3.0], [1.0, 0.0, 1.0] ]);print(mat, '\n')
b = np.array([ [1], [3], [2] ]);print(b, '\n')
x = np.linalg.solve(mat,b)
print('The solution of the system is \n', x)
```
### What if the matrix is singular?
If the associated matrix is singular (no solution or multiple solutions), or if it is not square, we get an error.
We can solve by row reduction; `scipy` provides an LU decomposition, and `sympy` offers symbolic solving.
For a system with no solution, e.g. when there are many equations and few unknowns, it is useful to use least squares and look for the "best" solution, or the "closest" one.
```
mat = np.array([[1.0, 1.0, -1.0], [1.0, -2.0, 3.0], [2.0, -4.0, 6.0]])
b = np.array([[1],[3], [6]])
x = np.linalg.solve(mat,b)
print('The solution of the system is\n', x)
```
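One possible fallback, sketched below (not the only option): catch the `LinAlgError` that `np.linalg.solve` raises for a singular matrix and fall back to the Moore-Penrose pseudo-inverse, which returns the minimum-norm least-squares solution. For this particular system the third equation is twice the second, so there are infinitely many solutions and the residual is numerically zero.

```python
import numpy as np

mat = np.array([[1.0, 1.0, -1.0], [1.0, -2.0, 3.0], [2.0, -4.0, 6.0]])
b = np.array([[1.0], [3.0], [6.0]])
try:
    x = np.linalg.solve(mat, b)
except np.linalg.LinAlgError:
    # minimum-norm least-squares solution via the pseudo-inverse
    x = np.linalg.pinv(mat) @ b

residual = np.linalg.norm(mat @ x - b)
print(x, residual)
```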
Or, for
$$ \begin{cases}
x + y - z = 1, \\
x - 2y + 3z = 3, \\
\end{cases}
$$
```
mat = np.array([[1.0, 1.0, -1.0], [1.0, -2.0, 3.0]])
b = np.array([[1],[3]])
x = np.linalg.solve(mat,b)
print('The solution of the system is\n', x)
```
---
### 3. Least squares
Consider the following problem, which has no exact solution:
$$ \begin{cases}
x + y = 1, \\
x - 2y = 3, \\
x - y = 2,
\end{cases}
$$
We therefore seek to solve the problem
$$
\min_{\mathbf{u}\in \mathbb{R}^2} \|A\mathbf{u} - \mathbf{b}\|_2,
$$
where
$$ A = \left[ \begin{matrix} 1 & 1 \\ 1 & -2 \\ 1 & -1 \end{matrix}\right], \qquad \mathbf{b} = \left( \begin{matrix} 1 \\ 3 \\ 2 \end{matrix} \right).
$$
```
mat = np.array([[1.0, 1.0], [1.0, -2.0], [1.0, -1.0]])
b = np.array([[1],[3], [2]])
x = np.linalg.lstsq(mat,b, rcond=None)[0]
print(x)
```
### Least squares for linear regression
A common use of the least squares method is linear regression, where we seek to fit a function to a dataset. Let us build synthetic data by randomly perturbing points on a given line. The random perturbation is done with the [numpy.random.rand()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.rand.html) method.
```
#import numpy as np
#import matplotlib.pyplot as plt
num_points = 10
m = 0.5
c = 1.2
x = np.array(range(num_points)) + 0.1*np.random.rand(num_points)
y = c*np.ones(num_points) + m*x + np.random.rand(num_points)
print(x)
print(y)
```
### Plot
```
plt.figure(figsize=(12,6))
plt.plot(x, y, 'o')
plt.show()
```
### Solving via least squares
We want to find $m$ and $c$ such that the points $(x_j,\tilde y_j)$ on the line $\tilde y = mx + c$ are the "best" possible approximation of the data $(x_j,y_j)$. We interpret "best" in the "least squares" sense. For that, we need to solve
$$
\displaystyle \min_{\mathbf{u}\in \mathbb{R}^2} \|A\mathbf{u} - \mathbf{b}\|_2,
$$
where
$$ A = \left[ \begin{matrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{matrix}\right], \qquad \mathbf{u} = \left( \begin{matrix} m \\ c \end{matrix}\right), \qquad \mathbf{b} = \left( \begin{matrix} y_1 \\ \vdots \\ y_n \end{matrix} \right).
$$
We use the [numpy.vstack()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html) method to assemble the Vandermonde matrix and the [numpy.linalg.lstsq()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html) method to solve the least squares problem:
```
A = np.vstack([x,np.ones(num_points)]).T
m1, c1 = np.linalg.lstsq(A, y, rcond=None)[0]
print(m1,c1)
```
### Visualization
```
import matplotlib.pyplot as plt
plt.plot(x, y, 'o', label='Data')
plt.plot(x, m1*x + c1, 'r', label='Linear fit')
plt.legend()
plt.show()
```
## Thermodynamic properties of pure water
Data in the file `dados/agua/water_properties_from_Batchelor.csv`, taken from the book "An Introduction to Fluid Dynamics" by G. Batchelor.
```
from os import path
import csv
import numpy as np
import pandas as pd
arquivo = path.join('..', 'dados', 'agua', 'water_properties_from_Batchelor.csv')
agua1 = list(csv.reader(open(arquivo, "r"), delimiter=","))
agua2 = np.array(agua1[2:]).astype("float")
agua3 = np.loadtxt(open(arquivo, "rb"), delimiter=",", skiprows=2)
agua4 = pd.read_csv(arquivo, header=[0,1])
agua4.head()
```
### Plot
```
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel(agua1[1][0])
ax1.set_ylabel( agua1[1][1], color=color)
ax1.plot(agua3[:,0], agua3[:,1], 'o', color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel(agua1[1][2], color=color)
ax2.plot(agua3[:,0], agua3[:,2], 'o', color=color)
ax2.tick_params(axis='y', labelcolor=color)
plt.show()
```
## Quadratic approximation
We look for a better approximation of the density as a function of temperature. Perhaps a second-order approximation $y = ax^2 + bx + c$: since it is linear in the coefficients $(a,b,c)$, it is still a **linear regression** problem and can also be easily solved by the **least squares method**.
We need to minimize the residual $\| A\mathbf{u} - \mathbf{y}\|$, where
$$ A = \left[ \begin{matrix} x_1^2 & x_1 & 1 \\ \vdots & \vdots & 1 \\ x_n^2 & x_n & 1 \end{matrix}\right], \qquad \mathbf{u} = \left( \begin{matrix} a \\ b \\ c \end{matrix}\right), \qquad \mathbf{y} = \left( \begin{matrix} y_1 \\ \vdots \\ y_n \end{matrix} \right).
$$
```
x = agua3[:,0]
A = np.vstack([x**2, x,np.ones(len(x))]).T
print(A)
```
### Solution
```
y = agua3[:,1]
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
print(a,b,c)
```
### Visualization
```
import matplotlib.pyplot as plt
plt.plot(x, y, 'o', label='Data')
plt.plot(x, a*x**2 + b*x + c, 'r', label='Quadratic least-squares fit')
plt.legend()
plt.show()
len(x)
```
15 data points approximated by only 3 parameters!
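As a cross-check (not in the original notebook), `numpy.polyfit` solves the same least-squares problem and should return essentially the same coefficients as the explicit `lstsq` route; the sketch below uses stand-in data in place of the (temperature, density) columns:

```python
import numpy as np

# Hypothetical data standing in for the (temperature, density) columns.
x = np.linspace(0.0, 70.0, 15)
y = -0.005 * x**2 + 0.1 * x + 1000.0
coeffs = np.polyfit(x, y, deg=2)  # highest degree first: [a, b, c]
# Same fit via the explicit design matrix and lstsq:
A = np.vstack([x**2, x, np.ones_like(x)]).T
lstsq_coeffs = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(coeffs, lstsq_coeffs))  # the two routes agree
```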
### Comparison with the linear approximation
```
A1 = np.vstack([x,np.ones(len(x))]).T
m, d = np.linalg.lstsq(A1, y, rcond=None)[0]
print(m,d)
import matplotlib.pyplot as plt
plt.plot(x, y, 'o', label='Data')
plt.plot(x, a*x**2 + b*x + c, 'r', label='Quadratic least-squares fit')
plt.plot(x, b*x + c, 'g', label='Linear approximation neglecting a')
plt.plot(x, m*x + d, 'b', label='Linear least-squares fit')
plt.legend()
plt.show()
```
## Squared errors - residuals
The **squared error** is the value of $\|A\mathbf{u} - \mathbf{y}\|_2^2$ for the best approximation found, which is the sum of the squared residuals. The **residual** is the error
$$ r_j = (A\mathbf{u})_j - y_j
$$
of each measurement.
Let us compare the squared errors of each approximation:
```
print('Squared error of the linear fit:', np.linalg.lstsq(A1, y, rcond=None)[1][0])
print('Squared error of the second-order fit:', np.linalg.lstsq(A, y, rcond=None)[1][0])
```
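The residual array returned by `lstsq` is exactly this sum of squared residuals; a standalone sketch verifying that identity on synthetic data (its own line-plus-noise setup, not the water data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(15, dtype=float)
y = 1.2 + 0.5 * x + 0.1 * rng.standard_normal(15)
A = np.vstack([x, np.ones_like(x)]).T
u, res, *_ = np.linalg.lstsq(A, y, rcond=None)
r = A @ u - y  # residual of each measurement
print(np.isclose(res[0], np.sum(r**2)))  # the two quantities coincide
```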
## Exercises:
1. Compute **second-** and **third-order** approximations for the synthetic data of the first linear regression example, and display the plots of these approximations together with the data.
1. Compute **third-** and **fourth-order** polynomial approximations for the **density** and check the residual error.
1. Find a "good" polynomial approximation for the **viscosity** data of pure water, for temperatures between 5 and 100°C (i.e. discarding the data point at 0°C).
---
### Slide presentation
To view as slides, open a *bash* terminal and use the command line
```bash
jupyter nbconvert 01.00-Aula1.ipynb --to slides --post serve
```
This will open a tab in your default browser at the address
```
http://127.0.0.1:8000/01.00-Aula1.slides.html#/
```
If you want to enable page scrolling, in case a slide is too long, include the option
```bash
--SlidesExporter.reveal_scroll=True
```
To use a theme other than the default, use the `SlidesExporter.reveal_theme` setting. The available themes are `beige`, `black`, `blood`, `league`, `moon`, `night`, `serif`, `simple`, `sky`, `solarized`, `white`. The default is `white`. An interesting theme is
```bash
--SlidesExporter.reveal_theme=solarized
```
To use a transition effect other than the default, use the `SlidesExporter.reveal_transition` setting. The options are `none`, `fade`, `slide`, `convex`, `concave`, and `zoom`. The default is `slide`. An interesting transition is
```bash
--SlidesExporter.reveal_transition=convex
```
More information about `nbconvert` at [Configuration options (for nbconvert)](https://nbconvert.readthedocs.io/en/latest/config_options.html) and [Present Your Data Science Projects with Jupyter Notebook Slides!](https://medium.com/learning-machine-learning/present-your-data-science-projects-with-jupyter-slides-75f20735eb0f)
To save the presentation as a pdf, it does not work to simply "print" the page, since that only prints individual slides of the presentation. To save the complete presentation as a pdf, one option is to change the end of the address above, including `?print-pdf` before the final `#/`, so that the address becomes
```
http://127.0.0.1:8000/01.00-Aula1.slides.html?print-pdf#/
```
```
!jupyter nbconvert 01.00-Aula1.ipynb --to slides
```
<!--NAVIGATOR-->
---
[<- Página Inicial](00.00-Pagina_Inicial.ipynb) | [Página Inicial](00.00-Pagina_Inicial.ipynb) | [Aula 2: Analisando a oscilação de um pêndulo ->](02.00-Aula2.ipynb)
<a href="https://colab.research.google.com/github/rmsrosa/modelagem_matematica/blob/modmat2019p1/aulas/01.00-Aula1.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
<a href="https://mybinder.org/v2/gh/rmsrosa/modelagem_matematica/modmat2019p1?filepath=aulas/01.00-Aula1.ipynb"><img align="left" src="https://mybinder.org/badge.svg" alt="Open in binder" title="Open and Execute in Binder"></a>
<a href="https://nbviewer.jupyter.org/github/rmsrosa/modelagem_matematica/blob/modmat2019p1/aulas/01.00-Aula1.slides.html"><img align="left" src="https://rmsrosa.github.io/jupyterbookmaker/badges/slides_badge.svg" alt="Open slides" title="Open and View Slides"></a>
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work will not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use `numpy functions` and `numpy matrix/vector operations`
- Understand the concept of __"broadcasting"__
- Be able to `vectorize` code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## What you will do in this tutorial:
1. Building basic functions with numpy
1.1. sigmoid function, np.exp()
1.2. Sigmoid gradient
1.3. Reshaping arrays
1.4. Normalizing rows
1.5. Broadcasting and the softmax function
2. Vectorization
2.1. Implement the L1 and L2 loss functions
## 1) Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several _key_ numpy functions such as:
- __np.exp__,
- __np.log__, and
- __np.reshape__.
You will need to know how to use these functions for future assignments.
### 1.1. Sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the __sigmoid function__. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the `sigmoid` of a real number `x`. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the __logistic function__. It is a _non-linear_ function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using `<package_name>.<function()>`. Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
# x is a row vector
print ("x shape: ", x.shape)
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the __same size__ as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
print ("or:")
print (1/x)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function `sigmoid_grad()` to compute the gradient of the sigmoid function with respect to its input `x`.
The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
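A quick way to sanity-check a hand-coded gradient (an aside, not part of the graded assignment) is to compare it against a centered finite difference, which approximates the derivative with error $O(\varepsilon^2)$:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

x = np.array([1.0, 2.0, 3.0])
eps = 1e-6
# Centered finite difference: (f(x+eps) - f(x-eps)) / (2*eps)
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(sigmoid_derivative(x), numeric, atol=1e-8))  # True
```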
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300px;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a\*b, c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a _better performance_ because ___gradient descent converges faster after normalization___. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
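To see the shape mismatch that broadcasting resolves, this small illustration (outside the graded cells) prints the shapes involved and confirms each row ends up with unit length:

```python
import numpy as np

x = np.array([[0.0, 3.0, 4.0],
              [1.0, 6.0, 4.0]])
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
print(x.shape, x_norm.shape)   # (2, 3) (2, 1)
normalized = x / x_norm        # the (2, 1) column is broadcast across the 3 columns
print(np.linalg.norm(normalized, axis=1))  # each row now has norm 1
```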
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
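One practical caveat beyond the scope of this exercise: for large inputs, `np.exp` can overflow. A common remedy (a sketch, not the graded solution) is to subtract the row-wise maximum first, which leaves the result unchanged because softmax is invariant to shifting each row by a constant:

```python
import numpy as np

def softmax_stable(x):
    # Subtracting the per-row max does not change the ratios e^{x_i} / sum_j e^{x_j},
    # but keeps the exponentials from overflowing.
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

x = np.array([[1000.0, 1001.0], [1.0, 2.0]])
out = softmax_stable(x)
print(out)  # no overflow; each row sums to 1
```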
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
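A two-line illustration of that distinction on small example matrices:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(np.dot(a, b))       # matrix product: [[19, 22], [43, 50]]
print(np.multiply(a, b))  # element-wise, same as a * b: [[5, 12], [21, 32]]
```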
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.dot(y - yhat, y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
```
import os
random_seed = 42
import csv
import random
random.seed(random_seed)
import numpy as np
np.random.seed(random_seed)
import pandas as pd
pd.set_option('max_colwidth', 256)
data_path = "data/data/"
train = pd.read_json(data_path + 'train.jsonl', lines=True)
train.head()
train = train[['label', 'document', 'idx']]
train['idx'] = range(len(train))
train.columns = ['label', 'sentence', 'sample_index']
train.head()
train['label'].value_counts()
train.info()
train.to_csv('data/train.csv', index=False)
dev = pd.read_json(data_path + 'dev.jsonl', lines=True)
dev = dev[['label', 'document', 'idx']]
dev['idx'] = range(len(dev))
dev.columns = ['label', 'sentence', 'sample_index']
dev.head()
dev['label'].value_counts()
dev.info()
dev.to_csv('data/dev.csv', index=False)
test = pd.read_json(data_path + 'test.jsonl', lines=True)
test = test[['label', 'document', 'idx']]
test['idx'] = range(len(test))
test.columns = ['label', 'sentence', 'sample_index']
test.head()
test['label'].value_counts()
test.info()
test.to_csv('data/test.csv', index=False)
total = len(train)
for percentage in range(0, 100, 10):
k = int(total*(percentage/100))
print(percentage, k)
tmp = train.sample(k,
random_state=0
)
tmp = train.drop(tmp.index)
print(tmp['label'].value_counts())
filename = "data/random_0/{}.csv".format(percentage)
os.makedirs(os.path.dirname(filename), exist_ok=True)
tmp[['label', 'sentence', 'sample_index']].to_csv(filename, index=False)
total = len(train)
for percentage in range(0, 100, 10):
k = int(total*(percentage/100))
print(percentage, k)
tmp = train.sample(k,
random_state=2
)
tmp = train.drop(tmp.index)
print(tmp['label'].value_counts())
filename = "data/random_2/{}.csv".format(percentage)
os.makedirs(os.path.dirname(filename), exist_ok=True)
tmp[['label', 'sentence', 'sample_index']].to_csv(filename, index=False)
total = len(train)
for percentage in range(0, 100, 10):
k = int(total*(percentage/100))
print(percentage, k)
tmp = train.sample(k,
random_state=42
)
tmp = train.drop(tmp.index)
print(tmp['label'].value_counts())
filename = "data/random/{}.csv".format(percentage)
os.makedirs(os.path.dirname(filename), exist_ok=True)
tmp[['label', 'sentence', 'sample_index']].to_csv(filename, index=False)
```
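The three loops above are identical except for `random_state` and the output directory. A hedged refactor (the folder names `random_0`/`random_2`/`random` and the column subset are taken from the cells above) could iterate over (seed, folder) pairs:

```python
import os
import pandas as pd

def write_subsamples(train, seed, folder, step=10):
    """Drop `percentage`% of rows at the given seed and save the remainder,
    mirroring the sample-then-drop pattern used in the cells above."""
    total = len(train)
    for percentage in range(0, 100, step):
        k = int(total * (percentage / 100))
        kept = train.drop(train.sample(k, random_state=seed).index)
        filename = "data/{}/{}.csv".format(folder, percentage)
        os.makedirs(os.path.dirname(filename), exist_ok=True)
        kept[['label', 'sentence', 'sample_index']].to_csv(filename, index=False)

# Assumed usage, reproducing the three loops:
# for seed, folder in [(0, 'random_0'), (2, 'random_2'), (42, 'random')]:
#     write_subsamples(train, seed, folder)
```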
# Kaggle Competition: PLAsTiCC LSST Full Demo
This notebook demos the 8th place solution (8/1094) of Rapids.ai for the __[PLAsTiCC Astronomical Classification](https://www.kaggle.com/c/PLAsTiCC-2018/leaderboard)__. The demo shows up to 140x speedup for ETL and 25x end-to-end speedup over the CPU solution. More details can be found at our __[blog](https://medium.com/rapids-ai/make-sense-of-the-universe-with-rapids-ai-d105b0e5ec95)__
**Note: this notebook is here for archival purposes and is not intended to illustrate best practices. [Please use this updated PLAsTiCC notebook, shown at KDD 2019 instead](conference_notebooks/KDD_2019/plasticc)**
```
import os
GPU_id = 0
os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_id)
import cudf as gd
import pandas as pd
import numpy as np
import math
import xgboost as xgb
import seaborn as sns
from functools import partial
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from termcolor import colored
from cudf_workaround import cudf_groupby_aggs
import matplotlib.pyplot as plt
import time
import warnings
warnings.filterwarnings("ignore")
sns.set()
print('cudf version',gd.__version__)
```
## Table of contents
[1. Global variables](#global)<br>
[2. Functions](#func)<br>
[3. ETL & Visualizations](#etl)<br>
[4. Model training](#train)<br>
[5. Conclusions](#conclusions)
<a id="global"></a>
## 1. Global variables
**Original data download and description __[link](https://www.kaggle.com/c/PLAsTiCC-2018/data)__**.
```
PATH = '../data'
#PATH = '/raid/data/ml/lsst/input'
#PATH = '../lsst/input'
```
**Tested on V100 with 32 GB GPU memory. If memory capacity is smaller, the input data will be sampled accordingly.**
```
GPU_MEMORY = 32 # GB.
#GPU_MEMORY = 16 # GB. Both 32 and 16 GB have been tested
TEST_ROWS = 453653104 # number of rows in test data
# no skip if your gpu has 32 GB memory
# otherwise, skip rows proportionally
OVERHEAD = 1.2 # cudf 0.7 introduces 20% memory overhead
SKIP_ROWS = int((1 - GPU_MEMORY/(32.0*OVERHEAD))*TEST_ROWS)
GPU_RUN_TIME = {}
CPU_RUN_TIME = {}
GPU_id = 0
os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_id)
```
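As a sanity check on the row-skipping arithmetic above, here is the same formula evaluated standalone for a hypothetical 16 GB card (the 16 GB value is just an illustrative assumption; the notebook defaults to 32 GB):

```python
# Sketch: how many test-set rows get skipped on a smaller GPU.
GPU_MEMORY = 16           # GB (illustrative assumption)
TEST_ROWS = 453653104     # rows in the full test set
OVERHEAD = 1.2            # cuDF 0.7 memory overhead factor
SKIP_ROWS = int((1 - GPU_MEMORY / (32.0 * OVERHEAD)) * TEST_ROWS)
print(SKIP_ROWS)  # on 16 GB, well over half the rows are skipped
```

With 32 GB and the 1.2x overhead factor, a 16 GB card can only hold about 5/12 of the data, so roughly 7/12 of the rows are skipped.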
<a id="func"></a>
## 2. Functions
```
def scatter(x,y,values,xlabel='x',ylabel='y',title=None):
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
colors = np.array([colors[i] for i in values])
ps = []
bs = []
bands = ['passband_%s'%i for i in ['u', 'g', 'r', 'i', 'z','y']]
for i in sorted(np.unique(values)):
mask = values==i
if len(x[mask]):
p = plt.scatter(x[mask],y[mask],c=colors[mask])
ps.append(p)
bs.append(bands[i])
plt.legend(ps,bs,scatterpoints=1)
if title is not None:
plt.title(title)
plt.xlim([np.min(x)-10,np.min(x)+1500])
plt.ylabel('y: %s'%ylabel)
plt.xlabel('x: %s'%xlabel)
def multi_weighted_logloss(y_true, y_preds, classes, class_weights):
"""
refactor from
@author olivier https://www.kaggle.com/ogrellier
multi logloss for PLAsTiCC challenge
"""
y_p = y_preds.reshape(y_true.shape[0], len(classes), order='F')
y_ohe = pd.get_dummies(y_true)
y_p = np.clip(a=y_p, a_min=1e-15, a_max=1 - 1e-15)
y_p_log = np.log(y_p)
y_log_ones = np.sum(y_ohe.values * y_p_log, axis=0)
nb_pos = y_ohe.sum(axis=0).values.astype(float)
class_arr = np.array([class_weights[k] for k in sorted(class_weights.keys())])
y_w = y_log_ones * class_arr / nb_pos
loss = - np.sum(y_w) / np.sum(class_arr)
return loss
def xgb_multi_weighted_logloss(y_predicted, y_true, classes, class_weights):
loss = multi_weighted_logloss(y_true.get_label(), y_predicted,
classes, class_weights)
return 'wloss', loss
```
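To sanity-check the metric above, a small self-contained version can be run on toy labels: with a uniform prediction over two classes the weighted loss reduces to ln 2 regardless of the class weights, because the weights normalise out. (The toy arrays below are illustrative, not PLAsTiCC data.)

```python
import numpy as np
import pandas as pd

def toy_multi_weighted_logloss(y_true, y_preds, classes, class_weights):
    # same computation as multi_weighted_logloss above, on toy inputs
    y_p = y_preds.reshape(y_true.shape[0], len(classes), order='F')
    y_ohe = pd.get_dummies(y_true)
    y_p = np.clip(y_p, 1e-15, 1 - 1e-15)
    y_log_ones = np.sum(y_ohe.values * np.log(y_p), axis=0)
    nb_pos = y_ohe.sum(axis=0).values.astype(float)
    class_arr = np.array([class_weights[k] for k in sorted(class_weights)])
    return -np.sum(y_log_ones * class_arr / nb_pos) / np.sum(class_arr)

y_true = np.array([0, 0, 1])
uniform = np.full((3, 2), 0.5)          # predict 50/50 for every sample
loss = toy_multi_weighted_logloss(y_true, uniform, [0, 1], {0: 1, 1: 2})
```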
### CPU ETL functions
```
def ravel_column_names(cols):
d0 = cols.get_level_values(0)
d1 = cols.get_level_values(1)
return ["%s_%s"%(i,j) for i,j in zip(d0,d1)]
def etl_cpu(df,df_meta):
df['flux_ratio_sq'] = np.power(df['flux'] / df['flux_err'], 2.0)
df['flux_by_flux_ratio_sq'] = df['flux'] * df['flux_ratio_sq']
aggs = {
'passband': ['mean'],
'flux': ['min', 'max', 'mean'],
'flux_err': ['min', 'max', 'mean'],
'detected': ['mean'],
'mjd':['max','min'],
'flux_ratio_sq':['sum'],
'flux_by_flux_ratio_sq':['sum'],
}
agg_df = df.groupby('object_id').agg(aggs)
agg_df.columns = ravel_column_names(agg_df.columns)
agg_df['flux_diff'] = agg_df['flux_max'] - agg_df['flux_min']
agg_df['flux_dif2'] = (agg_df['flux_max'] - agg_df['flux_min']) / agg_df['flux_mean']
agg_df['flux_w_mean'] = agg_df['flux_by_flux_ratio_sq_sum'] / agg_df['flux_ratio_sq_sum']
agg_df['flux_dif3'] = (agg_df['flux_max'] - agg_df['flux_min']) / agg_df['flux_w_mean']
agg_df['mjd_diff'] = agg_df['mjd_max'] - agg_df['mjd_min']
agg_df = agg_df.drop(['mjd_max','mjd_min'],axis=1)
agg_df = agg_df.reset_index()
df_meta = df_meta.drop(['ra','decl','gal_l','gal_b'],axis=1)
df_meta = df_meta.merge(agg_df,on='object_id',how='left')
return df_meta
```
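The column-flattening trick in `ravel_column_names` is easiest to see on a toy frame (the tiny data below is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'object_id': [1, 1, 2],
                   'flux':      [1.0, 3.0, 2.0]})
agg_df = df.groupby('object_id').agg({'flux': ['min', 'max', 'mean']})
# agg_df.columns is a two-level MultiIndex like ('flux', 'min');
# ravel_column_names joins the two levels into flat names
agg_df.columns = ['%s_%s' % (i, j)
                  for i, j in zip(agg_df.columns.get_level_values(0),
                                  agg_df.columns.get_level_values(1))]
agg_df = agg_df.reset_index()
```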
### GPU ETL functions
```
# To save GPU memory, we drop the column as soon as it is done with groupby
# this hits performance a little but avoids GPU OOM.
def groupby_aggs(df,aggs,col):
res = None
for i,j in aggs.items():
for k in j:
#print(i,k)
tmp = df.groupby(col,as_index=False).agg({i:[k]})
if res is None:
res = tmp
else:
res = res.merge(tmp,on=[col],how='left')
df.drop_column(i)
return res
def etl_gpu(df,df_meta):
aggs = {
'passband': ['mean'],
'detected': ['mean'],
'mjd':['max','min'],
}
agg_df = groupby_aggs(df,aggs,'object_id')
# at this step, columns ['passband','detected','mjd'] are deleted
df['flux_ratio_sq'] = df['flux'] / df['flux_err']
df['flux_ratio_sq'] = df['flux_ratio_sq'].applymap(lambda x: math.pow(x,2))
df['flux_by_flux_ratio_sq'] = df['flux'] * df['flux_ratio_sq']
aggs2 = {
'flux_ratio_sq':['sum'],
'flux_by_flux_ratio_sq':['sum'],
'flux': ['min', 'max', 'mean'],
'flux_err': ['min', 'max', 'mean'],
}
agg_df2 = groupby_aggs(df,aggs2,'object_id')
agg_df = agg_df.merge(agg_df2,on=['object_id'],how='left')
del agg_df2
agg_df['flux_diff'] = agg_df['max_flux'] - agg_df['min_flux']
agg_df['flux_dif2'] = (agg_df['max_flux'] - agg_df['min_flux']) / agg_df['mean_flux']
agg_df['flux_w_mean'] = agg_df['sum_flux_by_flux_ratio_sq'] / agg_df['sum_flux_ratio_sq']
agg_df['flux_dif3'] = (agg_df['max_flux'] - agg_df['min_flux']) / agg_df['flux_w_mean']
agg_df['mjd_diff'] = agg_df['max_mjd'] - agg_df['min_mjd']
agg_df.drop_column('max_mjd')
agg_df.drop_column('min_mjd')
for col in ['ra','decl','gal_l','gal_b']:
df_meta.drop_column(col)
df_meta = df_meta.merge(agg_df,on=['object_id'],how='left')
return df_meta
```
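The error-weighted mean flux built in both ETL paths is an inverse-variance-style weighting; here it is worked through on two made-up observations of a single object:

```python
import numpy as np

# two hypothetical observations of one object (illustrative numbers)
flux = np.array([2.0, 4.0])
flux_err = np.array([1.0, 2.0])

flux_ratio_sq = (flux / flux_err) ** 2          # [4., 4.]: equal "significance"
flux_by_flux_ratio_sq = flux * flux_ratio_sq    # [8., 16.]
# equally significant points contribute equally to the weighted mean
flux_w_mean = flux_by_flux_ratio_sq.sum() / flux_ratio_sq.sum()
flux_diff = flux.max() - flux.min()
```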
<a id="etl"></a>
## 3. ETL & Visualizations
### Load data for ETL part 1
**GPU load data**
```
%%time
start = time.time()
step = 'load data part1'
ts_cols = ['object_id', 'mjd', 'passband', 'flux', 'flux_err', 'detected']
ts_dtypes = ['int32', 'float32', 'int32', 'float32','float32','int32']
train_gd = gd.read_csv('%s/training_set.csv'%PATH,
names=ts_cols,dtype=ts_dtypes,skiprows=1)
test_gd = gd.read_csv('%s/test_set.csv'%PATH,
names=ts_cols,dtype=ts_dtypes,skiprows=1+SKIP_ROWS) # skip the header
GPU_RUN_TIME[step] = time.time() - start
```
**CPU load data**
```
%%time
start = time.time()
train = pd.read_csv('%s/training_set.csv'%PATH)
test = pd.read_csv('%s/test_set.csv'%PATH,skiprows=range(1,1+SKIP_ROWS))
CPU_RUN_TIME[step] = time.time() - start
speedup = CPU_RUN_TIME[step]/GPU_RUN_TIME[step]
line = "we achieve %.3f speedup for %s."%(speedup,step)
print(colored(line,'green'))
```
### Visualizations
```
oid = 615
mask = train.object_id== oid
scatter(train.loc[mask,'mjd'].values,
train.loc[mask,'flux'].values,
values=train.loc[mask,'passband'].values,
xlabel='time',ylabel='flux',title='object %d class 42'%oid)
```
### ETL part 1 with 100x speedup
```
%%time
# to save memory, we need to move dataframe to cpu and only keep the columns we need
test_gd = test_gd[['object_id','flux']]
train_gd = train_gd[['object_id','flux']]
%%time
# GPU
step = 'ETL part1'
start = time.time()
aggs = {'flux':['skew']}
test_gd = cudf_groupby_aggs(test_gd,group_id_col='object_id',aggs=aggs)
train_gd = cudf_groupby_aggs(train_gd,group_id_col='object_id',aggs=aggs)
GPU_RUN_TIME[step] = time.time() - start
%%time
# CPU
start = time.time()
test = test.groupby('object_id').agg(aggs)
train = train.groupby('object_id').agg(aggs)
CPU_RUN_TIME[step] = time.time() - start
speedup = CPU_RUN_TIME[step]/GPU_RUN_TIME[step]
line = "we achieve %.3f speedup for %s."%(speedup,step)
print(colored(line,'green'))
%%time
test_gd = test_gd.sort_values(by='object_id')
train_gd = train_gd.sort_values(by='object_id')
%%time
test.columns = ['skew_flux']
test = test.reset_index()
test = test.sort_values(by='object_id')
train.columns = ['skew_flux']
train = train.reset_index()
train = train.sort_values(by='object_id')
```
**Evaluation of correctness of ETL**
```
print(len(test),len(test_gd))
# RMSE: Root mean square error
def rmse(a,b):
return np.mean((a-b)**2)**0.5
print('test')
for col in test.columns:
if col in test_gd.columns:
print("%s, rmse %.6f"%(col,rmse(test[col].values,test_gd[col].to_pandas().values)))
print('train')
for col in train.columns:
if col in train_gd.columns:
print("%s, rmse %.6f"%(col,rmse(train[col].values,train_gd[col].to_pandas().values)))
# Rename the variables
test_flux_skew_gd = test_gd
test_flux_skew = test
train_flux_skew_gd = train_gd
train_flux_skew = train
print(len(test_gd),len(test))
```
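The RMSE comparison used above can be reproduced standalone (the arrays below are toy values, not the real aggregates):

```python
import numpy as np

def rmse(a, b):
    # root mean square error between two aligned columns
    return np.mean((a - b) ** 2) ** 0.5

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0, 2.0, 5.0])
exact = rmse(a, b)   # identical columns -> 0
off = rmse(a, c)     # sqrt(mean([0, 0, 4])) = sqrt(4/3)
```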
### Load data for the ETL part 2 with 11x speedup
```
%%time
# read data on gpu
step = 'load data part2'
start = time.time()
ts_cols = ['object_id', 'mjd', 'passband', 'flux', 'flux_err', 'detected']
ts_dtypes = ['int32', 'float32', 'int32', 'float32','float32','int32']
test_gd = gd.read_csv('%s/test_set.csv'%PATH,
names=ts_cols,dtype=ts_dtypes,skiprows=1+SKIP_ROWS) # skip the header
train_gd = gd.read_csv('%s/training_set.csv'%PATH,
names=ts_cols,dtype=ts_dtypes,skiprows=1)
cols = ['object_id', 'ra', 'decl', 'gal_l', 'gal_b', 'ddf',
'hostgal_specz', 'hostgal_photoz', 'hostgal_photoz_err',
'distmod','mwebv', 'target']
dtypes = ['int32']+['float32']*4+['int32']+['float32']*5+['int32']
train_meta_gd = gd.read_csv('%s/training_set_metadata.csv'%PATH,
names=cols,dtype=dtypes,skiprows=1)
del cols[-1],dtypes[-1]
test_meta_gd = gd.read_csv('%s/test_set_metadata.csv'%PATH,
names=cols,dtype=dtypes,skiprows=1)
GPU_RUN_TIME[step] = time.time() - start
%%time
# read data on cpu
start = time.time()
test = pd.read_csv('%s/test_set.csv'%PATH,skiprows=range(1,1+SKIP_ROWS))
test_meta = pd.read_csv('%s/test_set_metadata.csv'%PATH)
train = pd.read_csv('%s/training_set.csv'%PATH)
train_meta = pd.read_csv('%s/training_set_metadata.csv'%PATH)
CPU_RUN_TIME[step] = time.time() - start
speedup = CPU_RUN_TIME[step]/GPU_RUN_TIME[step]
line = "we achieve %.3f speedup for %s."%(speedup,step)
print(colored(line,'green'))
```
### ETL part2 with 9x ~ 12x speedup
```
%%time
# GPU
start = time.time()
step = 'ETL part2'
train_final_gd = etl_gpu(train_gd,train_meta_gd)
train_final_gd = train_final_gd.merge(train_flux_skew_gd,on=['object_id'],how='left')
test_final_gd = etl_gpu(test_gd,test_meta_gd)
del test_gd,test_meta_gd
test_final_gd = test_final_gd.merge(test_flux_skew_gd,on=['object_id'],how='left')
GPU_RUN_TIME[step] = time.time() - start
%%time
#CPU
start = time.time()
train_final = etl_cpu(train,train_meta)
train_final = train_final.merge(train_flux_skew,on=['object_id'],how='left')
test_final = etl_cpu(test,test_meta)
test_final = test_final.merge(test_flux_skew,on=['object_id'],how='left')
CPU_RUN_TIME[step] = time.time() - start
speedup = CPU_RUN_TIME[step]/GPU_RUN_TIME[step]
line = "we achieve %.3f speedup for %s."%(speedup,step)
print(colored(line,'green'))
```
<a id="train"></a>
## 4. Model training
### train and validation with 5x speedup
```
# CPU
X = train_final.drop(['object_id','target'],axis=1).values
y = train_final['target']
Xt = test_final.drop(['object_id'],axis=1).values
assert X.shape[1] == Xt.shape[1]
classes = sorted(y.unique())
# Taken from Giba's topic : https://www.kaggle.com/titericz
# https://www.kaggle.com/c/PLAsTiCC-2018/discussion/67194
# with Kyle Boone's post https://www.kaggle.com/kyleboone
class_weights = {c: 1 for c in classes}
class_weights.update({c:2 for c in [64, 15]})
lbl = LabelEncoder()
y = lbl.fit_transform(y)
print(lbl.classes_)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.1,stratify=y, random_state=126)
cpu_params = {
'objective': 'multi:softprob',
'tree_method': 'hist',
'nthread': 16,
'num_class':14,
'max_depth': 7,
'silent':1,
'subsample':0.7,
'colsample_bytree': 0.7,}
func_loss = partial(xgb_multi_weighted_logloss,
classes=classes,
class_weights=class_weights)
%%time
start = time.time()
step = 'training'
dtrain = xgb.DMatrix(data=X_train, label=y_train)
dvalid = xgb.DMatrix(data=X_test, label=y_test)
dtest = xgb.DMatrix(data=Xt)
watchlist = [(dvalid, 'eval'), (dtrain, 'train')]
clf = xgb.train(cpu_params, dtrain=dtrain,
num_boost_round=60,evals=watchlist,
feval=func_loss,early_stopping_rounds=10,
verbose_eval=1000)
yp = clf.predict(dvalid)
cpu_loss = multi_weighted_logloss(y_test, yp, classes, class_weights)
ysub = clf.predict(dtest)
line = 'validation loss %.4f'%cpu_loss
print(colored(line,'green'))
CPU_RUN_TIME[step] = time.time() - start
# GPU
y = train_final_gd['target'].to_array()
y = lbl.fit_transform(y)
cols = [i for i in test_final_gd.columns if i not in ['object_id','target']]
for col in cols:
train_final_gd[col] = train_final_gd[col].fillna(0).astype('float32')
for col in cols:
test_final_gd[col] = test_final_gd[col].fillna(0).astype('float32')
X = train_final_gd[cols].as_matrix()
Xt = test_final_gd[cols].as_matrix()
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.1,stratify=y, random_state=126)
# GPU
gpu_params = cpu_params.copy()
gpu_params.update({'objective': 'multi:softprob',
'tree_method': 'gpu_hist',
})
%%time
start = time.time()
dtrain = xgb.DMatrix(data=X_train, label=y_train)
dvalid = xgb.DMatrix(data=X_test, label=y_test)
dtest = xgb.DMatrix(data=Xt)
watchlist = [(dvalid, 'eval'), (dtrain, 'train')]
clf = xgb.train(gpu_params, dtrain=dtrain,
num_boost_round=60,evals=watchlist,
feval=func_loss,early_stopping_rounds=10,
verbose_eval=1000)
yp = clf.predict(dvalid)
gpu_loss = multi_weighted_logloss(y_test, yp, classes, class_weights)
ysub = clf.predict(dtest)
line = 'validation loss %.4f'%gpu_loss
print(colored(line,'green'))
GPU_RUN_TIME[step] = time.time() - start
speedup = CPU_RUN_TIME[step]/GPU_RUN_TIME[step]
line = "we achieve %.3f speedup for %s."%(speedup,step)
print(colored(line,'green'))
```
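The class-weighting and label-encoding steps above can be sketched in isolation. The class ids below are a made-up subset of the PLAsTiCC targets, and the dict-based encoder is a minimal stand-in for sklearn's `LabelEncoder` (sorted classes mapped to 0..n-1):

```python
classes = [6, 15, 42, 64]                        # hypothetical subset of targets
class_weights = {c: 1 for c in classes}
class_weights.update({c: 2 for c in [64, 15]})   # up-weight two classes

# minimal LabelEncoder stand-in: sorted distinct values -> 0..n-1
raw = [42, 6, 64, 42]
mapping = {c: i for i, c in enumerate(sorted(set(raw)))}
y = [mapping[v] for v in raw]
```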
<a id="conclusions"></a>
## 5. Conclusions
```
print("Multiclassification Loss (lower the better):")
print("CPU: %.4f GPU: %.4f"%(cpu_loss,gpu_loss))
CPU_RUN_TIME
GPU_RUN_TIME
steps = ['load data part1','ETL part1','load data part2','ETL part2','training']
GPU_RUN_TIME['Overall'] = sum([GPU_RUN_TIME[i] for i in steps])
CPU_RUN_TIME['Overall'] = sum([CPU_RUN_TIME[i] for i in steps])
steps.append('Overall')
speedup = [CPU_RUN_TIME[i]/GPU_RUN_TIME[i] for i in steps]
df = pd.DataFrame({'steps':steps, 'speedup':speedup})
df.plot.bar(x='steps', y='speedup', rot=0, figsize=(20,5), fontsize=15, title='GPU Speedup')
gpu_time = [GPU_RUN_TIME[i] for i in steps]
cpu_time = [CPU_RUN_TIME[i] for i in steps]
df = pd.DataFrame({'GPU': gpu_time,'CPU': cpu_time}, index=steps)
df.plot.bar(rot=0,figsize=(20,5), fontsize=15, title='Running time: seconds')
```
**The RAPIDS solution achieves up to a 140x speedup for ETL and a 25x end-to-end speedup over the CPU solution, with comparable accuracy.**
# Jan 31 Lecture: Java JVM, numeric data type, selection (switch), and class
I have decided to migrate all my materials to Jupyter notebooks. They will help when you
- miss a class
- fall asleep in a class
- cannot keep up with the class
- want to run the demo code
They will also save me time on
- uploading my course materials to Canvas (because I will not), and
- making my slides.
## More Resource
- [Textbook](http://www.primeuniversity.edu.bd/160517/vc/eBook/download/IntroductiontoJava.pdf) and [Textbook Solutions](https://github.com/jsquared21/Intro-to-Java-Programming)(not official)
- All the source codes of the [examples](http://www.cs.armstrong.edu/liang/intro10e/examplesource.html) in our textbook.
- [My Jupyter notebooks](https://github.com/XiangHuang-LMC/ijava-binder) for the lecture.

## More about Java and our developing tools
Now that you need to write your own code (your first assignment), let's review how your code is executed on your machine.

The first step is accomplished by the **javac** command and the second by the **java** command, as we demonstrated last time.
- **javac** your_java_file_name (e.g. `Hello.java`)
- **java** your_class_name (e.g. `Hello`, without the `.class` extension)
The process of writing your code:

Here I recommend [jGrasp](https://spider.eng.auburn.edu/user-cgi/grasp/grasp.pl?;dl=download_jgrasp.html), or you can
find it in **AJ apps** on our virtual desktop. Install the version bundled with Java; if you use macOS, you should have Java already. If you run into any trouble installing jGrasp or Java on your local machine, please let me know.

## Numeric Data types

For Scanners you get the corresponding methods to read your numeric datatypes.

## **switch** Statements
Today we are going to learn a new type of statement: the **switch** statement. A **switch** statement executes statements based on the value of a variable or an expression.



### Code demo: switch
```
int day=3;
switch (day){
case 1:
case 2:
case 3:
case 4:
case 5: System.out.println("Workday"); break;
case 0: case 6: System.out.println("Weekend");
}
```
### Code demo: [The Chinese Zodiac ](http://www.cs.armstrong.edu/liang/intro10e/html/ChineseZodiac.html)

```
import java.util.Scanner;
public class ChineseZodiac {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
System.out.print("Enter a year: ");
int year = input.nextInt();
switch (year % 12) {
case 0: System.out.println("monkey"); break;
case 1: System.out.println("rooster"); break;
case 2: System.out.println("dog"); break;
case 3: System.out.println("pig"); break;
case 4: System.out.println("rat"); break;
case 5: System.out.println("ox"); break;
case 6: System.out.println("tiger"); break;
case 7: System.out.println("rabbit"); break;
case 8: System.out.println("dragon"); break;
case 9: System.out.println("snake"); break;
case 10: System.out.println("horse"); break;
case 11: System.out.println("sheep"); break;
}
}
}
```
## Conditional Expressions
```
int x=4;
int y;
if(x>0){
y=1;
}
else{
y=-1;
}
System.out.println("y is "+ y);
```
The conditional expression is written as
**boolean-expression ? expression1 : expression2**
```
x= 4;
y = (x>0)? 1 : -1;
System.out.println("y is "+ y);
```
### Exercise:
Rewrite the following conditional expression using an **if-else** statement:
score = (x>10)? 3* scale : 4* scale;
```
if(x>10){
score = 3*scale;
}
else{
score = 4* scale;
}
```
Rewrite the following **if** statement using the conditional operator.
```java
if (age >= 16){
ticketPrice = 20;
}
else{
ticketPrice = 10;
}
```
```
int age = 19;
int ticketPrice = (age >= 16) ? 20 : 10;
if (age >= 16) {
    System.out.println("Your price is 20.");
}
else {
    System.out.println("Your price is 10.");
}
Scanner scan = new Scanner(System.in);
int a = scan.nextInt();
```
### Run multiple models on different GPUs with the same data
This is very useful when you want to compare hyper-parameters / models without fixing a random seed.
The fixed-seed approach has several drawbacks:
1) It is costly in time. <br>
2) It is costly in RAM. <br>
3) When you use data augmentation, fixing the augmentation randomness is very annoying. (I'll build a data pipeline for this when I have time to work on it...) <br>
```
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
parser = argparse.ArgumentParser()
parser.add_argument('--gpu_id', default="1,2", type = str, help = "depends on how many GPUs on your machine and which GPU you want to get")
parser.add_argument('--frame_work', default="Keras", type = str, help = "TF / Keras")
parser.add_argument('--use_data', default="mnist", type = str, help = "mnist / cifar10")
parser.add_argument('--epoch', default= 100, type = int)
parser.add_argument('--batch_size', default= 256, type = int)
FLAGS = parser.parse_args([])
print(FLAGS)
os.environ['CUDA_VISIBLE_DEVICES'] = FLAGS.gpu_id
import threading
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from keras.datasets import cifar10, cifar100
from keras.utils.np_utils import to_categorical
import keras.backend as K
from keras.models import Model, load_model, save_model
from keras.layers import Dense, Activation, Input, Conv2D, Flatten, GlobalAveragePooling2D, MaxPooling2D
from keras.optimizers import SGD
# Modified from source: https://stackoverflow.com/questions/46712272/run-hyperparameter-optimization-on-parallel-gpus-using-tensorflow
# The original version IS NOT CORRECT: you have to put `with tf.device(...)` inside the graph
## Define a callback for Keras model
from keras.callbacks import Callback
class K_Logger(Callback):
def __init__(self, n, gpu_id = 0):
self.n = n # print loss & acc every n epochs
self.gpu_id = gpu_id
def on_epoch_end(self, epoch, logs={}):
if epoch % self.n == 0:
# add what you need here
train_loss = logs.get('loss')
train_acc = logs.get('acc')
valid_loss = logs.get('val_loss')
valid_acc = logs.get('val_acc')
print("GPU_ID: %s, epoch: %4d, loss: %0.5f, acc: %0.3f, val_loss: %0.5f, val_acc: %0.3f" \
% (self.gpu_id, epoch,
train_loss, train_acc,
valid_loss, valid_acc))
class TF_Logger():
def __init__(self, n, gpu_id = 0):
self.n = n
self.gpu_id = gpu_id
# append what you need here
self.history = {'loss': [],
'acc': [],
'val_loss': [],
'val_acc': []}
def update(self, epoch, loss, acc, val_loss, val_acc):
self.history['loss'].append(loss)
self.history['acc'].append(acc)
self.history['val_loss'].append(val_loss)
self.history['val_acc'].append(val_acc)
if epoch % self.n == 0:
print("GPU_ID: %s, epoch: %4d, loss: %0.5f, acc: %0.3f, val_loss: %0.5f, val_acc: %0.3f" \
% (self.gpu_id, epoch,
loss, acc,
val_loss, val_acc))
```
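`TF_Logger` is plain Python and can be exercised without TensorFlow. Below is a pared-down version (the loss/accuracy numbers fed in are dummies):

```python
class MiniLogger:
    # same shape as TF_Logger above, printing every n-th epoch
    def __init__(self, n, gpu_id=0):
        self.n = n
        self.gpu_id = gpu_id
        self.history = {'loss': [], 'acc': [], 'val_loss': [], 'val_acc': []}

    def update(self, epoch, loss, acc, val_loss, val_acc):
        self.history['loss'].append(loss)
        self.history['acc'].append(acc)
        self.history['val_loss'].append(val_loss)
        self.history['val_acc'].append(val_acc)
        if epoch % self.n == 0:
            print("GPU_ID: %s, epoch: %4d, loss: %0.5f"
                  % (self.gpu_id, epoch, loss))

logger = MiniLogger(n=5, gpu_id='/device:GPU:0')
for epoch in range(10):
    logger.update(epoch, loss=1.0 / (epoch + 1), acc=0.5,
                  val_loss=1.0, val_acc=0.5)
```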
### Check GPU ID get
```
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
# list them, no matter which GPU you pick up, the order must start from 0 (e.g. CUDA_VISIBLE_DEVICES=7 ==> /device:GPU:0)
get_available_gpus()
# Get the data
use_data = FLAGS.use_data
if use_data == 'mnist':
dset = input_data.read_data_sets("data/mnist", one_hot=True, reshape=False)
train_x_all = dset.train.images
train_y_all = dset.train.labels
test_x = dset.test.images
test_y = dset.test.labels
elif use_data == 'cifar10':
dtrain, dtest = cifar10.load_data()
train_x_all, train_y_all = dtrain
test_x, test_y = dtest
train_y_all = to_categorical(train_y_all, num_classes=10)
test_y = to_categorical(test_y, num_classes=10)
print(train_x_all.shape)
print(train_y_all.shape)
print(test_x.shape)
print(test_y.shape)
def build_keras_model(input_shape, n_classes):
# Example for a simple model
x_in = Input(shape = input_shape)
x = Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same')(x_in)
x = MaxPooling2D(pool_size=(2,2))(x)
x = Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2,2))(x)
x = Flatten()(x)
y = Dense(units=n_classes, activation='softmax')(x)
model = Model(inputs=[x_in], outputs=[y])
return model
def build_tf_model(input_shape, n_classes):
imd1, imd2, imd3 = input_shape
x = tf.placeholder(tf.float32, [None, imd1, imd2, imd3], name='x')
y = tf.placeholder(tf.float32, [None, n_classes], name='y')
x = tf.layers.conv2d(inputs=x, filters=32, kernel_size=(3,3), padding='same')
x = tf.nn.relu(x)
x = tf.layers.max_pooling2d(inputs=x, pool_size=(2,2), strides=(1,1))
x = tf.layers.conv2d(inputs=x, filters=64, kernel_size=(3,3), padding='same')
x = tf.nn.relu(x)
x = tf.layers.max_pooling2d(inputs=x, pool_size=(2,2), strides=(1,1))
x = tf.layers.flatten(inputs=x)
pred = tf.layers.dense(inputs=x, units=n_classes)
pred = tf.nn.softmax(pred)
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1),
name='cost')
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred, 1),
tf.argmax(y, 1)),
tf.float32), name = 'accuracy')
return cost
# Define the graphs per device
learning_rates = [0.01, 0.001]
jobs = []
devices = ['/device:GPU:0', '/device:GPU:1'] # depends on which GPUs you want to put them in
# Note, the optimization part was put at different place for Tensorflow and Keras version.
# For tensorflow, optimize should be placed inside tf.device (or it seems gpus not working correctly)
# For Keras, optim and compile should be placed outside tf.device (or it will throw gradient collection error)
if FLAGS.frame_work == "TF":
for device, learning_rate in zip(devices, learning_rates):
with tf.Graph().as_default() as graph:
with tf.device(device):
cost = build_tf_model(input_shape = train_x_all.shape[1:], n_classes=train_y_all.shape[1] )
optimize = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost, name='optimize')
jobs.append(graph)
elif FLAGS.frame_work == "Keras":
for learning_rate, device in zip(learning_rates, devices):
with tf.Graph().as_default() as graph:
with tf.device(device):
model = build_keras_model(input_shape=(train_x_all.shape[1:]), n_classes=train_y_all.shape[1])
# to cpu
optim = SGD(lr=learning_rate)
model.compile(loss = 'categorical_crossentropy', metrics= ['acc'], optimizer=optim)
jobs.append([graph, model])
print(jobs)
# Train a graph on a device
n_epoch = FLAGS.epoch
batch_size = FLAGS.batch_size
if FLAGS.frame_work == "TF":
def train(device, graph):
graph.history = TF_Logger(n = 5, gpu_id = device)
print("Start training on %s" % device)
with tf.Session(graph=graph) as session:
total_batch = int(train_x_all.shape[0] / batch_size)
total_val_step = int(test_x.shape[0] / batch_size)
x = graph.get_tensor_by_name('x:0')
y = graph.get_tensor_by_name('y:0')
cost_op = graph.get_tensor_by_name('cost:0')
accuracy_op = graph.get_tensor_by_name('accuracy:0')
optimize_op = graph.get_operation_by_name('optimize')
session.run(tf.global_variables_initializer())
print("Start Running")
for epoch in range(n_epoch):
epoch_loss, epoch_acc, epoch_val_loss, epoch_val_acc = [], [], [], []
for i in range(total_batch):
batch_x = train_x_all[i * batch_size:(i + 1) * batch_size]
batch_y = train_y_all[i * batch_size:(i + 1) * batch_size]
_, loss, acc = session.run([optimize_op, cost_op, accuracy_op],
feed_dict={x: batch_x, y: batch_y})
epoch_loss.append(loss)
epoch_acc.append(acc)
epoch_loss = np.mean(epoch_loss)
epoch_acc = np.mean(epoch_acc)
for i in range(total_val_step):
batch_x = test_x[i * batch_size:(i + 1) * batch_size]
batch_y = test_y[i * batch_size:(i + 1) * batch_size]
val_loss, val_accuracy = session.run([cost_op, accuracy_op],
feed_dict={x: batch_x, y: batch_y})
epoch_val_loss.append(val_loss)
epoch_val_acc.append(val_accuracy)
epoch_val_loss = np.mean(epoch_val_loss)
epoch_val_acc = np.mean(epoch_val_acc)
# Link it to logger
graph.history.update(epoch, epoch_loss, epoch_acc, epoch_val_loss, epoch_val_acc)
elif FLAGS.frame_work == "Keras":
def train(device, graph, model):
print("Start training on %s" % device)
logger = K_Logger(n = 5, gpu_id = device)
with tf.Session(graph=graph) as session:
K.set_session(session=session)
model.fit(x = train_x_all,
y = train_y_all,
batch_size=batch_size,
epochs=n_epoch,
verbose=0,
validation_data=(test_x, test_y), callbacks=[logger])
# Start threads in parallel
train_threads = []
for i, item in enumerate(jobs):
    if FLAGS.frame_work == "TF":
this_graph = item
train_threads.append(threading.Thread(target=train, args=(devices[i], this_graph)))
    elif FLAGS.frame_work == "Keras":
this_graph = item[0]
this_model = item[1]
train_threads.append(threading.Thread(target=train, args=(devices[i], this_graph, this_model)))
for t in train_threads:
t.start()
for t in train_threads:
t.join()
```
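Stripped of the TensorFlow specifics, the threading pattern above looks like this. The device strings and the `lr * 2` "training job" are placeholders standing in for the real per-GPU work:

```python
import threading

results = {}

def train(device, lr):
    # stand-in for a real per-GPU training job
    results[device] = lr * 2

devices = ['/device:GPU:0', '/device:GPU:1']
learning_rates = [0.01, 0.001]
threads = [threading.Thread(target=train, args=(d, lr))
           for d, lr in zip(devices, learning_rates)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # block until both jobs finish
```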
## Get the result and plot it
```
if FLAGS.frame_work is "TF":
history_model1 = jobs[0].history.history
history_model2 = jobs[1].history.history
elif FLAGS.frame_work is "Keras":
history_model1 = jobs[0][1].history.history
history_model2 = jobs[1][1].history.history
plt.figure(figsize=(12,8))
plt.subplot(2,2,1)
plt.plot(np.arange(len(history_model1['acc'])), history_model1['acc'], 'b-', label = 'model1')
plt.plot(np.arange(len(history_model2['acc'])), history_model2['acc'], 'r-', label = 'model2')
plt.legend()
plt.title("Training Accuracy")
plt.subplot(2,2,2)
plt.plot(np.arange(len(history_model1['val_acc'])), history_model1['val_acc'], 'b-', label = 'model1')
plt.plot(np.arange(len(history_model2['val_acc'])), history_model2['val_acc'], 'r-', label = 'model2')
plt.legend()
plt.title("Validation Accuracy")
plt.subplot(2,2,3)
plt.plot(np.arange(len(history_model1['loss'])), history_model1['loss'], 'b-', label = 'model1')
plt.plot(np.arange(len(history_model2['loss'])), history_model2['loss'], 'r-', label = 'model2')
plt.legend()
plt.title("Training Loss")
plt.subplot(2,2,4)
plt.plot(np.arange(len(history_model1['val_loss'])), history_model1['val_loss'], 'b-', label = 'model1')
plt.plot(np.arange(len(history_model2['val_loss'])), history_model2['val_loss'], 'r-', label = 'model2')
plt.legend()
plt.title("Validation Loss")
plt.show()
```
# Azure Machine Learning Configuration
## Setup
This notebook configures the notebooks in this tutorial to connect to an Azure Machine Learning (Azure ML) Workspace. You can use an existing workspace or create a new one.
```
import azureml.core
from azureml.core import Workspace
from dotenv import set_key, get_key, find_dotenv
from pathlib import Path
from utilities import get_auth
```
## Azure ML SDK and other library installation
If you have already completed the prerequisites and selected the correct Kernel for this notebook, the AML Python SDK is already installed. Let's check the AML SDK version.
```
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
## Configure your Azure ML workspace
### Workspace parameters
To use an AML Workspace, you will need the following information:
* Your subscription id
* A resource group name
* The region that will host your workspace
* A name for your workspace
Replace the values in the cell below with your information.
```
subscription_id = "1db0a5ce-7de1-4082-8e25-3c5a4e5a9a98"
resource_group = "ProjektAzure"
workspace_name = "ProjektAzure"
workspace_region = "East US"
env_path = find_dotenv()
if env_path == "":
Path(".env").touch()
env_path = find_dotenv()
set_key(env_path, "subscription_id", subscription_id)
set_key(env_path, "resource_group", resource_group)
set_key(env_path, "workspace_name", workspace_name)
set_key(env_path, "workspace_region", workspace_region)
```
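python-dotenv's `set_key` essentially upserts `KEY="value"` lines in the `.env` file. Here is a hand-rolled sketch of that behavior; the workspace names below are placeholders, not real Azure resources, and `set_key_sketch` is an illustrative stand-in, not the library function:

```python
import os
import tempfile

def set_key_sketch(path, key, value):
    # append KEY="value", replacing any existing line for the same key
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = [l for l in f if not l.startswith(key + '=')]
    lines.append('%s="%s"\n' % (key, value))
    with open(path, 'w') as f:
        f.writelines(lines)

env_path = os.path.join(tempfile.mkdtemp(), '.env')
set_key_sketch(env_path, 'workspace_name', 'my-workspace')
set_key_sketch(env_path, 'workspace_region', 'eastus')
set_key_sketch(env_path, 'workspace_name', 'my-workspace-2')  # overwrite

with open(env_path) as f:
    content = f.read()
```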
### Create the workspace
**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.
This will fail if:
* You do not have permission to create a workspace in the resource group
* You do not have permission to create the resource group if it does not already exist.
* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription
If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.
```
# Create the workspace using the specified parameters
ws = Workspace.create(
name=workspace_name,
subscription_id=subscription_id,
resource_group=resource_group,
location=workspace_region,
create_resource_group=True,
auth=get_auth(env_path),
exist_ok=True,
)
# write the details of the workspace to a configuration file
ws.write_config()
```
Below we will reload the workspace just to make sure that everything is working.
```
# load workspace configuration
ws = Workspace.from_config(auth=get_auth(env_path))
ws.get_details()
```
You can now move on to the next notebook to [prepare the training script for Mask R-CNN model](01_PrepareTrainingScript.ipynb).
# Isolation Forest outlier detection on KDD Cup '99 dataset
## Method
[Isolation forests](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf) (IF) are tree based models specifically used for outlier detection. The IF isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. The number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of random trees, is a measure of normality and is used to define an anomaly score. Outliers can typically be isolated quicker, leading to shorter paths.
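The isolate-by-random-splits intuition can be demonstrated on one-dimensional toy data: an extreme point is separated in far fewer random splits than an inlier. The sketch below builds single-feature isolation trees by hand (the data, tree count, and depth limit are illustrative choices, not the alibi-detect implementation):

```python
import math
import random

rng = random.Random(0)

def avg_unsuccessful_search(n):
    # c(n): average BST unsuccessful-search path length, used to normalise depths
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

def path_length(x, data, depth=0, limit=50):
    # grow one random isolation tree and follow x down it
    if depth >= limit or len(data) <= 1 or min(data) == max(data):
        return depth + avg_unsuccessful_search(len(data))
    split = rng.uniform(min(data), max(data))
    side = [v for v in data if (v < split) == (x < split)]  # keep x's side
    return path_length(x, side, depth + 1, limit)

data = [0.0, 0.05, 0.1, 0.15, 0.2, 10.0]   # 10.0 is the obvious outlier
def avg_path(x, trees=200):
    return sum(path_length(x, data) for _ in range(trees)) / trees

outlier_depth, inlier_depth = avg_path(10.0), avg_path(0.1)
```

The outlier is almost always isolated by the very first split, so its average path length stays near 1, while the clustered inliers require several more splits.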
## Dataset
The outlier detector needs to detect computer network intrusions using TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. A connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows to and from a source IP address to a target IP address under some well defined protocol. Each connection is labeled as either normal, or as an attack.
There are 4 types of attacks in the dataset:
- DOS: denial-of-service, e.g. syn flood;
- R2L: unauthorized access from a remote machine, e.g. guessing password;
- U2R: unauthorized access to local superuser (root) privileges;
- probing: surveillance and other probing, e.g., port scanning.
The dataset contains about 5 million connection records.
There are 3 types of features:
- basic features of individual connections, e.g. duration of connection
- content features within a connection, e.g. number of failed log in attempts
- traffic features within a 2 second window, e.g. number of connections to the same host as the current connection
This notebook requires the `seaborn` package for visualization which can be installed via `pip`:
```
!pip install seaborn
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, f1_score
from alibi_detect.od import IForest
from alibi_detect.datasets import fetch_kdd
from alibi_detect.utils.data import create_outlier_batch
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_roc
```
## Load dataset
We keep only the continuous features (18 of the 41).
```
kddcup = fetch_kdd(percent10=True) # only load 10% of the dataset
print(kddcup.data.shape, kddcup.target.shape)
```
Assume that a model is trained on *normal* instances of the dataset (not outliers) and standardization is applied:
```
np.random.seed(0)
normal_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=400000, perc_outlier=0)
X_train, y_train = normal_batch.data.astype('float'), normal_batch.target
print(X_train.shape, y_train.shape)
print('{}% outliers'.format(100 * y_train.mean()))
mean, stdev = X_train.mean(axis=0), X_train.std(axis=0)
```
Apply standardization:
```
X_train = (X_train - mean) / stdev
```
## Load or define outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can use the built-in `fetch_detector` function, which saves the pre-trained models in a local directory `filepath` and loads the detector. Alternatively, you can train a detector from scratch:
```
load_outlier_detector = True
filepath = 'my_path' # change to directory where model is downloaded
detector_type = 'outlier'
dataset = 'kddcup'
detector_name = 'IForest'
filepath = os.path.join(filepath, detector_name)
if load_outlier_detector: # load pretrained outlier detector
od = fetch_detector(filepath, detector_type, dataset, detector_name)
else: # define model, initialize, train and save outlier detector
# initialize outlier detector
od = IForest(threshold=None, # threshold for outlier score
n_estimators=100)
# train
od.fit(X_train)
# save the trained outlier detector
save_detector(od, filepath)
```
The warning tells us we still need to set the outlier threshold. This can be done with the `infer_threshold` method. We need to pass a batch of instances and specify what percentage of those we consider to be normal via `threshold_perc`. Let's assume we have some data which we know contains around 5% outliers. The percentage of outliers can be set with `perc_outlier` in the `create_outlier_batch` function.
```
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target
X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
od.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)
print('New threshold: {}'.format(od.threshold))
```
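Conceptually, `infer_threshold` sets the threshold at the `(100 - perc_outlier)`-th percentile of the batch's instance-level outlier scores (this is an assumption about its internals). A minimal standalone sketch with simulated scores:

```python
import numpy as np

# Simulated instance-level outlier scores for a batch assumed to contain ~5% outliers
rng = np.random.RandomState(0)
scores = np.concatenate([rng.normal(0.00, 0.02, 950),   # normal instances
                         rng.normal(0.15, 0.02, 50)])   # outliers score higher

perc_outlier = 5
threshold = np.percentile(scores, 100 - perc_outlier)   # 95th percentile
flagged = (scores > threshold).sum()
print(flagged)  # → 50, i.e. ~5% of the 1000 instances
```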
Let's save the outlier detector with updated threshold:
```
save_detector(od, filepath)
```
## Detect outliers
We now generate a batch of data with 10% outliers and detect the outliers in the batch.
```
np.random.seed(1)
outlier_batch = create_outlier_batch(kddcup.data, kddcup.target, n_samples=1000, perc_outlier=10)
X_outlier, y_outlier = outlier_batch.data.astype('float'), outlier_batch.target
X_outlier = (X_outlier - mean) / stdev
print(X_outlier.shape, y_outlier.shape)
print('{}% outliers'.format(100 * y_outlier.mean()))
```
Predict outliers:
```
od_preds = od.predict(X_outlier, return_instance_score=True)
```
## Display results
F1 score and confusion matrix:
```
labels = outlier_batch.target_names
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
print('F1 score: {:.4f}'.format(f1))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
```
Plot instance level outlier scores vs. the outlier threshold:
```
plot_instance_score(od_preds, y_outlier, labels, od.threshold)
```
We can see that the isolation forest does not do a good job of detecting one type of outlier, whose outlier scores sit around 0. This makes it hard to infer a good threshold without explicit knowledge about the outliers. Setting the threshold just below 0 would lead to significantly better detector performance for the outliers in the dataset. This is also reflected by the ROC curve:
```
roc_data = {'IF': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
```
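As a standalone illustration of that re-thresholding point, scores can be re-thresholded after prediction without refitting the detector. The numbers below are simulated and hypothetical, not the detector's actual output:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
# Simulated scores: normals near -0.1, one outlier cluster near 0, another well above
scores = np.concatenate([rng.normal(-0.10, 0.02, 900),
                         rng.normal(0.00, 0.01, 50),
                         rng.normal(0.20, 0.02, 50)])
y_true = np.concatenate([np.zeros(900), np.ones(100)]).astype(int)

f1_high = f1_score(y_true, (scores > 0.05).astype(int))   # misses the cluster near 0
f1_low = f1_score(y_true, (scores > -0.05).astype(int))   # catches both clusters
print(f1_low > f1_high)
```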
<a href="https://colab.research.google.com/github/AI4Finance-Foundation/FinRL/blob/master/FinRL_PortfolioAllocation_NeurIPS_2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Deep Reinforcement Learning for Stock Trading from Scratch: Portfolio Allocation
Tutorials to use OpenAI DRL to perform portfolio allocation in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop
* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.
* Check out the Medium blog for detailed explanations:
* Please report any issues to our Github: https://github.com/AI4Finance-Foundation/FinRL/issues
* **Pytorch Version**
# Content
* [1. Problem Definition](#0)
* [2. Getting Started - Load Python packages](#1)
* [2.1. Install Packages](#1.1)
* [2.2. Check Additional Packages](#1.2)
* [2.3. Import Packages](#1.3)
* [2.4. Create Folders](#1.4)
* [3. Download Data](#2)
* [4. Preprocess Data](#3)
* [4.1. Technical Indicators](#3.1)
* [4.2. Perform Feature Engineering](#3.2)
* [5.Build Environment](#4)
* [5.1. Training & Trade Data Split](#4.1)
* [5.2. User-defined Environment](#4.2)
* [5.3. Initialize Environment](#4.3)
* [6.Implement DRL Algorithms](#5)
* [7.Backtesting Performance](#6)
* [7.1. BackTestStats](#6.1)
* [7.2. BackTestPlot](#6.2)
* [7.3. Baseline Stats](#6.3)
        * [7.4. Compare to Stock Market Index](#6.4)
<a id='0'></a>
# Part 1. Problem Definition
This problem is to design an automated trading solution for portfolio allocation. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.
The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A represents the weight of a stock in the portfolio: a ∈ (-1,1). Assuming our stock pool includes N stocks, we can use a list [a<sub>1</sub>, a<sub>2</sub>, ... , a<sub>N</sub>] to determine the weight of each stock in the portfolio, where a<sub>i</sub> ∈ (-1,1) and a<sub>1</sub>+a<sub>2</sub>+...+a<sub>N</sub>=1. For example, "the weight of AAPL in the portfolio is 10%" corresponds to [0.1, ...].
* Reward function: r(s, a, s′) is the incentive mechanism for the agent to learn a better action. Here it is the change of the portfolio value when action a is taken at state s, arriving at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at states s′ and s, respectively.
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader analyzes various information before executing a trade, our trading agent observes many different features to learn better in an interactive environment.
* Environment: Dow 30 constituents
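The raw actions produced by the agent must be mapped to valid portfolio weights (non-negative, summing to 1); the environment defined later in this notebook does this with a softmax. A minimal sketch (the max-subtraction for numerical stability is an addition here):

```python
import numpy as np

def softmax_weights(actions):
    # exponentiate and normalize so weights are positive and sum to 1
    e = np.exp(actions - np.max(actions))  # subtract max for numerical stability
    return e / e.sum()

raw_actions = np.array([0.5, -1.0, 2.0, 0.0])
weights = softmax_weights(raw_actions)
print(np.isclose(weights.sum(), 1.0), (weights > 0).all())
```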
The data used in this case study is obtained from the Yahoo Finance API. It contains open-high-low-close prices and volume.
<a id='1'></a>
# Part 2. Getting Started - Load Python Packages
<a id='1.1'></a>
## 2.1. Install all the packages through FinRL library
```
## install finrl library
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
```
<a id='1.2'></a>
## 2.2. Check that the additional packages needed are present; if not, install them.
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
<a id='1.3'></a>
## 2.3. Import Packages
```
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
%matplotlib inline
import datetime
from finrl.apps import config
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline,convert_daily_return_to_pyfolio_ts
from finrl.finrl_meta.data_processor import DataProcessor
from finrl.finrl_meta.data_processors.processor_yahoofinance import YahooFinanceProcessor
import sys
sys.path.append("../FinRL-Library")
```
<a id='1.4'></a>
## 2.4. Create Folders
```
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
```
<a id='2'></a>
# Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
```
print(config.DOW_30_TICKER)
dp = YahooFinanceProcessor()
df = dp.download_data(start_date = '2008-01-01',
end_date = '2021-10-31',
ticker_list = config.DOW_30_TICKER, time_interval='1D')
df.head()
df.shape
```
# Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various kinds of information need to be taken into account, for example historical stock prices, current holdings, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk aversion reflects whether an investor chooses to preserve capital. It also influences one's trading strategy when facing different market volatility levels. To control risk in a worst-case scenario, such as the 2007–2008 financial crisis, FinRL employs the financial turbulence index, which measures extreme asset price fluctuation.
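As a rough sketch of what the MACD indicator computes (the standard definition; `FeatureEngineer` below uses the `stockstats` implementation, which may differ in details):

```python
import numpy as np
import pandas as pd

# Synthetic close-price series for illustration
close = pd.Series(100 + np.cumsum(np.random.RandomState(0).normal(0, 1, 300)))

# MACD: difference of a fast and a slow exponential moving average
ema_fast = close.ewm(span=12, adjust=False).mean()
ema_slow = close.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow
signal = macd.ewm(span=9, adjust=False).mean()  # signal line

print(macd.tail(3).round(3))
```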
```
fe = FeatureEngineer(
use_technical_indicator=True,
use_turbulence=False,
user_defined_feature = False)
df = fe.preprocess_data(df)
df.shape
df.head()
```
## Add covariance matrix as states
```
# add covariance matrix as states
df=df.sort_values(['date','tic'],ignore_index=True)
df.index = df.date.factorize()[0]
cov_list = []
return_list = []
# look back is one year
lookback=252
for i in range(lookback,len(df.index.unique())):
data_lookback = df.loc[i-lookback:i,:]
price_lookback=data_lookback.pivot_table(index = 'date',columns = 'tic', values = 'close')
return_lookback = price_lookback.pct_change().dropna()
return_list.append(return_lookback)
covs = return_lookback.cov().values
cov_list.append(covs)
df_cov = pd.DataFrame({'date':df.date.unique()[lookback:],'cov_list':cov_list,'return_list':return_list})
df = df.merge(df_cov, on='date')
df = df.sort_values(['date','tic']).reset_index(drop=True)
df.shape
df.head()
```
<a id='4'></a>
# Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.
Our trading environments, based on OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.
## Training data split: 2009-01-01 to 2020-07-01
```
train = data_split(df, '2009-01-01','2020-07-01')
#trade = data_split(df, '2020-01-01', config.END_DATE)
train.head()
```
## Environment for Portfolio Allocation
```
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from stable_baselines3.common.vec_env import DummyVecEnv
class StockPortfolioEnv(gym.Env):
"""A single stock trading environment for OpenAI gym
Attributes
----------
df: DataFrame
input data
stock_dim : int
number of unique stocks
hmax : int
maximum number of shares to trade
initial_amount : int
start money
transaction_cost_pct: float
transaction cost percentage per trade
reward_scaling: float
scaling factor for reward, good for training
state_space: int
the dimension of input features
action_space: int
equals stock dimension
tech_indicator_list: list
a list of technical indicator names
turbulence_threshold: int
a threshold to control risk aversion
day: int
an increment number to control date
Methods
-------
_sell_stock()
perform sell action based on the sign of the action
_buy_stock()
perform buy action based on the sign of the action
step()
at each step the agent will return actions, then
we will calculate the reward, and return the next observation.
reset()
reset the environment
render()
use render to return other functions
save_asset_memory()
return account value at each time step
save_action_memory()
return actions/positions at each time step
"""
metadata = {'render.modes': ['human']}
def __init__(self,
df,
stock_dim,
hmax,
initial_amount,
transaction_cost_pct,
reward_scaling,
state_space,
action_space,
tech_indicator_list,
turbulence_threshold=None,
lookback=252,
day = 0):
#super(StockEnv, self).__init__()
#money = 10 , scope = 1
self.day = day
self.lookback=lookback
self.df = df
self.stock_dim = stock_dim
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.state_space = state_space
self.action_space = action_space
self.tech_indicator_list = tech_indicator_list
# action_space normalization and shape is self.stock_dim
self.action_space = spaces.Box(low = 0, high = 1,shape = (self.action_space,))
# Shape = (34, 30)
# covariance matrix + technical indicators
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space+len(self.tech_indicator_list),self.state_space))
# load data from a pandas dataframe
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.terminal = False
self.turbulence_threshold = turbulence_threshold
        # initialize state: initial portfolio return + individual stock return + individual weights
self.portfolio_value = self.initial_amount
# memorize portfolio value each step
self.asset_memory = [self.initial_amount]
# memorize portfolio return each step
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
def step(self, actions):
# print(self.day)
self.terminal = self.day >= len(self.df.index.unique())-1
# print(actions)
if self.terminal:
df = pd.DataFrame(self.portfolio_return_memory)
df.columns = ['daily_return']
plt.plot(df.daily_return.cumsum(),'r')
plt.savefig('results/cumulative_reward.png')
plt.close()
plt.plot(self.portfolio_return_memory,'r')
plt.savefig('results/rewards.png')
plt.close()
print("=================================")
print("begin_total_asset:{}".format(self.asset_memory[0]))
print("end_total_asset:{}".format(self.portfolio_value))
df_daily_return = pd.DataFrame(self.portfolio_return_memory)
df_daily_return.columns = ['daily_return']
if df_daily_return['daily_return'].std() !=0:
sharpe = (252**0.5)*df_daily_return['daily_return'].mean()/ \
df_daily_return['daily_return'].std()
print("Sharpe: ",sharpe)
print("=================================")
return self.state, self.reward, self.terminal,{}
else:
#print("Model actions: ",actions)
# actions are the portfolio weight
# normalize to sum of 1
#if (np.array(actions) - np.array(actions).min()).sum() != 0:
# norm_actions = (np.array(actions) - np.array(actions).min()) / (np.array(actions) - np.array(actions).min()).sum()
#else:
# norm_actions = actions
weights = self.softmax_normalization(actions)
#print("Normalized actions: ", weights)
self.actions_memory.append(weights)
last_day_memory = self.data
#load next state
self.day += 1
self.data = self.df.loc[self.day,:]
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
#print(self.state)
            # calculate portfolio return
# individual stocks' return * weight
portfolio_return = sum(((self.data.close.values / last_day_memory.close.values)-1)*weights)
# update portfolio value
new_portfolio_value = self.portfolio_value*(1+portfolio_return)
self.portfolio_value = new_portfolio_value
# save into memory
self.portfolio_return_memory.append(portfolio_return)
self.date_memory.append(self.data.date.unique()[0])
self.asset_memory.append(new_portfolio_value)
            # the reward is the new portfolio value or end portfolio value
self.reward = new_portfolio_value
#print("Step reward: ", self.reward)
#self.reward = self.reward*self.reward_scaling
return self.state, self.reward, self.terminal, {}
def reset(self):
self.asset_memory = [self.initial_amount]
self.day = 0
self.data = self.df.loc[self.day,:]
# load states
self.covs = self.data['cov_list'].values[0]
self.state = np.append(np.array(self.covs), [self.data[tech].values.tolist() for tech in self.tech_indicator_list ], axis=0)
self.portfolio_value = self.initial_amount
#self.cost = 0
#self.trades = 0
self.terminal = False
self.portfolio_return_memory = [0]
self.actions_memory=[[1/self.stock_dim]*self.stock_dim]
self.date_memory=[self.data.date.unique()[0]]
return self.state
def render(self, mode='human'):
return self.state
def softmax_normalization(self, actions):
numerator = np.exp(actions)
denominator = np.sum(np.exp(actions))
softmax_output = numerator/denominator
return softmax_output
def save_asset_memory(self):
date_list = self.date_memory
portfolio_return = self.portfolio_return_memory
#print(len(date_list))
#print(len(asset_list))
df_account_value = pd.DataFrame({'date':date_list,'daily_return':portfolio_return})
return df_account_value
def save_action_memory(self):
# date and close price length must match actions length
date_list = self.date_memory
df_date = pd.DataFrame(date_list)
df_date.columns = ['date']
action_list = self.actions_memory
df_actions = pd.DataFrame(action_list)
df_actions.columns = self.data.tic.values
df_actions.index = df_date.date
#df_actions = pd.DataFrame({'date':date_list,'actions':action_list})
return df_actions
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
obs = e.reset()
return e, obs
stock_dimension = len(train.tic.unique())
state_space = stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"transaction_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockPortfolioEnv(df = train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))
```
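The Sharpe ratio printed at the end of an episode in `step()` annualizes the mean and standard deviation of daily returns by √252 (the number of trading days in a year); as a standalone sketch:

```python
import numpy as np

# One simulated year of daily returns (hypothetical numbers, for illustration only)
daily_returns = np.random.RandomState(0).normal(0.0005, 0.01, 252)
sharpe = np.sqrt(252) * daily_returns.mean() / daily_returns.std()
print(round(sharpe, 3))
```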
<a id='5'></a>
# Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with major structural refactoring and code cleanups.
* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG,
Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to
design their own DRL algorithms by adapting these DRL algorithms.
```
# initialize
agent = DRLAgent(env = env_train)
```
### Model 1: **A2C**
```
agent = DRLAgent(env = env_train)
A2C_PARAMS = {"n_steps": 5, "ent_coef": 0.005, "learning_rate": 0.0002}
model_a2c = agent.get_model(model_name="a2c",model_kwargs = A2C_PARAMS)
trained_a2c = agent.train_model(model=model_a2c,
tb_log_name='a2c',
total_timesteps=50000)
trained_a2c.save('/content/trained_models/trained_a2c.zip')
```
### Model 2: **PPO**
```
agent = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.005,
"learning_rate": 0.0001,
"batch_size": 128,
}
model_ppo = agent.get_model("ppo",model_kwargs = PPO_PARAMS)
trained_ppo = agent.train_model(model=model_ppo,
tb_log_name='ppo',
total_timesteps=80000)
trained_ppo.save('/content/trained_models/trained_ppo.zip')
```
### Model 3: **DDPG**
```
agent = DRLAgent(env = env_train)
DDPG_PARAMS = {"batch_size": 128, "buffer_size": 50000, "learning_rate": 0.001}
model_ddpg = agent.get_model("ddpg",model_kwargs = DDPG_PARAMS)
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name='ddpg',
total_timesteps=50000)
trained_ddpg.save('/content/trained_models/trained_ddpg.zip')
```
### Model 4: **SAC**
```
agent = DRLAgent(env = env_train)
SAC_PARAMS = {
"batch_size": 128,
"buffer_size": 100000,
"learning_rate": 0.0003,
"learning_starts": 100,
"ent_coef": "auto_0.1",
}
model_sac = agent.get_model("sac",model_kwargs = SAC_PARAMS)
trained_sac = agent.train_model(model=model_sac,
tb_log_name='sac',
total_timesteps=50000)
trained_sac.save('/content/trained_models/trained_sac.zip')
```
### Model 5: **TD3**
```
agent = DRLAgent(env = env_train)
TD3_PARAMS = {"batch_size": 100,
"buffer_size": 1000000,
"learning_rate": 0.001}
model_td3 = agent.get_model("td3",model_kwargs = TD3_PARAMS)
trained_td3 = agent.train_model(model=model_td3,
tb_log_name='td3',
total_timesteps=30000)
trained_td3.save('/content/trained_models/trained_td3.zip')
```
## Trading
Assume that we have $1,000,000 of initial capital at 2020-07-01. We use the trained A2C model to trade the Dow Jones 30 stocks.
```
trade = data_split(df,'2020-07-01', '2021-10-31')
e_trade_gym = StockPortfolioEnv(df = trade, **env_kwargs)
trade.shape
df_daily_return, df_actions = DRLAgent.DRL_prediction(model=trained_a2c,
environment = e_trade_gym)
df_daily_return.head()
df_daily_return.to_csv('df_daily_return.csv')
df_actions.head()
df_actions.to_csv('df_actions.csv')
```
<a id='6'></a>
# Part 7: Backtest Our Strategy
Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive picture of the performance of a trading strategy.
<a id='6.1'></a>
## 7.1 BackTestStats
Pass in `df_account_value`; this information is stored in the env class.
```
from pyfolio import timeseries
DRL_strat = convert_daily_return_to_pyfolio_ts(df_daily_return)
perf_func = timeseries.perf_stats
perf_stats_all = perf_func( returns=DRL_strat,
factor_returns=DRL_strat,
positions=None, transactions=None, turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all
#baseline stats
print("==============Get Baseline Stats===========")
baseline_df = get_baseline(
ticker="^DJI",
start = df_daily_return.loc[0,'date'],
end = df_daily_return.loc[len(df_daily_return)-1,'date'])
stats = backtest_stats(baseline_df, value_col_name = 'close')
```
<a id='6.2'></a>
## 7.2 BackTestPlot
```
import pyfolio
%matplotlib inline
baseline_df = get_baseline(
ticker='^DJI', start=df_daily_return.loc[0,'date'], end='2021-07-01'
)
baseline_returns = get_daily_return(baseline_df, value_col_name="close")
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = DRL_strat,
benchmark_rets=baseline_returns, set_context=False)
```
## Min-Variance Portfolio Allocation
```
!pip install PyPortfolioOpt
from pypfopt.efficient_frontier import EfficientFrontier
from pypfopt import risk_models
unique_tic = trade.tic.unique()
unique_trade_date = trade.date.unique()
df.head()
#calculate_portfolio_minimum_variance
portfolio = pd.DataFrame(index = range(1), columns = unique_trade_date)
initial_capital = 1000000
portfolio.loc[0,unique_trade_date[0]] = initial_capital
for i in range(len( unique_trade_date)-1):
df_temp = df[df.date==unique_trade_date[i]].reset_index(drop=True)
df_temp_next = df[df.date==unique_trade_date[i+1]].reset_index(drop=True)
#Sigma = risk_models.sample_cov(df_temp.return_list[0])
#calculate covariance matrix
Sigma = df_temp.return_list[0].cov()
#portfolio allocation
ef_min_var = EfficientFrontier(None, Sigma,weight_bounds=(0, 0.1))
#minimum variance
raw_weights_min_var = ef_min_var.min_volatility()
#get weights
cleaned_weights_min_var = ef_min_var.clean_weights()
#current capital
cap = portfolio.iloc[0, i]
#current cash invested for each stock
current_cash = [element * cap for element in list(cleaned_weights_min_var.values())]
# current held shares
current_shares = list(np.array(current_cash)
/ np.array(df_temp.close))
# next time period price
next_price = np.array(df_temp_next.close)
##next_price * current share to calculate next total account value
portfolio.iloc[0, i+1] = np.dot(current_shares, next_price)
portfolio=portfolio.T
portfolio.columns = ['account_value']
portfolio.head()
a2c_cumpod =(df_daily_return.daily_return+1).cumprod()-1
min_var_cumpod =(portfolio.account_value.pct_change()+1).cumprod()-1
dji_cumpod =(baseline_returns+1).cumprod()-1
```
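The `cumprod` pattern above turns a series of simple daily returns into a cumulative return; a minimal example:

```python
import pandas as pd

daily = pd.Series([0.01, -0.02, 0.03])  # daily simple returns
cum = (daily + 1).cumprod() - 1         # compounds 1.01 * 0.98 * 1.03
print(round(cum.iloc[-1], 6))           # → 0.019494
```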
## Plotly: DRL, Min-Variance, DJIA
```
from datetime import datetime as dt
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
time_ind = pd.Series(df_daily_return.date)
trace0_portfolio = go.Scatter(x = time_ind, y = a2c_cumpod, mode = 'lines', name = 'A2C (Portfolio Allocation)')
trace1_portfolio = go.Scatter(x = time_ind, y = dji_cumpod, mode = 'lines', name = 'DJIA')
trace2_portfolio = go.Scatter(x = time_ind, y = min_var_cumpod, mode = 'lines', name = 'Min-Variance')
#trace3_portfolio = go.Scatter(x = time_ind, y = ddpg_cumpod, mode = 'lines', name = 'DDPG')
#trace4_portfolio = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace5_portfolio = go.Scatter(x = time_ind, y = min_cumpod, mode = 'lines', name = 'Min-Variance')
#trace4 = go.Scatter(x = time_ind, y = addpg_cumpod, mode = 'lines', name = 'Adaptive-DDPG')
#trace2 = go.Scatter(x = time_ind, y = portfolio_cost_minv, mode = 'lines', name = 'Min-Variance')
#trace3 = go.Scatter(x = time_ind, y = spx_value, mode = 'lines', name = 'SPX')
fig = go.Figure()
fig.add_trace(trace0_portfolio)
fig.add_trace(trace1_portfolio)
fig.add_trace(trace2_portfolio)
fig.update_layout(
legend=dict(
x=0,
y=1,
traceorder="normal",
font=dict(
family="sans-serif",
size=15,
color="black"
),
bgcolor="White",
bordercolor="white",
borderwidth=2
),
)
#fig.update_layout(legend_orientation="h")
fig.update_layout(title={
#'text': "Cumulative Return using FinRL",
'y':0.85,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'})
#with Transaction cost
#fig.update_layout(title = 'Quarterly Trade Date')
fig.update_layout(
# margin=dict(l=20, r=20, t=20, b=20),
paper_bgcolor='rgba(1,1,0,0)',
plot_bgcolor='rgba(1, 1, 0, 0)',
#xaxis_title="Date",
yaxis_title="Cumulative Return",
xaxis={'type': 'date',
'tick0': time_ind[0],
'tickmode': 'linear',
'dtick': 86400000.0 *80}
)
fig.update_xaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(showline=True,linecolor='black',showgrid=True, gridwidth=1, gridcolor='LightSteelBlue',mirror=True)
fig.update_yaxes(zeroline=True, zerolinewidth=1, zerolinecolor='LightSteelBlue')
fig.show()
```
**Chapter 18 – Reinforcement Learning**
_This notebook contains all the sample code in chapter 18_.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/18_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!apt update && apt install -y libpq-dev libsdl2-dev swig xorg-dev xvfb
!pip install -q -U tf-agents pyvirtualdisplay gym[atari]
IS_COLAB = True
except Exception:
IS_COLAB = False
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
if not tf.config.list_physical_devices('GPU'):
print("No GPU was detected. CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# To get smooth animations
import matplotlib.animation as animation
mpl.rc('animation', html='jshtml')
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Introduction to OpenAI gym
In this notebook we will be using [OpenAI gym](https://gym.openai.com/), a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning *agents* to interact with. Let's start by importing `gym`:
```
import gym
```
Let's list all the available environments:
```
gym.envs.registry.all()
```
The Cart-Pole is a very simple environment composed of a cart that can move left or right, and a pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.
```
env = gym.make('CartPole-v1')
```
Let's initialize the environment by calling its `reset()` method. This returns an observation:
```
env.seed(42)
obs = env.reset()
```
Observations vary depending on the environment. In this case it is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity.
```
obs
```
An environment can be visualized by calling its `render()` method, and you can pick the rendering mode (the rendering options depend on the environment).
**Warning**: some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify `mode="rgb_array"`. In general you can safely ignore that window. However, if Jupyter is running on a headless server (i.e., without a screen) it will raise an exception. One way to avoid this is to install a fake X server like [Xvfb](http://en.wikipedia.org/wiki/Xvfb). On Debian or Ubuntu:
```bash
$ apt update
$ apt install -y xvfb
```
You can then start Jupyter using the `xvfb-run` command:
```bash
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
```
Alternatively, you can install the [pyvirtualdisplay](https://github.com/ponty/pyvirtualdisplay) Python library which wraps Xvfb:
```bash
$ python3 -m pip install -U pyvirtualdisplay
```
And run the following code:
```
try:
import pyvirtualdisplay
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
except ImportError:
pass
env.render()
```
In this example we will set `mode="rgb_array"` to get an image of the environment as a NumPy array:
```
img = env.render(mode="rgb_array")
img.shape
def plot_environment(env, figsize=(5,4)):
plt.figure(figsize=figsize)
img = env.render(mode="rgb_array")
plt.imshow(img)
plt.axis("off")
return img
plot_environment(env)
plt.show()
```
Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:
```
env.action_space
```
Yep, just two possible actions: accelerate towards the left or towards the right.
Since the pole is leaning toward the right (`obs[2] > 0`), let's accelerate the cart toward the right:
```
action = 1 # accelerate right
obs, reward, done, info = env.step(action)
obs
```
Notice that the cart is now moving toward the right (`obs[1] > 0`). The pole is still tilted toward the right (`obs[2] > 0`), but its angular velocity is now negative (`obs[3] < 0`), so it will likely be tilted toward the left after the next step.
```
plot_environment(env)
save_fig("cart_pole_plot")
```
Looks like it's doing what we're telling it to do!
The environment also tells the agent how much reward it got during the last step:
```
reward
```
When the game is over, the environment returns `done=True`:
```
done
```
Finally, `info` is an environment-specific dictionary that can provide some extra information that you may find useful for debugging or for training. For example, in some games it may indicate how many lives the agent has.
```
info
```
The sequence of steps from the moment the environment is reset until it is done is called an "episode". At the end of an episode (i.e., when `step()` returns `done=True`), you should reset the environment before you continue to use it.
```
if done:
obs = env.reset()
```
Now how can we make the pole remain upright? We will need to define a _policy_ for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.
# A simple hard-coded policy
Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and _vice versa_. Let's see if that works:
```
env.seed(42)
def basic_policy(obs):
angle = obs[2]
return 0 if angle < 0 else 1
totals = []
for episode in range(500):
episode_rewards = 0
obs = env.reset()
for step in range(200):
action = basic_policy(obs)
obs, reward, done, info = env.step(action)
episode_rewards += reward
if done:
break
totals.append(episode_rewards)
np.mean(totals), np.std(totals), np.min(totals), np.max(totals)
```
Well, as expected, this strategy is a bit too basic: the best it did was to keep the pole up for only 68 steps. This environment is considered solved when the agent keeps the pole up for 200 steps.
Let's visualize one episode:
```
env.seed(42)
frames = []
obs = env.reset()
for step in range(200):
img = env.render(mode="rgb_array")
frames.append(img)
action = basic_policy(obs)
obs, reward, done, info = env.step(action)
if done:
break
```
Now show the animation:
```
def update_scene(num, frames, patch):
patch.set_data(frames[num])
return patch,
def plot_animation(frames, repeat=False, interval=40):
fig = plt.figure()
patch = plt.imshow(frames[0])
plt.axis('off')
anim = animation.FuncAnimation(
fig, update_scene, fargs=(frames, patch),
frames=len(frames), repeat=repeat, interval=interval)
plt.close()
return anim
plot_animation(frames)
```
Clearly the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that!
# Neural Network Policies
Let's create a neural network that will take observations as inputs, and output the probabilities of actions to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`.
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
n_inputs = 4 # == env.observation_space.shape[0]
model = keras.models.Sequential([
keras.layers.Dense(5, activation="elu", input_shape=[n_inputs]),
keras.layers.Dense(1, activation="sigmoid"),
])
```
In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state.
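To make the velocity point concrete, here is a minimal sketch (with toy numbers, not actual Cart-Pole observations) of estimating a hidden velocity from two consecutive position observations using a finite difference:

```python
# Toy sketch: if only the cart's position were observable, its velocity
# could be estimated from two consecutive observations (dt = time per step).
def estimate_velocity(prev_position, position, dt=0.02):
    return (position - prev_position) / dt

# e.g., the cart moved from x=0.10 to x=0.25 in one 20 ms step:
velocity = estimate_velocity(0.10, 0.25)  # 7.5 units per second
```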
You may wonder why we plan to pick a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between _exploring_ new actions and _exploiting_ the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried.
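As a quick sanity check of this sampling idea, here is a small sketch (using a hypothetical `left_proba` of 0.8, not an actual network output) showing that the lower-probability action still gets explored about 20% of the time:

```python
import numpy as np

rng = np.random.default_rng(42)
left_proba = 0.8  # hypothetical probability output by a policy network

# Sample 10,000 actions: 0 (left) with probability 0.8, 1 (right) otherwise
actions = (rng.random(10_000) > left_proba).astype(int)

# Roughly 20% of the samples still try the less likely action
frac_right = actions.mean()
```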
Let's write a small function that will run the model to play one episode, and return the frames so we can display an animation:
```
def render_policy_net(model, n_max_steps=200, seed=42):
frames = []
env = gym.make("CartPole-v1")
env.seed(seed)
np.random.seed(seed)
obs = env.reset()
for step in range(n_max_steps):
frames.append(env.render(mode="rgb_array"))
left_proba = model.predict(obs.reshape(1, -1))
action = int(np.random.rand() > left_proba)
obs, reward, done, info = env.step(action)
if done:
break
env.close()
return frames
```
Now let's look at how well this randomly initialized policy network performs:
```
frames = render_policy_net(model)
plot_animation(frames)
```
Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right.
We can make the same net play in 50 different environments in parallel (this will give us a diverse training batch at each step), and train for 5000 iterations. We also reset environments when they are done. We train the model using a custom training loop so we can easily use the predictions at each training step to advance the environments.
```
n_environments = 50
n_iterations = 5000
envs = [gym.make("CartPole-v1") for _ in range(n_environments)]
for index, env in enumerate(envs):
env.seed(index)
np.random.seed(42)
observations = [env.reset() for env in envs]
optimizer = keras.optimizers.RMSprop()
loss_fn = keras.losses.binary_crossentropy
for iteration in range(n_iterations):
# if angle < 0, we want proba(left) = 1., or else proba(left) = 0.
target_probas = np.array([([1.] if obs[2] < 0 else [0.])
for obs in observations])
with tf.GradientTape() as tape:
left_probas = model(np.array(observations))
loss = tf.reduce_mean(loss_fn(target_probas, left_probas))
print("\rIteration: {}, Loss: {:.3f}".format(iteration, loss.numpy()), end="")
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
actions = (np.random.rand(n_environments, 1) > left_probas.numpy()).astype(np.int32)
for env_index, env in enumerate(envs):
obs, reward, done, info = env.step(actions[env_index][0])
observations[env_index] = obs if not done else env.reset()
for env in envs:
env.close()
frames = render_policy_net(model)
plot_animation(frames)
```
Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own, one that does not wobble as much.
# Policy Gradients
To train this neural network we will need to define the target probabilities `y`. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in an episode, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the _credit assignment problem_.
The _Policy Gradients_ algorithm tackles this problem by first playing multiple episodes, then making the actions in good episodes slightly more likely, while actions in bad episodes are made slightly less likely. First we play, then we go back and think about what we did.
Let's start by creating a function to play a single step using the model. We will also pretend for now that whatever action it takes is the right one, so we can compute the loss and its gradients (we will just save these gradients for now, and modify them later depending on how good or bad the action turned out to be):
```
def play_one_step(env, obs, model, loss_fn):
with tf.GradientTape() as tape:
left_proba = model(obs[np.newaxis])
action = (tf.random.uniform([1, 1]) > left_proba)
y_target = tf.constant([[1.]]) - tf.cast(action, tf.float32)
loss = tf.reduce_mean(loss_fn(y_target, left_proba))
grads = tape.gradient(loss, model.trainable_variables)
obs, reward, done, info = env.step(int(action[0, 0].numpy()))
return obs, reward, done, grads
```
If `left_proba` is high, then `action` will most likely be `False` (since a random number uniformly sampled between 0 and 1 will probably not be greater than `left_proba`). And `False` means 0 when you cast it to a number, so `y_target` would be equal to 1 - 0 = 1. In other words, we set the target to 1, meaning we pretend that the probability of going left should have been 100% (so we took the right action).
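Here is a minimal check of that target logic, using plain Python booleans instead of tensors:

```python
# The target is 1 - action: we pretend the sampled action had probability 100%.
def target_proba(action):
    # action is False (left, cast to 0) or True (right, cast to 1)
    return 1.0 - float(action)

left_target = target_proba(False)   # 1.0: proba of going left "should" be 100%
right_target = target_proba(True)   # 0.0: proba of going left "should" be 0%
```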
Now let's create another function that will rely on the `play_one_step()` function to play multiple episodes, returning all the rewards and gradients, for each episode and each step:
```
def play_multiple_episodes(env, n_episodes, n_max_steps, model, loss_fn):
all_rewards = []
all_grads = []
for episode in range(n_episodes):
current_rewards = []
current_grads = []
obs = env.reset()
for step in range(n_max_steps):
obs, reward, done, grads = play_one_step(env, obs, model, loss_fn)
current_rewards.append(reward)
current_grads.append(grads)
if done:
break
all_rewards.append(current_rewards)
all_grads.append(current_grads)
return all_rewards, all_grads
```
The Policy Gradients algorithm uses the model to play the episode several times (e.g., 10 times), then it goes back and looks at all the rewards, discounts them and normalizes them. So let's create a couple of functions for that: the first will compute discounted rewards; the second will normalize the discounted rewards across many episodes.
```
def discount_rewards(rewards, discount_rate):
discounted = np.array(rewards)
for step in range(len(rewards) - 2, -1, -1):
discounted[step] += discounted[step + 1] * discount_rate
return discounted
def discount_and_normalize_rewards(all_rewards, discount_rate):
all_discounted_rewards = [discount_rewards(rewards, discount_rate)
for rewards in all_rewards]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean) / reward_std
for discounted_rewards in all_discounted_rewards]
```
Say there were 3 actions, and after each action there was a reward: first 10, then 0, then -50. If we use a discount factor of 80%, then the 3rd action will get -50 (full credit for the last reward), but the 2nd action will only get -40 (80% credit for the last reward), and the 1st action will get 80% of -40 (-32) plus full credit for the first reward (+10), which leads to a discounted reward of -22:
```
discount_rewards([10, 0, -50], discount_rate=0.8)
```
To normalize all discounted rewards across all episodes, we compute the mean and standard deviation of all the discounted rewards, and we subtract the mean from each discounted reward, and divide by the standard deviation:
```
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8)
n_iterations = 150
n_episodes_per_update = 10
n_max_steps = 200
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=0.01)
loss_fn = keras.losses.binary_crossentropy
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(5, activation="elu", input_shape=[4]),
keras.layers.Dense(1, activation="sigmoid"),
])
env = gym.make("CartPole-v1")
env.seed(42);
for iteration in range(n_iterations):
all_rewards, all_grads = play_multiple_episodes(
env, n_episodes_per_update, n_max_steps, model, loss_fn)
total_rewards = sum(map(sum, all_rewards)) # Not shown in the book
print("\rIteration: {}, mean rewards: {:.1f}".format( # Not shown
iteration, total_rewards / n_episodes_per_update), end="") # Not shown
all_final_rewards = discount_and_normalize_rewards(all_rewards,
discount_rate)
all_mean_grads = []
for var_index in range(len(model.trainable_variables)):
mean_grads = tf.reduce_mean(
[final_reward * all_grads[episode_index][step][var_index]
for episode_index, final_rewards in enumerate(all_final_rewards)
for step, final_reward in enumerate(final_rewards)], axis=0)
all_mean_grads.append(mean_grads)
optimizer.apply_gradients(zip(all_mean_grads, model.trainable_variables))
env.close()
frames = render_policy_net(model)
plot_animation(frames)
```
# Markov Chains
```
np.random.seed(42)
transition_probabilities = [ # shape=[s, s']
[0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3
[0.0, 0.0, 0.9, 0.1], # from s1 to ...
[0.0, 1.0, 0.0, 0.0], # from s2 to ...
[0.0, 0.0, 0.0, 1.0]] # from s3 to ...
n_max_steps = 50
def print_sequence():
current_state = 0
print("States:", end=" ")
for step in range(n_max_steps):
print(current_state, end=" ")
if current_state == 3:
break
current_state = np.random.choice(range(4), p=transition_probabilities[current_state])
else:
print("...", end="")
print()
for _ in range(10):
print_sequence()
```
# Markov Decision Process
Let's define some transition probabilities, rewards and possible actions. For example, in state s0, if action a0 is chosen then with probability 0.7 we will go to state s0 with reward +10, with probability 0.3 we will go to state s1 with no reward, and we will never go to state s2 (so the transition probabilities are `[0.7, 0.3, 0.0]`, and the rewards are `[+10, 0, 0]`):
```
transition_probabilities = [ # shape=[s, a, s']
[[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],
[[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]],
[None, [0.8, 0.1, 0.1], None]]
rewards = [ # shape=[s, a, s']
[[+10, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, -50]],
[[0, 0, 0], [+40, 0, 0], [0, 0, 0]]]
possible_actions = [[0, 1, 2], [0, 2], [1]]
```
# Q-Value Iteration
```
Q_values = np.full((3, 3), -np.inf) # -np.inf for impossible actions
for state, actions in enumerate(possible_actions):
Q_values[state, actions] = 0.0 # for all possible actions
gamma = 0.90 # the discount factor
history1 = [] # Not shown in the book (for the figure below)
for iteration in range(50):
Q_prev = Q_values.copy()
history1.append(Q_prev) # Not shown
for s in range(3):
for a in possible_actions[s]:
Q_values[s, a] = np.sum([
transition_probabilities[s][a][sp]
* (rewards[s][a][sp] + gamma * np.max(Q_prev[sp]))
for sp in range(3)])
history1 = np.array(history1) # Not shown
Q_values
np.argmax(Q_values, axis=1)
```
The optimal policy for this MDP, when using a discount factor of 0.90, is to choose action a0 in state s0, action a0 in state s1, and finally action a1 (the only possible action) in state s2.
Let's try again with a discount factor of 0.95:
```
Q_values = np.full((3, 3), -np.inf) # -np.inf for impossible actions
for state, actions in enumerate(possible_actions):
Q_values[state, actions] = 0.0 # for all possible actions
gamma = 0.95 # the discount factor
for iteration in range(50):
Q_prev = Q_values.copy()
for s in range(3):
for a in possible_actions[s]:
Q_values[s, a] = np.sum([
transition_probabilities[s][a][sp]
* (rewards[s][a][sp] + gamma * np.max(Q_prev[sp]))
for sp in range(3)])
Q_values
np.argmax(Q_values, axis=1)
```
Now the policy has changed! In state s1, we now prefer to go through the fire (choose action a2). This is because the discount factor is larger so the agent values the future more, and it is therefore ready to pay an immediate penalty in order to get more future rewards.
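Here is a toy illustration of that trade-off (with hypothetical numbers, not this MDP's actual Q-values): option A pays +10 immediately, while option B pays -50 now but +65 one step later. Which option is better depends on the discount factor:

```python
# Toy example: an immediate +10 versus paying -50 now for +65 one step later.
def value_a(gamma):
    return 10.0

def value_b(gamma):
    return -50.0 + gamma * 65.0

prefer_b_at_090 = value_b(0.90) > value_a(0.90)  # 8.5 < 10, so False
prefer_b_at_095 = value_b(0.95) > value_a(0.95)  # 11.75 > 10, so True
```

With the lower discount factor the agent is too myopic to pay the upfront penalty; with the higher one, the delayed reward wins.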
# Q-Learning
Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), then the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
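The core of the algorithm is the running-average update Q(s, a) ← (1 − α)·Q(s, a) + α·(r + γ·max over a′ of Q(s′, a′)). Here it is as a standalone sketch with toy numbers (the actual training loop comes below):

```python
# One tabular Q-Learning update (toy numbers; alpha is the learning rate).
def q_learning_update(q_old, reward, max_next_q, alpha=0.1, gamma=0.90):
    return (1 - alpha) * q_old + alpha * (reward + gamma * max_next_q)

# Current estimate 2.0, reward -1, best next-state estimate 5.0:
# 0.9 * 2.0 + 0.1 * (-1 + 0.9 * 5.0) = 1.8 + 0.35 = 2.15
new_q = q_learning_update(2.0, -1.0, 5.0)
```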
We will need to simulate an agent moving around in the environment, so let's define a function to perform some action and get the new state and a reward:
```
def step(state, action):
probas = transition_probabilities[state][action]
next_state = np.random.choice([0, 1, 2], p=probas)
reward = rewards[state][action][next_state]
return next_state, reward
```
We also need an exploration policy, which can be any policy, as long as it visits every possible state many times. We will just use a random policy, since the state space is very small:
```
def exploration_policy(state):
return np.random.choice(possible_actions[state])
```
Now let's initialize the Q-Values like earlier, and run the Q-Learning algorithm:
```
np.random.seed(42)
Q_values = np.full((3, 3), -np.inf)
for state, actions in enumerate(possible_actions):
Q_values[state][actions] = 0
alpha0 = 0.05 # initial learning rate
decay = 0.005 # learning rate decay
gamma = 0.90 # discount factor
state = 0 # initial state
history2 = [] # Not shown in the book
for iteration in range(10000):
history2.append(Q_values.copy()) # Not shown
action = exploration_policy(state)
next_state, reward = step(state, action)
next_value = np.max(Q_values[next_state]) # greedy policy at the next step
alpha = alpha0 / (1 + iteration * decay)
Q_values[state, action] *= 1 - alpha
Q_values[state, action] += alpha * (reward + gamma * next_value)
state = next_state
history2 = np.array(history2) # Not shown
Q_values
np.argmax(Q_values, axis=1) # optimal action for each state
true_Q_value = history1[-1, 0, 0]
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].set_ylabel("Q-Value$(s_0, a_0)$", fontsize=14)
axes[0].set_title("Q-Value Iteration", fontsize=14)
axes[1].set_title("Q-Learning", fontsize=14)
for ax, width, history in zip(axes, (50, 10000), (history1, history2)):
ax.plot([0, width], [true_Q_value, true_Q_value], "k--")
ax.plot(np.arange(width), history[:, 0, 0], "b-", linewidth=2)
ax.set_xlabel("Iterations", fontsize=14)
ax.axis([0, width, 0, 24])
save_fig("q_value_plot")
```
# Deep Q-Network
Let's build the DQN. Given a state, it will estimate, for each possible action, the sum of discounted future rewards it can expect after it plays that action (but before it sees its outcome):
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
env = gym.make("CartPole-v1")
input_shape = [4] # == env.observation_space.shape
n_outputs = 2 # == env.action_space.n
model = keras.models.Sequential([
keras.layers.Dense(32, activation="elu", input_shape=input_shape),
keras.layers.Dense(32, activation="elu"),
keras.layers.Dense(n_outputs)
])
```
To select an action using this DQN, we just pick the action with the largest predicted Q-value. However, to ensure that the agent explores the environment, we choose a random action with probability `epsilon`.
```
def epsilon_greedy_policy(state, epsilon=0):
if np.random.rand() < epsilon:
return np.random.randint(n_outputs)
else:
Q_values = model.predict(state[np.newaxis])
return np.argmax(Q_values[0])
```
We will also need a replay memory. It will contain the agent's experiences, in the form of tuples: `(obs, action, reward, next_obs, done)`. We can use the `deque` class for that (but make sure to check out DeepMind's excellent [Reverb library](https://github.com/deepmind/reverb) for a much more robust implementation of experience replay):
```
from collections import deque
replay_memory = deque(maxlen=2000)
```
And let's create a function to sample experiences from the replay memory. It will return 5 NumPy arrays: `[obs, actions, rewards, next_obs, dones]`.
```
def sample_experiences(batch_size):
indices = np.random.randint(len(replay_memory), size=batch_size)
batch = [replay_memory[index] for index in indices]
states, actions, rewards, next_states, dones = [
np.array([experience[field_index] for experience in batch])
for field_index in range(5)]
return states, actions, rewards, next_states, dones
```
Now we can create a function that will use the DQN to play one step, and record its experience in the replay memory:
```
def play_one_step(env, state, epsilon):
action = epsilon_greedy_policy(state, epsilon)
next_state, reward, done, info = env.step(action)
replay_memory.append((state, action, reward, next_state, done))
return next_state, reward, done, info
```
Lastly, let's create a function that will sample some experiences from the replay memory and perform a training step:
**Notes**:
* The first 3 releases of the 2nd edition were missing the `reshape()` operation which converts `target_Q_values` to a column vector (this is required by the `loss_fn()`).
* The book uses a learning rate of 1e-3, but in the code below I use 1e-2, as it significantly improves training. I also tuned the learning rates of the DQN variants below.
```
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=1e-2)
loss_fn = keras.losses.mean_squared_error
def training_step(batch_size):
experiences = sample_experiences(batch_size)
states, actions, rewards, next_states, dones = experiences
next_Q_values = model.predict(next_states)
max_next_Q_values = np.max(next_Q_values, axis=1)
target_Q_values = (rewards +
(1 - dones) * discount_rate * max_next_Q_values)
target_Q_values = target_Q_values.reshape(-1, 1)
mask = tf.one_hot(actions, n_outputs)
with tf.GradientTape() as tape:
all_Q_values = model(states)
Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```
And now, let's train the model!
```
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
obs = env.reset()
for step in range(200):
epsilon = max(1 - episode / 500, 0.01)
obs, reward, done, info = play_one_step(env, obs, epsilon)
if done:
break
rewards.append(step) # Not shown in the book
if step >= best_score: # Not shown
best_weights = model.get_weights() # Not shown
best_score = step # Not shown
print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="") # Not shown
if episode > 50:
training_step(batch_size)
model.set_weights(best_weights)
plt.figure(figsize=(8, 4))
plt.plot(rewards)
plt.xlabel("Episode", fontsize=14)
plt.ylabel("Sum of rewards", fontsize=14)
save_fig("dqn_rewards_plot")
plt.show()
env.seed(42)
state = env.reset()
frames = []
for step in range(200):
action = epsilon_greedy_policy(state)
state, reward, done, info = env.step(action)
if done:
break
img = env.render(mode="rgb_array")
frames.append(img)
plot_animation(frames)
```
Not bad at all! 😀
## Double DQN
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Dense(32, activation="elu", input_shape=[4]),
keras.layers.Dense(32, activation="elu"),
keras.layers.Dense(n_outputs)
])
target = keras.models.clone_model(model)
target.set_weights(model.get_weights())
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=6e-3)
loss_fn = keras.losses.Huber()
def training_step(batch_size):
experiences = sample_experiences(batch_size)
states, actions, rewards, next_states, dones = experiences
next_Q_values = model.predict(next_states)
best_next_actions = np.argmax(next_Q_values, axis=1)
next_mask = tf.one_hot(best_next_actions, n_outputs).numpy()
next_best_Q_values = (target.predict(next_states) * next_mask).sum(axis=1)
target_Q_values = (rewards +
(1 - dones) * discount_rate * next_best_Q_values)
target_Q_values = target_Q_values.reshape(-1, 1)
mask = tf.one_hot(actions, n_outputs)
with tf.GradientTape() as tape:
all_Q_values = model(states)
Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
replay_memory = deque(maxlen=2000)
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
obs = env.reset()
for step in range(200):
epsilon = max(1 - episode / 500, 0.01)
obs, reward, done, info = play_one_step(env, obs, epsilon)
if done:
break
rewards.append(step)
if step >= best_score:
best_weights = model.get_weights()
best_score = step
print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="")
if episode >= 50:
training_step(batch_size)
if episode % 50 == 0:
target.set_weights(model.get_weights())
# Alternatively, you can do soft updates at each step:
#if episode >= 50:
#target_weights = target.get_weights()
#online_weights = model.get_weights()
#for index in range(len(target_weights)):
# target_weights[index] = 0.99 * target_weights[index] + 0.01 * online_weights[index]
#target.set_weights(target_weights)
model.set_weights(best_weights)
plt.figure(figsize=(8, 4))
plt.plot(rewards)
plt.xlabel("Episode", fontsize=14)
plt.ylabel("Sum of rewards", fontsize=14)
save_fig("double_dqn_rewards_plot")
plt.show()
env.seed(43)
state = env.reset()
frames = []
for step in range(200):
action = epsilon_greedy_policy(state)
state, reward, done, info = env.step(action)
if done:
break
img = env.render(mode="rgb_array")
frames.append(img)
plot_animation(frames)
```
# Dueling Double DQN
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
K = keras.backend
input_states = keras.layers.Input(shape=[4])
hidden1 = keras.layers.Dense(32, activation="elu")(input_states)
hidden2 = keras.layers.Dense(32, activation="elu")(hidden1)
state_values = keras.layers.Dense(1)(hidden2)
raw_advantages = keras.layers.Dense(n_outputs)(hidden2)
advantages = raw_advantages - K.max(raw_advantages, axis=1, keepdims=True)
Q_values = state_values + advantages
model = keras.models.Model(inputs=[input_states], outputs=[Q_values])
target = keras.models.clone_model(model)
target.set_weights(model.get_weights())
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=7.5e-3)
loss_fn = keras.losses.Huber()
def training_step(batch_size):
experiences = sample_experiences(batch_size)
states, actions, rewards, next_states, dones = experiences
next_Q_values = model.predict(next_states)
best_next_actions = np.argmax(next_Q_values, axis=1)
next_mask = tf.one_hot(best_next_actions, n_outputs).numpy()
next_best_Q_values = (target.predict(next_states) * next_mask).sum(axis=1)
target_Q_values = (rewards +
(1 - dones) * discount_rate * next_best_Q_values)
target_Q_values = target_Q_values.reshape(-1, 1)
mask = tf.one_hot(actions, n_outputs)
with tf.GradientTape() as tape:
all_Q_values = model(states)
Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
replay_memory = deque(maxlen=2000)
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
obs = env.reset()
for step in range(200):
epsilon = max(1 - episode / 500, 0.01)
obs, reward, done, info = play_one_step(env, obs, epsilon)
if done:
break
rewards.append(step)
if step >= best_score:
best_weights = model.get_weights()
best_score = step
print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="")
if episode >= 50:
training_step(batch_size)
if episode % 50 == 0:
target.set_weights(model.get_weights())
model.set_weights(best_weights)
plt.plot(rewards)
plt.xlabel("Episode")
plt.ylabel("Sum of rewards")
plt.show()
env.seed(42)
state = env.reset()
frames = []
for step in range(200):
action = epsilon_greedy_policy(state)
state, reward, done, info = env.step(action)
if done:
break
img = env.render(mode="rgb_array")
frames.append(img)
plot_animation(frames)
```
This looks like a pretty robust agent!
```
env.close()
```
# Using TF-Agents to Beat Breakout
Let's use TF-Agents to create an agent that will learn to play Breakout. We will use the Deep Q-Learning algorithm, so you can easily compare the components with the previous implementation, but TF-Agents implements many other (and more sophisticated) algorithms!
## TF-Agents Environments
```
tf.random.set_seed(42)
np.random.seed(42)
from tf_agents.environments import suite_gym
env = suite_gym.load("Breakout-v4")
env
env.gym
env.seed(42)
env.reset()
env.step(1) # Fire
img = env.render(mode="rgb_array")
plt.figure(figsize=(6, 8))
plt.imshow(img)
plt.axis("off")
save_fig("breakout_plot")
plt.show()
env.current_time_step()
```
## Environment Specifications
```
env.observation_spec()
env.action_spec()
env.time_step_spec()
```
## Environment Wrappers
You can wrap a TF-Agents environment in a TF-Agents wrapper:
```
from tf_agents.environments.wrappers import ActionRepeat
repeating_env = ActionRepeat(env, times=4)
repeating_env
repeating_env.unwrapped
```
Here is the list of available wrappers:
```
import tf_agents.environments.wrappers
for name in dir(tf_agents.environments.wrappers):
obj = getattr(tf_agents.environments.wrappers, name)
if hasattr(obj, "__base__") and issubclass(obj, tf_agents.environments.wrappers.PyEnvironmentBaseWrapper):
print("{:27s} {}".format(name, obj.__doc__.split("\n")[0]))
```
The `suite_gym.load()` function can create an env and wrap it for you, both with TF-Agents environment wrappers and Gym environment wrappers (the latter are applied first).
```
from functools import partial
from gym.wrappers import TimeLimit
limited_repeating_env = suite_gym.load(
"Breakout-v4",
gym_env_wrappers=[partial(TimeLimit, max_episode_steps=10000)],
env_wrappers=[partial(ActionRepeat, times=4)],
)
limited_repeating_env
limited_repeating_env.unwrapped
```
Create an Atari Breakout environment, and wrap it to apply the default Atari preprocessing steps:
**Warning**: Breakout requires the player to press the FIRE button at the start of the game and after each life lost. The agent may take a very long time to learn this, because at first pressing FIRE just seems to mean losing faster. To speed up training considerably, we create and use a subclass of the `AtariPreprocessing` wrapper class, called `AtariPreprocessingWithAutoFire`, which presses FIRE (i.e., plays action 1) automatically at the start of the game and after each life lost. This differs from the book, which uses the regular `AtariPreprocessing` wrapper.
```
from tf_agents.environments import suite_atari
from tf_agents.environments.atari_preprocessing import AtariPreprocessing
from tf_agents.environments.atari_wrappers import FrameStack4
max_episode_steps = 27000 # <=> 108k ALE frames since 1 step = 4 frames
environment_name = "BreakoutNoFrameskip-v4"
class AtariPreprocessingWithAutoFire(AtariPreprocessing):
def reset(self, **kwargs):
super().reset(**kwargs)
return self.step(1)[0] # FIRE to start
    def step(self, action):
        lives_before_action = self.ale.lives()
        obs, rewards, done, info = super().step(action)
        if self.ale.lives() < lives_before_action and not done:
            obs, rewards, done, info = super().step(1)  # FIRE to start after life lost
        return obs, rewards, done, info
env = suite_atari.load(
environment_name,
max_episode_steps=max_episode_steps,
gym_env_wrappers=[AtariPreprocessingWithAutoFire, FrameStack4])
env
```
Play a few steps just to see what happens:
```
env.seed(42)
env.reset()
for _ in range(4):
time_step = env.step(3) # LEFT
def plot_observation(obs):
# Since there are only 3 color channels, you cannot display 4 frames
# with one primary color per frame. So this code computes the delta between
# the current frame and the mean of the other frames, and it adds this delta
# to the red and blue channels to get a pink color for the current frame.
obs = obs.astype(np.float32)
img = obs[..., :3]
current_frame_delta = np.maximum(obs[..., 3] - obs[..., :3].mean(axis=-1), 0.)
img[..., 0] += current_frame_delta
img[..., 2] += current_frame_delta
img = np.clip(img / 150, 0, 1)
plt.imshow(img)
plt.axis("off")
plt.figure(figsize=(6, 6))
plot_observation(time_step.observation)
save_fig("preprocessed_breakout_plot")
plt.show()
```
Convert the Python environment to a TF environment:
```
from tf_agents.environments.tf_py_environment import TFPyEnvironment
tf_env = TFPyEnvironment(env)
```
## Creating the DQN
Create a small preprocessing layer to normalize the observations. Images are stored as bytes from 0 to 255 to save RAM, but the neural network expects floats from 0.0 to 1.0.
Create the Q-Network:
```
from tf_agents.networks.q_network import QNetwork
preprocessing_layer = keras.layers.Lambda(
lambda obs: tf.cast(obs, np.float32) / 255.)
conv_layer_params=[(32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)]
fc_layer_params=[512]
q_net = QNetwork(
tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layer,
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
```
Create the DQN Agent:
```
from tf_agents.agents.dqn.dqn_agent import DqnAgent
train_step = tf.Variable(0)
update_period = 4 # run a training step every 4 collect steps
optimizer = keras.optimizers.RMSprop(learning_rate=2.5e-4, rho=0.95, momentum=0.0,
epsilon=0.00001, centered=True)
epsilon_fn = keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=1.0, # initial ε
decay_steps=250000 // update_period, # <=> 1,000,000 ALE frames
end_learning_rate=0.01) # final ε
agent = DqnAgent(tf_env.time_step_spec(),
tf_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
target_update_period=2000, # <=> 32,000 ALE frames
td_errors_loss_fn=keras.losses.Huber(reduction="none"),
gamma=0.99, # discount factor
train_step_counter=train_step,
epsilon_greedy=lambda: epsilon_fn(train_step))
agent.initialize()
```
Create the replay buffer (this will use a lot of RAM, so please reduce the buffer size if you get an out-of-memory error):
**Warning**: we use a replay buffer of size 100,000 instead of 1,000,000 (as used in the book) since many people were getting OOM (Out-Of-Memory) errors.
```
from tf_agents.replay_buffers import tf_uniform_replay_buffer
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=tf_env.batch_size,
max_length=100000) # reduce if OOM error
replay_buffer_observer = replay_buffer.add_batch
```
Create a simple custom observer that counts and displays the number of times it is called (except when it is passed a trajectory that represents the boundary between two episodes, as this does not count as a step):
```
class ShowProgress:
def __init__(self, total):
self.counter = 0
self.total = total
def __call__(self, trajectory):
if not trajectory.is_boundary():
self.counter += 1
if self.counter % 100 == 0:
print("\r{}/{}".format(self.counter, self.total), end="")
```
Let's add some training metrics:
```
from tf_agents.metrics import tf_metrics
train_metrics = [
tf_metrics.NumberOfEpisodes(),
tf_metrics.EnvironmentSteps(),
tf_metrics.AverageReturnMetric(),
tf_metrics.AverageEpisodeLengthMetric(),
]
train_metrics[0].result()
from tf_agents.eval.metric_utils import log_metrics
import logging
logging.getLogger().setLevel(logging.INFO)
log_metrics(train_metrics)
```
Create the collect driver:
```
from tf_agents.drivers.dynamic_step_driver import DynamicStepDriver
collect_driver = DynamicStepDriver(
tf_env,
agent.collect_policy,
observers=[replay_buffer_observer] + train_metrics,
num_steps=update_period) # collect 4 steps for each training iteration
```
Collect the initial experiences, before training:
```
from tf_agents.policies.random_tf_policy import RandomTFPolicy
initial_collect_policy = RandomTFPolicy(tf_env.time_step_spec(),
tf_env.action_spec())
init_driver = DynamicStepDriver(
tf_env,
initial_collect_policy,
observers=[replay_buffer.add_batch, ShowProgress(20000)],
num_steps=20000) # <=> 80,000 ALE frames
final_time_step, final_policy_state = init_driver.run()
```
Let's sample 2 sub-episodes, with 3 time steps each and display them:
**Note**: `replay_buffer.get_next()` is deprecated. We must use `replay_buffer.as_dataset(..., single_deterministic_pass=False)` instead.
```
tf.random.set_seed(9) # chosen to show an example of trajectory at the end of an episode
#trajectories, buffer_info = replay_buffer.get_next( # get_next() is deprecated
# sample_batch_size=2, num_steps=3)
trajectories, buffer_info = next(iter(replay_buffer.as_dataset(
sample_batch_size=2,
num_steps=3,
single_deterministic_pass=False)))
trajectories._fields
trajectories.observation.shape
from tf_agents.trajectories.trajectory import to_transition
time_steps, action_steps, next_time_steps = to_transition(trajectories)
time_steps.observation.shape
trajectories.step_type.numpy()
plt.figure(figsize=(10, 6.8))
for row in range(2):
for col in range(3):
plt.subplot(2, 3, row * 3 + col + 1)
plot_observation(trajectories.observation[row, col].numpy())
plt.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0.02)
save_fig("sub_episodes_plot")
plt.show()
```
Now let's create the dataset:
```
dataset = replay_buffer.as_dataset(
sample_batch_size=64,
num_steps=2,
num_parallel_calls=3).prefetch(3)
```
Convert the main functions to TF Functions for better performance:
```
from tf_agents.utils.common import function
collect_driver.run = function(collect_driver.run)
agent.train = function(agent.train)
```
And now we are ready to run the main loop!
```
def train_agent(n_iterations):
time_step = None
policy_state = agent.collect_policy.get_initial_state(tf_env.batch_size)
iterator = iter(dataset)
for iteration in range(n_iterations):
time_step, policy_state = collect_driver.run(time_step, policy_state)
trajectories, buffer_info = next(iterator)
train_loss = agent.train(trajectories)
print("\r{} loss:{:.5f}".format(
iteration, train_loss.loss.numpy()), end="")
if iteration % 1000 == 0:
log_metrics(train_metrics)
```
Run the next cell to train the agent for 50,000 steps. Then look at its behavior by running the following cell. You can run these two cells as many times as you wish. The agent will keep improving! It will likely take over 200,000 iterations for the agent to become reasonably good.
```
train_agent(n_iterations=50000)
frames = []
def save_frames(trajectory):
global frames
frames.append(tf_env.pyenv.envs[0].render(mode="rgb_array"))
watch_driver = DynamicStepDriver(
tf_env,
agent.policy,
observers=[save_frames, ShowProgress(1000)],
num_steps=1000)
final_time_step, final_policy_state = watch_driver.run()
plot_animation(frames)
```
If you want to save an animated GIF to show off your agent to your friends, here's one way to do it:
```
import PIL
image_path = os.path.join("images", "rl", "breakout.gif")
frame_images = [PIL.Image.fromarray(frame) for frame in frames[:150]]
frame_images[0].save(image_path, format='GIF',
append_images=frame_images[1:],
save_all=True,
duration=30,
loop=0)
%%html
<img src="images/rl/breakout.gif" />
```
# Extra material
## Deque vs Rotating List
The `deque` class offers fast append, but fairly slow random access (for large replay memories):
```
from collections import deque
np.random.seed(42)
mem = deque(maxlen=1000000)
for i in range(1000000):
mem.append(i)
[mem[i] for i in np.random.randint(1000000, size=5)]
%timeit mem.append(1)
%timeit [mem[i] for i in np.random.randint(1000000, size=5)]
```
Alternatively, you could use a rotating list like this `ReplayMemory` class. This would make random access faster for large replay memories:
```
class ReplayMemory:
def __init__(self, max_size):
        self.buffer = np.empty(max_size, dtype=object)  # np.object is deprecated
self.max_size = max_size
self.index = 0
self.size = 0
def append(self, obj):
self.buffer[self.index] = obj
self.size = min(self.size + 1, self.max_size)
self.index = (self.index + 1) % self.max_size
def sample(self, batch_size):
indices = np.random.randint(self.size, size=batch_size)
return self.buffer[indices]
mem = ReplayMemory(max_size=1000000)
for i in range(1000000):
mem.append(i)
mem.sample(5)
%timeit mem.append(1)
%timeit mem.sample(5)
```
## Creating a Custom TF-Agents Environment
To create a custom TF-Agents environment, you just need to write a class that inherits from the `PyEnvironment` class and implements a few methods. For example, the following minimal environment represents a simple 4x4 grid. The agent starts in one corner (0,0) and must move to the opposite corner (3,3). The episode ends when the agent reaches the goal (+10 reward) or goes out of bounds (-1 reward). The actions are up (0), down (1), left (2) and right (3).
```
class MyEnvironment(tf_agents.environments.py_environment.PyEnvironment):
def __init__(self, discount=1.0):
super().__init__()
self._action_spec = tf_agents.specs.BoundedArraySpec(
shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
self._observation_spec = tf_agents.specs.BoundedArraySpec(
shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)
self.discount = discount
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
self._state = np.zeros(2, dtype=np.int32)
obs = np.zeros((4, 4), dtype=np.int32)
obs[self._state[0], self._state[1]] = 1
return tf_agents.trajectories.time_step.restart(obs)
def _step(self, action):
self._state += [(-1, 0), (+1, 0), (0, -1), (0, +1)][action]
reward = 0
obs = np.zeros((4, 4), dtype=np.int32)
done = (self._state.min() < 0 or self._state.max() > 3)
if not done:
obs[self._state[0], self._state[1]] = 1
if done or np.all(self._state == np.array([3, 3])):
reward = -1 if done else +10
return tf_agents.trajectories.time_step.termination(obs, reward)
else:
return tf_agents.trajectories.time_step.transition(obs, reward,
self.discount)
```
The action and observation specs will generally be instances of the `ArraySpec` or `BoundedArraySpec` classes from the `tf_agents.specs` package (check out the other specs in this package as well). Optionally, you can also define a `render()` method, a `close()` method to free resources, as well as a `time_step_spec()` method if you don't want the `reward` and `discount` to be 32-bit float scalars. Note that the base class takes care of keeping track of the current time step, which is why we must implement `_reset()` and `_step()` rather than `reset()` and `step()`.
```
my_env = MyEnvironment()
time_step = my_env.reset()
time_step
time_step = my_env.step(1)
time_step
```
# Analyzing Startup Fundraising Deals from Crunchbase
Skills: NumPy, pandas, chunked DataFrame processing, etc.
In this project, we'll analyze startup investments from Crunchbase.com.
```
import numpy as np
import pandas as pd
pd.options.display.max_columns = 99
first_five = pd.read_csv('loans_2007.csv').head()
first_five
thousand_row = pd.read_csv('loans_2007.csv', chunksize=1000)
chunk_mem = []
for chunk in thousand_row:
chunk_mem.append(chunk.memory_usage(deep=True).sum() / 2**20)
chunk_mem
# Let's try 3,000-row chunks
three_thousand_row = pd.read_csv('loans_2007.csv', chunksize=3000)
chunk_mem = []
for chunk in three_thousand_row:
chunk_mem.append(chunk.memory_usage(deep=True).sum() / 2**20)
chunk_mem
# How many columns have a numeric type?
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000)
numeric_column = []
string_column = []
for chunk in chunk_iter:
numeric_column.append(len(chunk.select_dtypes(include=np.number).columns))
string_column.append(len(chunk.select_dtypes(include='object').columns))
print(numeric_column)
print(string_column)
# How many unique values are there in each string column?
# How many of the string columns contain values that are less than 50% unique?
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000)
for chunk in chunk_iter:
str_columns = chunk.select_dtypes(include='object')
str_col_names = str_columns.columns
need_category = 0
for column in str_col_names:
num_unique = len(chunk[column].unique())
num_total = len(chunk[column])
if num_unique/num_total < 0.5:
print("There are {} unique values in {} column.".format(num_unique, column))
need_category += 1
    print("There are {} columns containing less than 50% unique values.".format(need_category))
# Which float columns have no missing values and could be candidates for conversion to the integer type?
chunk_iter = pd.read_csv('loans_2007.csv',chunksize=3000)
missing = []
for chunk in chunk_iter:
floats = chunk.select_dtypes(include=['float'])
missing.append(floats.apply(pd.isnull).sum())
combined_missing = pd.concat(missing)
combined_missing.groupby(combined_missing.index).sum().sort_values()
# Calculate the total memory usage across all of the chunks.
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000)
chunk_mem = []
for chunk in chunk_iter:
chunk_mem.append(chunk.memory_usage(deep=True).sum() / 2**20)
sum(chunk_mem)
# Determine which string columns you can convert to a numeric type if you clean them. For example, the int_rate column is only a string because of the % sign at the end.
list_str_col = str_col_names.tolist()
list_str_col
useful_obj_cols = ['term', 'sub_grade', 'emp_title', 'home_ownership', 'verification_status', 'issue_d', 'purpose', 'earliest_cr_line', 'revol_util', 'last_pymnt_d', 'last_credit_pull_d']
## Create dictionary (key: column, value: list of Series objects representing each chunk's value counts)
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000)
str_cols_vc = {}
for chunk in chunk_iter:
str_cols = chunk.select_dtypes(include=['object'])
for col in str_cols.columns:
current_col_vc = str_cols[col].value_counts()
if col in str_cols_vc:
str_cols_vc[col].append(current_col_vc)
else:
str_cols_vc[col] = [current_col_vc]
## Combine the value counts.
combined_vcs = {}
for col in str_cols_vc:
combined_vc = pd.concat(str_cols_vc[col])
final_vc = combined_vc.groupby(combined_vc.index).sum()
combined_vcs[col] = final_vc
for col in useful_obj_cols:
print(col)
print(combined_vcs[col])
print("-----------")
# Convert to category
convert_col_dtypes = {
"sub_grade": "category", "home_ownership": "category",
"verification_status": "category", "purpose": "category"
}
chunk[useful_obj_cols]
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000, dtype=convert_col_dtypes, parse_dates=["issue_d", "earliest_cr_line", "last_pymnt_d", "last_credit_pull_d"])
for chunk in chunk_iter:
term_cleaned = chunk['term'].str.lstrip(" ").str.rstrip(" months")
revol_cleaned = chunk['revol_util'].str.rstrip("%")
chunk['term'] = pd.to_numeric(term_cleaned)
chunk['revol_util'] = pd.to_numeric(revol_cleaned)
chunk.dtypes
# optimize numeric added
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000, dtype=convert_col_dtypes, parse_dates=["issue_d", "earliest_cr_line", "last_pymnt_d", "last_credit_pull_d"])
mv_counts = {}
chunk_mem = []
for chunk in chunk_iter:
term_cleaned = chunk['term'].str.lstrip(" ").str.rstrip(" months")
revol_cleaned = chunk['revol_util'].str.rstrip("%")
chunk['term'] = pd.to_numeric(term_cleaned)
chunk['revol_util'] = pd.to_numeric(revol_cleaned)
chunk = chunk.dropna(how='all')
float_cols = chunk.select_dtypes(include=['float'])
for col in float_cols.columns:
missing_values = len(chunk) - chunk[col].count()
if col in mv_counts:
mv_counts[col] = mv_counts[col] + missing_values
else:
mv_counts[col] = missing_values
mv_counts
# optimize numeric added
chunk_iter = pd.read_csv('loans_2007.csv', chunksize=3000, dtype=convert_col_dtypes, parse_dates=["issue_d", "earliest_cr_line", "last_pymnt_d", "last_credit_pull_d"])
mv_counts = {}
chunk_mem = []
for chunk in chunk_iter:
term_cleaned = chunk['term'].str.lstrip(" ").str.rstrip(" months")
revol_cleaned = chunk['revol_util'].str.rstrip("%")
chunk['term'] = pd.to_numeric(term_cleaned)
chunk['revol_util'] = pd.to_numeric(revol_cleaned)
chunk = chunk.dropna(how='all')
float_cols = chunk.select_dtypes(include=['float'])
for col in float_cols.columns:
missing_values = len(chunk) - chunk[col].count()
if col in mv_counts:
mv_counts[col] = mv_counts[col] + missing_values
else:
mv_counts[col] = missing_values
chunk_mem.append(chunk.memory_usage(deep=True).sum() / 2**20)
sum(chunk_mem)
```
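The `%`-stripping conversion used above for `revol_util` applies equally to `int_rate`, which the comments note is only a string because of the `%` sign. A minimal sketch with made-up rate values:

```python
import pandas as pd

# Hypothetical int_rate values in the same "%"-suffixed format as revol_util
int_rate = pd.Series(["10.65%", "15.27%", "12.69%"])
int_rate_numeric = pd.to_numeric(int_rate.str.rstrip("%"))
print(int_rate_numeric.dtype)  # float64
```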
We reduced the total memory footprint from 62.22 MB to 42.38 MB (a reduction of about 31.9%).
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
# Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
```
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
```
## Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
    # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g., "dad" -> 5) and vice versa
# (e.g., 5 -> "dad") for each language.
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
# Spanish sentences
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
```
### Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
```
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
```
### Create a tf.data dataset
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
N_BATCH = BUFFER_SIZE//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
```
## Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using *Bahdanau attention*. Let's decide on the notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax is applied on the last axis by default, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input position, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
```
def gru(units):
    # If you have a GPU, we recommend using CuDNNGRU (about a 3x speedup over GRU);
    # the code below selects it automatically.
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.enc_units)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.dec_units)
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.W1 = tf.keras.layers.Dense(self.dec_units)
self.W2 = tf.keras.layers.Dense(self.dec_units)
self.V = tf.keras.layers.Dense(1)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we are doing this to perform addition to calculate the score
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying tanh(FC(EO) + FC(H)) to self.V
score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size * 1, vocab)
x = self.fc(output)
return x, state, attention_weights
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.dec_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
```
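To make the axis-1 softmax and the context-vector sum concrete, here is a small NumPy sketch of the attention pooling with hypothetical shapes (not part of the model code above):

```python
import numpy as np

batch_size, max_length, hidden_size = 2, 4, 3
rng = np.random.default_rng(0)
score = rng.normal(size=(batch_size, max_length, 1))
enc_output = rng.normal(size=(batch_size, max_length, hidden_size))

# Softmax over axis 1 (the time axis): one weight per input position
e = np.exp(score - score.max(axis=1, keepdims=True))
attention_weights = e / e.sum(axis=1, keepdims=True)

# Weighted sum of the encoder outputs over the time axis
context_vector = (attention_weights * enc_output).sum(axis=1)
print(context_vector.shape)  # (2, 3)
```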
## Define the optimizer and the loss function
```
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
```
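The mask above zeroes out the loss at `<pad>` positions (token id 0), so padding does not contribute to the gradient. A tiny NumPy sketch of the masking with made-up per-token losses:

```python
import numpy as np

real = np.array([5, 12, 0, 0])            # last two positions are <pad>
mask = 1 - np.equal(real, 0)              # [1, 1, 0, 0]
per_token_loss = np.array([0.7, 0.3, 0.9, 0.9])
masked_loss = per_token_loss * mask       # padded positions contribute zero
print(masked_loss)
```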
## Checkpoints (Object-based saving)
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## Training
1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) is passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients and apply them to the optimizer to backpropagate.
```
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
total_loss += batch_loss
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / N_BATCH))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## Translate
* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.
Note: The encoder output is calculated only once for one input.
```
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
plt.show()
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## Restore the latest checkpoint and test
```
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate(u'trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
```
## Next steps
* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French.
* Experiment with training on a larger dataset, or using more epochs
## SIMPLE CHAR-RNN
```
from __future__ import print_function
import tensorflow as tf
import numpy as np
from tensorflow.contrib import rnn
tf.set_random_seed(0)
print ("TENSORFLOW VERSION IS %s" % (tf.__version__))
```
## DEFINE TRAINING SEQUENCE
```
quote1 = ("If you want to build a ship, "
"don't drum up people to collect wood and don't assign them tasks and work,"
" but rather teach them to long for the endless immensity of the sea.")
quote2 = ("Perfection is achieved, "
"not when there is nothing more to add, "
"but when there is nothing left to take away.")
sentence = quote2
print ("FOLLOWING IS OUR TRAINING SEQUENCE:")
print (sentence)
```
## DEFINE VOCABULARY AND DICTIONARY
```
char_set = list(set(sentence))
char_dic = {w: i for i, w in enumerate(char_set)}
print ("VOCABULARY: ")
print (char_set)
print ("DICTIONARY: ")
print (char_dic)
```
VOCAB: NUMBER => CHAR / DICTIONARY: CHAR => NUMBER
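Since the vocabulary (number => char) and the dictionary (char => number) are inverses of each other, any encoded sequence decodes back exactly. A quick self-contained round-trip check, using a made-up short sentence:

```
sentence = "to be or not to be"
char_set = list(set(sentence))
char_dic = {w: i for i, w in enumerate(char_set)}
encoded = [char_dic[c] for c in sentence]        # CHAR => NUMBER
decoded = ''.join(char_set[i] for i in encoded)  # NUMBER => CHAR
assert decoded == sentence
```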
## CONFIGURE NETWORK
```
data_dim = len(char_set)
num_classes = len(char_set)
hidden_size = 64
sequence_length = 10 # Any arbitrary number
print ("DATA_DIM IS [%d]" % (data_dim))
```
## SET TRAINING BATCHES
```
def print_np(_name, _x):
print("TYPE OF [%s] is [%s]" % (_name, type(_x)))
print("SHAPE OF [%s] is %s" % (_name, _x.shape,))
def print_list(_name, _x):
print("TYPE OF [%s] is [%s]" % (_name, type(_x)))
print("LENGTH OF [%s] is %s" % (_name, len(_x)))
print("%s[0] LOOKS LIKE %s" % (_name, _x[0]))
dataX = []
dataY = []
for i in range(0, len(sentence) - sequence_length):
x_str = sentence[i:i + sequence_length]
y_str = sentence[i + 1: i + sequence_length + 1]
x = [char_dic[c] for c in x_str] # x str to index
y = [char_dic[c] for c in y_str] # y str to index
dataX.append(x)
dataY.append(y)
if i < 5:
print ("[%4d/%4d] [%s]=>[%s]" % (i, len(sentence), x_str, y_str))
print ("%s%s => %s" % (' '*12, x, y))
print_list('dataX', dataX)
print_list('dataY', dataY)
ndata = len(dataX)
batch_size = 512
print (" 'NDATA' IS %d" % (ndata))
print ("'BATCH_SIZE' IS %d" % (batch_size))
```
## DEFINE PLACEHOLDERS
```
X = tf.placeholder(tf.int32, [None, sequence_length])
X_OH = tf.one_hot(X, num_classes)
Y = tf.placeholder(tf.int32, [None, sequence_length])
print ("'sequence_length' IS [%d]" % (sequence_length))
print (" 'num_classes' IS [%d]" % (num_classes))
print("'X' LOOKS LIKE \n [%s]" % (X))
print("'X_OH' LOOKS LIKE \n [%s]" % (X_OH))
print("'Y' LOOKS LIKE \n [%s]" % (Y))
```
## DEFINE MODEL
```
with tf.variable_scope('CHAR-RNN', reuse=False):
cell = rnn.BasicLSTMCell(hidden_size, state_is_tuple=True, reuse=False)
# cell = rnn.MultiRNNCell([cell]*2, state_is_tuple=True) # BUG IN TF1.1..
# DYNAMIC RNN WITH FULLY CONNECTED LAYER
_hiddens = tf.contrib.layers.fully_connected(X_OH, hidden_size, activation_fn=tf.nn.relu)
_rnnouts, _states = tf.nn.dynamic_rnn(cell, _hiddens, dtype=tf.float32)
_denseouts = tf.contrib.layers.fully_connected(_rnnouts, num_classes, activation_fn=None)
    # RESHAPE FOR SEQUENCE LOSS
outputs = tf.reshape(_denseouts, [batch_size, sequence_length, num_classes])
print ("_hiddens LOOKS LIKE [%s]" % (_hiddens))
print ("_rnnouts LOOKS LIKE [%s]" % (_rnnouts))
print ("_denseouts LOOKS LIKE [%s]" % (_denseouts))
print ("outputs LOOKS LIKE [%s]" % (outputs))
print ("MODEL DEFINED.")
```
## DEFINE TF FUNCTIONS
```
weights = tf.ones([batch_size, sequence_length]) # EQUAL WEIGHTS
seq_loss = tf.contrib.seq2seq.sequence_loss(
logits=outputs, targets=Y, weights=weights) # THIS IS A CLASSIFICATION LOSS
print ("weights LOOKS LIKE [%s]" % (weights))
print ("outputs LOOKS LIKE [%s]" % (outputs))
print ("Y LOOKS LIKE [%s]" % (Y))
loss = tf.reduce_mean(seq_loss)
optm = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
print ("FUNCTIONS DEFINED.")
```
## OPTIMIZE
```
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())
MAXITER = 2000
for i in range(MAXITER):
randidx = np.random.randint(low=0, high=ndata, size=batch_size)
batchX = [dataX[iii] for iii in randidx]
batchY = [dataY[iii] for iii in randidx]
feeds = {X: batchX, Y: batchY}
_, loss_val, results = sess.run(
[optm, loss, outputs], feed_dict=feeds)
if (i%200) == 0:
print ("[%5d/%d] loss_val: %.5f " % (i, MAXITER, loss_val))
```
#### BATCH LOOKS LIKE
```
print ("LENGTH OF BATCHX IS %d" % (len(batchX)))
print ("batchX[0] looks like %s" % (batchX[0]))
print ("LENGTH OF BATCHY IS %d" % (len(batchY)))
print ("batchY[0] looks like %s" % (batchY[0]))
```
## PRINT CHARS
```
randidx = np.random.randint(low=0, high=ndata, size=batch_size)
batchX = [dataX[iii] for iii in randidx]
batchY = [dataY[iii] for iii in randidx]
feeds = {X: batchX}
results = sess.run(outputs, feed_dict=feeds)
for j, result in enumerate(results):
index = np.argmax(result, axis=1)
chars = [char_set[t] for t in index]
if j < 10:
print ("OUT OF BATCHX: %s => %s" % (index, chars))
print ("BATCHY (TARGET): %s\n" % (batchY[j]))
```
### SAMPLING FUNCTION
```
LEN = 1 # <= LENGTH IS 1 !!
# XL = tf.placeholder(tf.int32, [None, LEN])
XL = tf.placeholder(tf.int32, [None, 1])
XL_OH = tf.one_hot(XL, num_classes)
with tf.variable_scope('CHAR-RNN', reuse=True):
cell_L = rnn.BasicLSTMCell(hidden_size, state_is_tuple=True, reuse=True)
# cell_L = rnn.MultiRNNCell([cell_L] * 2, state_is_tuple=True) # BUG IN TF1.1
istate = cell_L.zero_state(batch_size=1, dtype=tf.float32)
# DYNAMIC RNN WITH FULLY CONNECTED LAYER
_hiddens = tf.contrib.layers.fully_connected(XL_OH, hidden_size, activation_fn=tf.nn.relu)
_outputs_L, states_L = tf.nn.dynamic_rnn(cell_L, _hiddens
, initial_state=istate, dtype=tf.float32)
_outputs_L = tf.contrib.layers.fully_connected(
_outputs_L, num_classes, activation_fn=None)
    # RESHAPE FOR SEQUENCE LOSS
outputs_L = tf.reshape(_outputs_L, [LEN, 1, num_classes])
print ("XL LOOKS LIKE %s" % (XL))
print ("XL_OH LOOKS LIKE %s" % (XL_OH))
```
#### HELPER FUNCTION
```
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
def softmax(x):
    alpha = 1
    e_x = np.exp(alpha*(x - np.max(x)))  # subtract max for numerical stability
    return e_x / np.sum(e_x)
```
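A quick sanity check of the two helpers above (restated here so the snippet is self-contained): `softmax` should produce a proper probability distribution, and `weighted_pick` should favor high-probability indices.

```
import numpy as np

def softmax(x, alpha=1):
    e_x = np.exp(alpha * (x - np.max(x)))  # subtract max for stability
    return e_x / np.sum(e_x)

def weighted_pick(weights):
    t = np.cumsum(weights)
    s = np.sum(weights)
    return int(np.searchsorted(t, np.random.rand() * s))

np.random.seed(0)
p = softmax(np.array([1.0, 2.0, 4.0]))
assert abs(p.sum() - 1.0) < 1e-9
picks = [weighted_pick(p) for _ in range(2000)]
# index 2 has the highest probability, so it should be picked most often
```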
## SAMPLE
### BURNIN
```
prime = "Perfection is"
istateval = sess.run(cell_L.zero_state(1, tf.float32))
for i, c in enumerate(prime[:-1]):
index = char_dic[c]
inval = [[index]]
outval, stateval = sess.run([outputs_L, states_L]
, feed_dict={XL:inval, istate:istateval})
istateval = stateval # UPDATE STATE MANUALLY!!
if i < 3:
print ("[%d] -char: %s \n -inval: %s \n -outval: %s "
% (i, c, inval, outval))
```
### SAMPLE
```
inval = [[char_dic[prime[-1]]]]
outval, stateval = sess.run([outputs_L, states_L]
, feed_dict={XL:inval, istate:istateval})
istateval = stateval
index = np.argmax(outval)
char = char_set[index]
chars = char
for i in range(100):
inval = [[index]]
outval, stateval = sess.run([outputs_L, states_L]
, feed_dict={XL:inval, istate:istateval})
istateval = stateval
# index = np.argmax(outval)
index = weighted_pick(softmax(outval))
char = char_set[index]
chars += char
if i < 5:
print ("[%d] \n -inval: %s \n -outval: %s \n -index: %d (char: %s) \n -chars: %s"
% (i, inval, outval, index, char, chars))
```
### SAMPLED SENTENCE
```
print ("<SAMPLED SENTENCE> \n %s" % (prime+chars))
print ("\n<ORIGINAL SENTENCE> \n %s" % (sentence))
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train, hyperparameter tune, and deploy with Keras
## Introduction
This tutorial shows how to train a simple deep neural network using the MNIST dataset and Keras on Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of `28x28` pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit each image represents, and deploy it as a web service in Azure.
For more information about the MNIST dataset, please visit [Yann LeCun's website](http://yann.lecun.com/exdb/mnist/).
## Prerequisites
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:
* install the AML SDK
* create a workspace and its configuration file (`config.json`)
* For local scoring test, you will also need to have `tensorflow` and `keras` installed in the current Jupyter kernel.
Let's get started. First let's import some Python libraries.
```
%matplotlib inline
import numpy as np
import os
import matplotlib.pyplot as plt
import azureml
from azureml.core import Workspace
# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
```
## Create an Azure ML experiment
Let's create an experiment named "keras-mnist" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.
```
from azureml.core import Experiment
script_folder = './keras-mnist'
os.makedirs(script_folder, exist_ok=True)
exp = Experiment(workspace=ws, name='keras-mnist')
```
## Explore data
Before you train a model, you need to understand the data that you are using to train it. In this section you learn how to:
* Download the MNIST dataset
* Display some sample images
### Download the MNIST dataset
Download the MNIST dataset and save the files into a `data` directory locally. Images and labels for both training and testing are downloaded.
```
import urllib.request
data_folder = os.path.join(os.getcwd(), 'data')
os.makedirs(data_folder, exist_ok=True)
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'train-images.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'train-labels.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))
```
### Display some sample images
Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. Note this step requires a `load_data` function that's included in a `utils.py` file. This file is included in the sample folder. Please make sure it is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
```
# make sure utils.py is in the same directory as this code
from utils import load_data, one_hot_encode
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
# now let's show some randomly chosen images from the training set.
count = 0
sample_size = 30
plt.figure(figsize = (16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
count = count + 1
plt.subplot(1, sample_size, count)
plt.axhline('')
plt.axvline('')
plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
plt.show()
```
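`load_data` itself lives in `utils.py` and isn't shown here. As a hedged sketch (assuming the standard gzipped IDX layout of the MNIST files, not the actual `utils.py` implementation), the parsing looks roughly like this:

```
import gzip
import struct
import numpy as np

def load_idx(path, is_labels):
    """Parse a gzipped IDX file (the MNIST on-disk format) into a numpy array."""
    with gzip.open(path, 'rb') as f:
        if is_labels:
            magic, n = struct.unpack('>II', f.read(8))  # big-endian header
            return np.frombuffer(f.read(), dtype=np.uint8)
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        data = np.frombuffer(f.read(), dtype=np.uint8)
        return data.reshape(n, rows * cols)
```

The notebook's `load_data(path, is_labels)` takes the same arguments; the cell above additionally scales the pixel values to 0-1.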
Now you have an idea of what these images look like and the expected prediction outcome.
## Create a FileDataset
A FileDataset references one or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
```
from azureml.core.dataset import Dataset
web_paths = [
'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
]
dataset = Dataset.File.from_files(path = web_paths)
```
Use the `register()` method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
```
dataset = dataset.register(workspace = ws,
name = 'mnist dataset',
description='training and test dataset',
create_new_version=True)
```
## Create or Attach existing AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
If we cannot find a cluster with the given name, we will create a new one here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:
1. create the configuration (this step is local and only takes a second)
2. create the cluster (this step will take about **20 seconds**)
3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and provides only sparse output while in progress. Please make sure to wait until the call returns before moving to the next cell
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it uses the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
```
Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named "gpu-cluster" of type `AmlCompute`.
```
compute_targets = ws.compute_targets
for name, ct in compute_targets.items():
print(name, ct.type, ct.provisioning_state)
```
## Copy the training files into the script folder
The Keras training script is already created for you. You can simply copy it into the script folder, together with the utility library used to load the compressed data files into numpy arrays.
```
import shutil
# the training logic is in the keras_mnist.py file.
shutil.copy('./keras_mnist.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
```
## Construct neural network in Keras
The training script `keras_mnist.py` creates a very simple DNN (deep neural network), with just 2 hidden layers. The input layer has 28 * 28 = 784 neurons, each representing a pixel in an image. The first hidden layer has 300 neurons, and the second hidden layer has 100 neurons. The output layer has 10 neurons, each representing a targeted label from 0 to 9.

### Azure ML concepts
Please note the following three things in the code below:
1. The script accepts arguments using the argparse package. In this case there is one argument `--data_folder` which specifies the FileDataset in which the script can find the MNIST data
```
parser = argparse.ArgumentParser()
parser.add_argument('--data_folder')
```
2. The script accesses the Azure ML `Run` object by executing `run = Run.get_context()`. Further down, the script uses `run` to report the loss and accuracy at the end of each epoch via a callback.
```
run.log('Loss', log['loss'])
run.log('Accuracy', log['acc'])
```
3. When running the script on Azure ML, you can write files out to a folder `./outputs` that is relative to the root directory. This folder is specially tracked by Azure ML in the sense that any files written to that folder during script execution on the remote target will be picked up by Run History; these files (known as artifacts) will be available as part of the run history record.
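The callback itself isn't shown in this notebook; a minimal sketch of the pattern, with a stand-in for the Azure ML `Run` object (both class names here are hypothetical, and the actual `keras_mnist.py` may differ):

```
class FakeRun:
    """Stand-in for azureml.core.Run, just for illustration."""
    def __init__(self):
        self.logged = []
    def log(self, name, value):
        self.logged.append((name, value))

class LogRunMetrics:
    """Keras-style callback sketch: forward per-epoch metrics to the run."""
    def __init__(self, run):
        self.run = run
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.run.log('Loss', logs.get('loss'))
        self.run.log('Accuracy', logs.get('acc'))

fake_run = FakeRun()
LogRunMetrics(fake_run).on_epoch_end(0, {'loss': 0.42, 'acc': 0.88})
print(fake_run.logged)
```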
The next cell will print out the training code for you to inspect.
```
with open(os.path.join(script_folder, './keras_mnist.py'), 'r') as f:
print(f.read())
```
## Create TensorFlow estimator & add Keras
Next, we construct an `azureml.train.dnn.TensorFlow` estimator object, use the `gpu-cluster` as compute target, and pass the mount-point of the datastore to the training code as a parameter.
The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed. In this case, we add `keras` package (for the Keras framework obviously), and `matplotlib` package for plotting a "Loss vs. Accuracy" chart and record it in run history.
```
dataset = Dataset.get_by_name(ws, 'mnist dataset')
# list the files referenced by mnist dataset
dataset.to_path()
from azureml.train.dnn import TensorFlow
script_params = {
'--data-folder': dataset.as_named_input('mnist').as_mount(),
'--batch-size': 50,
'--first-layer-neurons': 300,
'--second-layer-neurons': 100,
'--learning-rate': 0.001
}
est = TensorFlow(source_directory=script_folder,
script_params=script_params,
compute_target=compute_target,
entry_script='keras_mnist.py',
pip_packages=['keras==2.2.5','azureml-dataprep[pandas,fuse]','matplotlib'])
```
## Submit job to run
Submit the estimator to the Azure ML experiment to kick off the execution.
```
run = exp.submit(est)
```
### Monitor the Run
As the Run is executed, it will go through the following stages:
1. Preparing: A docker image is created matching the Python environment specified by the TensorFlow estimator and it will be uploaded to the workspace's Azure Container Registry. This step will only happen once for each Python environment -- the container will then be cached for subsequent runs. Creating and uploading the image takes about **5 minutes**. While the job is preparing, logs are streamed to the run history and can be viewed to monitor the progress of the image creation.
2. Scaling: If the compute needs to be scaled up (i.e. the AmlCompute cluster requires more nodes to execute the run than currently available), the cluster will attempt to scale up in order to make the required amount of nodes available. Scaling typically takes about **5 minutes**.
3. Running: All scripts in the script folder are uploaded to the compute target, data stores are mounted/copied and the `entry_script` is executed. While the job is running, stdout and the `./logs` folder are streamed to the run history and can be viewed to monitor the progress of the run.
4. Post-Processing: The `./outputs` folder of the run is copied over to the run history
There are multiple ways to check the progress of a running job. We can use a Jupyter notebook widget.
**Note: The widget will automatically update every 10-15 seconds, always showing you the most up-to-date information about the run**
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
We can also periodically check the status of the run object, and navigate to Azure portal to monitor the run.
```
run
run.wait_for_completion(show_output=True)
```
The training script prints out the Keras version number in its output. Please make a note of it.
### The Run object
The Run object provides the interface to the run history -- both to the job and to the control plane (this notebook), and both while the job is running and after it has completed. It provides a number of interesting features for instance:
* `run.get_details()`: Provides a rich set of properties of the run
* `run.get_metrics()`: Provides a dictionary with all the metrics that were reported for the Run
* `run.get_file_names()`: List all the files that were uploaded to the run history for this Run. This will include the `outputs` and `logs` folder, azureml-logs and other logs, as well as files that were explicitly uploaded to the run using `run.upload_file()`
Below are some examples -- please run through them and inspect their output.
```
run.get_details()
run.get_metrics()
run.get_file_names()
```
## Download the saved model
In the training script, the Keras model is saved into two files, `model.json` and `model.h5`, in the `outputs/models` folder on the gpu-cluster AmlCompute node. Azure ML automatically uploads anything written in the `./outputs` folder into the run history file store. Subsequently, we can use the `run` object to download the model files. They are under the `outputs/model` folder in the run history file store, and are downloaded into a local folder named `model`.
```
# create a model folder in the current directory
os.makedirs('./model', exist_ok=True)
for f in run.get_file_names():
if f.startswith('outputs/model'):
output_file_path = os.path.join('./model', f.split('/')[-1])
print('Downloading from {} to {} ...'.format(f, output_file_path))
run.download_file(name=f, output_file_path=output_file_path)
```
## Predict on the test set
Let's check the version of Keras installed locally. Make sure it matches the version number printed out in the training script; otherwise you might not be able to load the model properly.
```
import keras
import tensorflow as tf
print("Keras version:", keras.__version__)
print("Tensorflow version:", tf.__version__)
```
Now let's load the downloaded model.
```
from keras.models import model_from_json
# load json and create model
json_file = open('model/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model/model.h5")
print("Model loaded from disk.")
```
Feed test dataset to the persisted model to get predictions.
```
# evaluate loaded model on test data
loaded_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])  # multi-class, so categorical (not binary) cross-entropy
y_test_ohe = one_hot_encode(y_test, 10)
y_hat = np.argmax(loaded_model.predict(X_test), axis=1)
# print the first 30 labels and predictions
print('labels: \t', y_test[:30])
print('predictions:\t', y_hat[:30])
```
Calculate the overall accuracy by comparing the predicted value against the test set.
```
print("Accuracy on the test set:", np.average(y_hat == y_test))
```
## Intelligent hyperparameter tuning
We have trained the model with one set of hyperparameters; now let's see how we can do hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.
```
from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform
ps = RandomParameterSampling(
{
'--batch-size': choice(25, 50, 100),
'--first-layer-neurons': choice(10, 50, 200, 300, 500),
'--second-layer-neurons': choice(10, 50, 200, 500),
'--learning-rate': loguniform(-6, -1)
}
)
```
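`loguniform(-6, -1)` draws values whose natural log is uniform on [-6, -1], i.e. learning rates roughly between e^-6 ≈ 0.0025 and e^-1 ≈ 0.37, spread evenly across orders of magnitude rather than linearly. A quick NumPy illustration of that sampling distribution:

```
import numpy as np

rng = np.random.default_rng(0)
samples = np.exp(rng.uniform(-6, -1, size=10_000))  # log-uniform draw
print(samples.min(), samples.max())  # bounded by e^-6 and e^-1
```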
Next, we will create a new estimator without the above parameters since they will be passed in later by the Hyperdrive configuration. Note we still need to keep the `data-folder` parameter since that's not a hyperparameter we will sweep.
```
est = TensorFlow(source_directory=script_folder,
script_params={'--data-folder': dataset.as_named_input('mnist').as_mount()},
compute_target=compute_target,
entry_script='keras_mnist.py',
pip_packages=['keras==2.2.5','azureml-dataprep[pandas,fuse]','matplotlib'])
```
Now we will define an early termination policy. The `BanditPolicy` basically states to check the job every 2 iterations. If the primary metric (defined later) falls outside of the top 10% range, Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric.
```
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
```
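Concretely, with `slack_factor=0.1` and a maximized metric, a run is terminated when its best reported metric falls below `best_overall / (1 + slack_factor)`. A tiny sketch of that rule (my paraphrase of the documented behavior, not the HyperDrive implementation):

```
def bandit_terminates(run_metric, best_metric, slack_factor=0.1):
    """True if a run falls outside the allowed slack of the current best."""
    return run_metric < best_metric / (1 + slack_factor)

print(bandit_terminates(0.80, 0.95))  # 0.80 < 0.95/1.1 ~= 0.864 -> terminate
print(bandit_terminates(0.90, 0.95))  # within slack -> keep running
```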
Now we are ready to configure a run configuration object, and specify the primary metric `Accuracy` that's recorded in your training runs. If you go back to visit the training script, you will notice that this value is being logged after every epoch (a full batch set). We also want to tell the service that we are looking to maximize this value. We also set the number of samples to 20, and the maximum number of concurrent runs to 4, which is the same as the number of nodes in our compute cluster.
```
hdc = HyperDriveConfig(estimator=est,
hyperparameter_sampling=ps,
policy=policy,
primary_metric_name='Accuracy',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=20,
max_concurrent_runs=4)
```
Finally, let's launch the hyperparameter tuning job.
```
hdr = exp.submit(config=hdc)
```
We can use a run history widget to show the progress. Be patient as this might take a while to complete.
```
RunDetails(hdr).show()
hdr.wait_for_completion(show_output=True)
```
### Warm start a Hyperparameter Tuning experiment and resuming child runs
Oftentimes, finding the best hyperparameter values for your model can be an iterative process, needing multiple tuning runs that learn from previous hyperparameter tuning runs. Reusing knowledge from these previous runs will accelerate the hyperparameter tuning process, thereby reducing the cost of tuning the model and will potentially improve the primary metric of the resulting model. When warm starting a hyperparameter tuning experiment with Bayesian sampling, trials from the previous run will be used as prior knowledge to intelligently pick new samples, so as to improve the primary metric. Additionally, when using Random or Grid sampling, any early termination decisions will leverage metrics from the previous runs to determine poorly performing training runs.
Azure Machine Learning allows you to warm start your hyperparameter tuning run by leveraging knowledge from up to 5 previously completed hyperparameter tuning parent runs.
Additionally, there might be occasions when individual training runs of a hyperparameter tuning experiment are cancelled due to budget constraints or fail for other reasons. It is now possible to resume such individual training runs from the last checkpoint (assuming your training script handles checkpoints). Resuming an individual training run will use the same hyperparameter configuration and mount the storage used for that run. The training script should accept the "--resume-from" argument, which contains the checkpoint or model files from which to resume the training run. You can also resume individual runs as part of an experiment that spends additional budget on hyperparameter tuning. Any budget remaining after resuming the specified training runs is used for exploring additional configurations.
For more information on warm starting and resuming hyperparameter tuning runs, please refer to the [Hyperparameter Tuning for Azure Machine Learning documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters)
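On the training-script side, supporting resume can be as simple as one extra argument. A minimal sketch (the checkpoint path and the commented loading call are placeholders you would replace with your framework's API):

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--resume-from', type=str, default=None,
                    help='folder containing checkpoint/model files to resume from')
# Example invocation; in a real run argparse reads sys.argv instead
args = parser.parse_args(['--resume-from', 'outputs/ckpt'])

if args.resume_from and os.path.isdir(args.resume_from):
    # e.g. model.load_weights(os.path.join(args.resume_from, 'model.h5'))
    print('resuming from', args.resume_from)
else:
    print('starting fresh')
```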
## Find and register best model
When all the runs finish, we can find the one with the highest accuracy.
```
best_run = hdr.get_best_run_by_primary_metric()
print(best_run.get_details()['runDefinition']['arguments'])
```
Now let's list the model files uploaded during the run.
```
print(best_run.get_file_names())
```
We can then register the folder (and all files in it) as a model named `keras-mlp-mnist` under the workspace for deployment.
```
model = best_run.register_model(model_name='keras-mlp-mnist', model_path='outputs/model')
```
## Deploy the model in ACI
Now we are ready to deploy the model as a web service running in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/). Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in.
### Create score.py
First, we will create a scoring script that will be invoked by the web service call.
* Note that the scoring script must have two required functions, `init()` and `run(input_data)`.
* In the `init()` function, you typically load the model into a global object. This function is executed only once, when the Docker container is started.
* In the `run(input_data)` function, the model is used to predict a value based on the input data. The input and output of `run` typically use JSON for serialization and deserialization, but you are not limited to that.
```
%%writefile score.py
import json
import numpy as np
import os
from keras.models import model_from_json
from azureml.core.model import Model
def init():
global model
model_root = Model.get_model_path('keras-mlp-mnist')
# load json and create model
json_file = open(os.path.join(model_root, 'model.json'), 'r')
model_json = json_file.read()
json_file.close()
model = model_from_json(model_json)
# load weights into new model
model.load_weights(os.path.join(model_root, "model.h5"))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
def run(raw_data):
data = np.array(json.loads(raw_data)['data'])
# make prediction
y_hat = np.argmax(model.predict(data), axis=1)
return y_hat.tolist()
```
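Before deploying, you can exercise the same JSON contract locally with a stand-in model. `StubModel` here is a made-up class that mimics `np.argmax(model.predict(...), axis=1)` using plain lists:

```python
import json

class StubModel:
    """Hypothetical stand-in for the Keras model: returns the index of the
    largest value in each input row, like np.argmax(..., axis=1)."""
    def predict(self, rows):
        return [max(range(len(r)), key=r.__getitem__) for r in rows]

model = StubModel()

def run(raw_data):
    # Same deserialization shape as score.py's run()
    data = json.loads(raw_data)["data"]
    return model.predict(data)

payload = json.dumps({"data": [[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]]})
print(run(payload))  # → [1, 0]
```

This lets you catch serialization bugs without waiting for a container build.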
### Create myenv.yml
We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda packages `tensorflow` and `keras`.
```
from azureml.core.conda_dependencies import CondaDependencies
cd = CondaDependencies.create()
cd.add_tensorflow_conda_package()
cd.add_conda_package('keras==2.2.5')
cd.add_pip_package("azureml-defaults")
cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')
print(cd.serialize_to_string())
```
### Deploy to ACI
We are almost ready to deploy. Create the inference configuration and deployment configuration and deploy to ACI. This cell will run for about 7-8 minutes.
```
from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
from azureml.core.model import Model
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
auth_enabled=True, # this flag generates API keys to secure access
memory_gb=1,
tags={'name': 'mnist', 'framework': 'Keras'},
description='Keras MLP on MNIST')
service = Model.deploy(workspace=ws,
name='keras-mnist-svc',
models=[model],
inference_config=inference_config,
deployment_config=aciconfig)
service.wait_for_deployment(True)
print(service.state)
```
**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`
This is the scoring web service endpoint:
```
print(service.scoring_uri)
```
### Test the deployed model
Let's test the deployed model by picking 30 random samples from the test set and sending them to the web service hosted in ACI. Note that here we use the `run` API in the SDK to invoke the service; you can also make raw HTTP calls using any HTTP tool such as curl.
After the invocation, we print the returned predictions and plot them along with the input images, using a red font and an inverted image (white on black) to highlight the misclassified samples. Since the model accuracy is quite high, you might have to run the cell below a few times before you see a misclassified sample.
```
import json
# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]
test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')
# predict using the deployed model
result = service.run(input_data=test_samples)
# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize = (20, 1))
for s in sample_indices:
plt.subplot(1, n, i + 1)
plt.axhline('')
plt.axvline('')
# use different color for misclassified sample
font_color = 'red' if y_test[s] != result[i] else 'black'
clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
plt.text(x=10, y=-10, s=y_test[s], fontsize=18, color=font_color)
plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
i = i + 1
plt.show()
```
We can retrieve the API keys used for accessing the HTTP endpoint.
```
# Retrieve the API keys. Two keys were generated.
key1, key2 = service.get_keys()
print(key1)
```
We can now construct a raw HTTP request and send it to the service. Don't forget to add the key to the HTTP header.
```
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
#print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
```
Let's look at the workspace after the web service was deployed. You should see
* a registered model named 'keras-mlp-mnist' with an ID like 'keras-mlp-mnist:1'
* a webservice called 'keras-mnist-svc' with a scoring URL
```
model = ws.models['keras-mlp-mnist']
print("Model: {}, ID: {}".format('keras-mlp-mnist', model.id))
webservice = ws.webservices['keras-mnist-svc']
print("Webservice: {}, scoring URI: {}".format('keras-mnist-svc', webservice.scoring_uri))
```
## Clean up
You can delete the ACI deployment with a simple delete API call.
```
service.delete()
```
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<p>The code in this notebook is hidden by default for readability.
To toggle between modes, click <a href="javascript:code_toggle()">here</a>.</p>
<p>To (re)run pyÉtude, click <a href="javascript:IPython.notebook.restart_run_all()">here</a>.</p>''')
HTML('''
<h1>pyÉtude</h1><h2>Jupyter Notebook - v3.1.0</h2><h4><a href="https://github.com/BourgonLaurent/pyEtude">View the project on GitHub</a></h4><h5><a href="https://github.com/BourgonLaurent/pyEtude/blob/master/LICENSE" target="_blank">Under the MIT license</a></h5>''')
import json
with open("pyEtude.json") as json_f:
jsonData = json.load(json_f)
matieres = jsonData["matieres"]
matieres_list = [f"{mat_name} - {matieres[mat_name][0]}" for mat_name in matieres]
matieres_list_solo = [mat_name for mat_name in matieres]
from IPython.display import Javascript, display, FileLink
from ipywidgets import widgets
info_widgets = {}
info_widgets["titre"] = widgets.Text(
value='',
placeholder='Les Lois de Newton',
description='Titre:'
)
info_widgets["sous_titre"] = widgets.Text(
value='',
placeholder='Newton, grand physicien',
description='Sous-Titre:'
)
info_widgets["matiere"] = dict()
info_widgets["matiere"]["drop"] = widgets.Dropdown(
value="Physique - PHY",
options=matieres_list,
description='Matière:',
disabled=False,
)
info_widgets["matiere"]["check"] = widgets.Checkbox(
value=False,
description='Personnaliser:',
disabled=False,
indent=True
)
info_widgets["matiere"]["entry"] = widgets.Text(
value='',
placeholder='PHY',
description='',
disabled=True
)
def checkbox_checked(update):
state = info_widgets["matiere"]["check"].value
if state == True:
info_widgets["matiere"]["drop"].disabled = True
info_widgets["matiere"]["entry"].disabled = False
elif state == False:
info_widgets["matiere"]["drop"].disabled = False
info_widgets["matiere"]["entry"].disabled = True
info_widgets["matiere"]["check"].observe(checkbox_checked)
info_widgets["numero"] = dict()
info_widgets["numero"]["combo"] = widgets.Combobox(
value="CHP",
options=['CHP', 'MOD'],
description='Numéro:',
ensure_option=True,
disabled=False
)
info_widgets["numero"]["intext"] = widgets.BoundedIntText(
value=1,
step=1,
disabled=False
)
info_widgets["section"] = widgets.Text(
value='',
placeholder="La Loi de l'Inertie",
description='1ʳᵉ Section:'
)
info_widgets["generate"] = widgets.Button(
description='Générer',
disabled=False,
button_style='warning', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Générer le document',
icon='file' # (FontAwesome names without the `fa-` prefix)
)
def generate_clicked(update):
info_widgets["titre"].disabled = True
info_widgets["sous_titre"].disabled = True
info_widgets["matiere"]["drop"].disabled = True
info_widgets["matiere"]["check"].disabled = True
info_widgets["matiere"]["entry"].disabled = True
info_widgets["numero"]["combo"].disabled = True
info_widgets["numero"]["intext"].disabled = True
info_widgets["section"].disabled = True
info_widgets["generate"].disabled = True
info_widgets["generate"].description = 'Génération...'
info_widgets["generate"].button_style = 'info'
info_widgets["generate"].icon = 'ellipsis-h'
status = generate_document()
if status == True:
info_widgets["generate"].description = 'Généré!'
info_widgets["generate"].button_style = 'success'
info_widgets["generate"].icon = 'check'
info_widgets["generate"].on_click(generate_clicked)
import os, zipfile, locale
class Document:
def __init__(self, titre, soustitre, auteur, niveau, matiere, numero, section, model, filepath):
self.titre = titre
self.soustitre = soustitre
self.auteur = auteur
self.niveau = niveau
self.matiere = matiere
self.numero = numero
self.section = section
self.model = model
self.filepath = filepath
self.options = {"pyETUDE_Titre": titre,
"pyETUDE_SousTitre": soustitre,
"pyETUDE_Matiere": matiere,
"pyETUDE_Auteur": auteur,
"pyETUDE_Niv": niveau,
"pyETUDE_Num": numero}
self.sections = {"sectionpy": section}
self.folder = f"{matiere}-{numero}_tmpyETUDE"
self.main()
def main(self):
self.exportWord(self.model, self.folder)
self.modifyOptions(os.path.join(self.folder, "word", "document.xml"), self.options)
self.modifyOptions(os.path.join(self.folder, "word", "header1.xml"), self.options)
self.modifyOptions(os.path.join(self.folder, "word", "footer1.xml"), self.options)
self.modifyOptions(os.path.join(self.folder, "word", "document.xml"), self.sections)
self.packWord(self.folder, self.filepath)
self.cleanTemp(self.folder)
def exportWord(self, model:str, folder:str) -> str:
"""## Extract the specified `.zip` file
### Arguments:\n
\tmodel {str} -- file that will be extracted (Do not forget the .docx!)
\tfolder {str} -- folder that will receive the extracted file
### Returns:\n
\tstr -- the name of the folder where it was extracted
"""
with zipfile.ZipFile(model, "r") as model_file:
model_file.extractall(folder)
model_file.close()
return folder
def modifyOptions(self, path:str, info:dict) -> str:
        """## Send command to {self.searchAndReplace} for all values in a dictionary
### Arguments:\n
\tpath {str} -- The path of the file to be modified
        \tinfo {dict} -- A dictionary in this format {"to search":"to replace"}
### Returns:\n
\tstr -- Returns the name of the file modified
"""
for search, replace in info.items():
self.searchAndReplace(path, search, replace)
return path
def searchAndReplace(self, infile:str, search:str, replace:str) -> str:
        """## Search the specified file for the keyword given and replace it with the third argument
### Arguments:\n
\tinfile {str} -- The file to change
\tsearch {str} -- The word to replace
        \treplace {str} -- The word that will replace {search}
### Returns:\n
\tstr -- Returns the name of the file modified
"""
if os.path.isfile(infile):
with open(infile, "r", encoding="utf8") as in_f:
data_f = in_f.read()
with open(infile, "w", encoding='utf8') as out_f:
out_f.write(data_f.replace(search, replace))
else:
raise FileNotFoundError
return infile
def packWord(self, folder:str, final:str) -> str:
"""## Zip the folder specified
This will only zip the contents of the folder, not the base folder
### Arguments:\n
\tfolder {str} -- the folder that will be zipped
\tfinal {str} -- the name of the archive (Do not forget the .docx!)
### Returns:\n
\tstr -- the name of the zip file that was created
"""
locale.setlocale(locale.LC_ALL, (None, None)) # Fix compatibility with locale
with zipfile.ZipFile(final, "w", compression=zipfile.ZIP_DEFLATED) as zip_file:
for root, dirs, files in os.walk(folder): # pylint: disable=unused-variable
# zip_file.write(os.path.join(root, "."))
for File in files:
filePath = os.path.join(root, File)
inZipPath = filePath.replace(folder, "", 1).lstrip("\\/")
zip_file.write(filePath, inZipPath)
        #print(f"\nThe document was created: {self.filepath}") # if launched from the command line
return final
def cleanTemp(self, folder:str) -> str:
"""## Clean the temporary folder
DANGEROUS: THIS WILL DELETE ALL THE FILES IN THE SPECIFIED FOLDER!!
### Arguments:\n
\tfolder {str} -- The folder that will be deleted
### Raises:\n
\tNotADirectoryError: The specified folder is not a folder
### Returns:\n
\tstr -- Returns the name of the folder deleted
"""
if os.path.isdir(folder):
for root, dirs, files in os.walk(folder, topdown=False):
for File in files:
os.remove(os.path.join(root, File))
for Dir in dirs:
os.rmdir(os.path.join(root, Dir))
os.rmdir(folder)
else:
raise NotADirectoryError
return folder
def generate_document():
global jsonData, matieres, filename
titre = info_widgets["titre"].value
if titre == "":
titre = info_widgets["titre"].placeholder
sous_titre = info_widgets["sous_titre"].value
if sous_titre == "":
sous_titre = info_widgets["sous_titre"].placeholder
if info_widgets["matiere"]["drop"].disabled == False:
matiere = matieres[matieres_list_solo[info_widgets["matiere"]["drop"].index]][0]
else:
matiere = info_widgets["matiere"]["entry"].value
if matiere == "":
matiere = info_widgets["matiere"]["entry"].placeholder
numero = info_widgets["numero"]["combo"].value + str(info_widgets["numero"]["intext"].value)
filename = f"{matiere}-{numero}.docx"
section = info_widgets["section"].value
if section == "":
section = info_widgets["section"].placeholder
auteur = jsonData["auteur"]
niveau = jsonData["niveau"]
model = "model.docx"
filepath = os.path.join(os.getcwd(), filename)
Document(titre, sous_titre, auteur, niveau, matiere, numero, section, model, filepath)
display(FileLink(filename, result_html_prefix="Cliquez ici pour télécharger votre document généré: "))
return True
# Invisible space ("braille"): widgets.Label("⠀")
widgets.GridBox((info_widgets["titre"], info_widgets["sous_titre"], widgets.Label("⠀"),
widgets.Label("⠀"), widgets.Label("⠀"), widgets.Label("⠀"),
info_widgets["matiere"]["drop"], info_widgets["matiere"]["check"], info_widgets["matiere"]["entry"],
info_widgets["numero"]["combo"], info_widgets["numero"]["intext"], widgets.Label("⠀"),
widgets.Label("⠀"), widgets.Label("⠀"), widgets.Label("⠀"),
info_widgets["section"], widgets.Label("⠀"), widgets.Label("⠀"),
widgets.Label("⠀"), widgets.Label("⠀"), widgets.Label("⠀"),
info_widgets["generate"]),
layout=widgets.Layout(grid_template_columns="repeat(3, 300px)"))
```
# Build your Own Evaluation Framework
In this lab exercise we're going to build our own model evaluation framework. Pretty much every machine learning experiment follows the same template:
1. Load a dataset and split into train and test sets
2. Create a model and train it on your training data
3. Predict the labels for the test data and compare with the actual labels
4. Record whatever evaluation metrics you are using
For this example we're going to use the wine dataset [available from the UCI machine learning repository](https://archive.ics.uci.edu/ml/datasets/wine+quality). The csv file for this dataset is available in Brightspace.
```
import pandas as pd
import numpy as np
import math
df = pd.read_csv('wine.csv')
```
## Splitting our data into X and y
The first thing we want to do when we load up a dataset is separate the X and Y data. If we accidentally leave our labels in the dataset when we train our models we'll get incredible results! They won't hold up in the real world though!
The pandas **pop()** method lets us extract a column from a dataset. We're using pop() here to pull the wine column out separately as our label column. Unlike most python functions, the **pop()** method has a side effect; as well as returning the column to us, it removes it from the original dataframe. Python usually avoids functions like this because it can be difficult to see what's happening with them, but popping the label column is such a common use case for machine learning that it's survived here.
```
X = df.copy()
y = X.pop('Wine').values
print("X Data")
print(X)
print("Labels")
print(y)
```
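The side effect is easy to see on a toy frame (made-up data, not the wine set):

```python
import pandas as pd

toy = pd.DataFrame({'a': [1, 2], 'label': [0, 1]})
labels = toy.pop('label')   # returns the column...
print(list(toy.columns))    # → ['a']  ...and removes it from the frame
print(labels.tolist())      # → [0, 1]
```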
## Splitting our Data into Train and Test
If we're going to evaluate our model we need one portion of the data to train our model and another portion of the data to test it. Sci-kit-learn provides a convenience function to do this for us but we're going to have a go at implementing this functionality ourselves first.
We want to write a function which takes a dataframe and an array of labels, and returns
* 2 dataframes, one training set with X% of the data and one test set with the remainder
* 2 label arrays, one training set with X% of the data and one test set with the remainder
Our function takes a proportion between 0 and 1 and splits the training and test set accordingly. Our first job is to work out how many rows belong to the training set
```
def split_data(X: pd.DataFrame, y: np.array, train_proportion: float):
# We use floor here (or ceiling) to ensure that we take a whole number of rows
num_train = math.floor(len(X) * train_proportion)
# Using [start:end] indexing, this takes all rows from 0 up to num_train (exclusive)
X_train = X.iloc[:num_train,]
# For our test set we'll take everything from num_train up to the end of the dataframe
X_test = X.iloc[num_train:,]
# Do the same with the y-data (note this is just a regular array and so we don't need .iloc
# we can index it directly)
y_train = y[:num_train]
y_test = y[num_train:]
# Use the comma operator to return all 4 values from our function
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split_data(X, y, 0.8)
```
The function above returns the right number of instances for train and test, but we're just taking the first N rows of the dataframe. This can be a problem, as dataframes are often ordered by the label, so our model might be missing a substantial number of rows from one class. It's always important to ensure that we randomize before sampling.
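To see why taking the first N rows is risky, consider labels that arrive sorted by class (a toy example):

```python
y = [0] * 8 + [1] * 8 + [2] * 4   # 20 labels, ordered by class
num_train = int(len(y) * 0.8)     # 16 training rows
y_train, y_test = y[:num_train], y[num_train:]
print(sorted(set(y_train)))  # → [0, 1]  class 2 never appears in training
print(sorted(set(y_test)))   # → [2]     and the test set is all one class
```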
We can use the numpy **random.shuffle()** to randomly shuffle our array before splitting it. This will make sure that we select random rows from our dataframe
```
from sklearn.utils import shuffle
def split_data(X: pd.DataFrame, y: np.array, train_proportion: float):
# It's important to make a copy here.
# Check what happens if you don't do it. Shuffle the y array rather than a copy
# of it and print the contents of y before and after calling this function
X_shuffle = shuffle(X.copy())
y_shuffle = shuffle(y.copy())
# We use floor here (or ceiling) to ensure that we take a whole number of rows
num_train = math.floor(len(X) * train_proportion)
# Using [start:end] indexing, this takes all rows from 0 up to num_train (exclusive)
X_train = X_shuffle.iloc[:num_train,]
# For our test set we'll take everything from num_train up to the end of the dataframe
X_test = X_shuffle.iloc[num_train:,]
# Do the same with the y-data (note this is just a regular array and so we don't need .iloc
# we can index it directly)
y_train = y_shuffle[:num_train]
y_test = y_shuffle[num_train:]
# Use the comma operator to return all 4 values from our function
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split_data(X, y, 0.8)
```
We're now making sure that our choice of train and test is properly randomized. However, because we've called shuffle twice on two separate arrays, these arrays will no longer correspond to each other and our labels will all be wrong.
Whenever we generate a random sequence of numbers, we can ensure that we get the same sequence across multiple calls by supplying a **random seed** or **random state**. Numpy provides its own RandomState class. We're going to use this to make sure we have the same result when shuffling both the X and y data. We'll allow the caller to pass in a random seed when they call the function
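The effect of a fixed seed is easy to verify with the standard library: shuffling two parallel, equal-length sequences with the same seed applies the same permutation (a standalone illustration using `random`, not sklearn's shuffle):

```python
import random

xs = list(range(10))
ys = list('abcdefghij')        # ys[i] is the "label" for xs[i]
random.Random(13).shuffle(xs)
random.Random(13).shuffle(ys)  # same seed, so same permutation

# every (xs[i], ys[i]) pair still lines up after shuffling
print(all(ys[i] == 'abcdefghij'[x] for i, x in enumerate(xs)))  # → True
```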
```
from numpy.random import RandomState
from sklearn.utils import shuffle
def split_data(X: pd.DataFrame, y: np.array, train_proportion: float, random_seed: int):
# It's important to make a copy here.
# Check what happens if you don't do it. Shuffle the y array rather than a copy
# of it and print the contents of y before and after calling this function
rs = RandomState(random_seed)
X_shuffle = shuffle(X.copy(), random_state=rs)
# reset the random state so we get the same result from shuffle
rs = RandomState(random_seed)
y_shuffle = shuffle(y.copy(), random_state=rs)
# We use floor here (or ceiling) to ensure that we take a whole number of rows
num_train = math.floor(len(X) * train_proportion)
# Using [start:end] indexing, this takes all rows from 0 up to num_train (exclusive)
X_train = X_shuffle.iloc[:num_train,]
# For our test set we'll take everything from num_train up to the end of the dataframe
X_test = X_shuffle.iloc[num_train:,]
# Do the same with the y-data (note this is just a regular array and so we don't need .iloc
# we can index it directly)
y_train = y_shuffle[:num_train]
y_test = y_shuffle[num_train:]
# Use the comma operator to return all 4 values from our function
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split_data(X, y, 0.8, 13)
```
If we prefer we can use scikit-learn to do the job of splitting the dataset for us. It's often a good idea to let the libraries do the hard work, but it's important to know how to implement something yourself if your needs aren't quite met by the library.
```
from sklearn.model_selection import train_test_split
from numpy.random import RandomState
random_seed = 13
rs = RandomState(random_seed)
# train_test_split() expects the X and y parameters to correspond to each other, meaning
# that the first value in y is the label for the first row in X. This function will
# ensure that corresponding values aren't changed.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=rs)
def get_shape(df: pd.DataFrame):
return f"{df.shape[0]} rows, {df.shape[1]} columns"
print(f"X Train: {get_shape(X_train)}")
print(f"X Test: {get_shape(X_test)}")
print(f"y Train: {len(y_train)} rows")
print(f"y Test: {len(y_test)} rows")
```
## Putting the Dataset Together
We've seen that in order to get a dataset ready for machine learning we need to
* Read the dataset from a file
* split off the label column
* split the data into train and test
This is generally going to be the case for any type of dataset so we can make our lives much easier by creating a function to do this work for us. The function below takes everything we've done so far and puts it together.
```
def load_dataset(filepath: str, label_column: str, train_proportion: float, random_seed: int):
df = pd.read_csv(filepath)
label = df.pop(label_column)
X_train, X_test, y_train, y_test = split_data(df, label, train_proportion, random_seed)
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = load_dataset('wine.csv', 'Wine', 0.8, 13)
```
## Training and Evaluating the Model
Now that we've split our data into X and y and train and test we're ready to train and evaluate our model. No matter what model we're evaluating or what dataset we're using, the steps here will always be the same.
1. Train the model using X_train and y_train
2. Make a prediction for each item in X_test
3. Compare the predictions (y_pred) with the actual labels (y_test) and calculate metrics
```
from sklearn.tree import DecisionTreeClassifier
# Create the model
model = DecisionTreeClassifier()
# To train the model we pass in both the data and the labels
model.fit(X_train, y_train)
# We ask the model to predict labels for each of our test rows
y_pred = model.predict(X_test)
# The numpy equal function takes 2 arrays and for each element returns true if the
# corresponding elements are equal, false otherwise. If y_pred[i] == y_test[i] then
# the model was correct
correct = np.equal(y_pred, y_test)
```
The total number of correct predictions isn't very useful on its own. At a minimum we'll want the proportion of predictions which were *incorrect* (*i.e.* the misclassification rate, which is 1 minus the accuracy). The following function calculates the misclassification rate from y_pred and y_test
```
def get_misclassification_rate(y_pred, y_test):
    correct = np.equal(y_pred, y_test)
    # By summing a boolean array we count the number of True values
    total_correct = sum(correct)
    # Getting the length gives us the total number of predictions made
    total_predictions = len(correct)
    # Misclassification rate = 1 - accuracy
    return 1 - (total_correct / total_predictions)
get_misclassification_rate(y_pred, y_test)
```
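It's worth sanity-checking any metric on arrays where you know the answer. A standalone check (recomputing accuracy and misclassification directly, rather than calling the function above):

```python
import numpy as np

y_pred_toy = np.array([1, 2, 2, 3])
y_test_toy = np.array([1, 2, 3, 3])  # one of four predictions is wrong
accuracy = np.mean(np.equal(y_pred_toy, y_test_toy))
misclassification = 1 - accuracy
print(accuracy, misclassification)  # → 0.75 0.25
```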
## Making the Code Reusable
Every evaluation is going to look the same, so by writing a function to train and evaluate the model we can make it very easy for ourselves to compare additional models. In order to train and evaluate a model we'll need
* X_train
* X_test
* y_train
* y_test
* a model
Let's create a function taking each of these as a parameter and returning the misclassification
```
def get_misclassification_rate(y_pred, y_test):
    correct = np.equal(y_pred, y_test)
    # By summing a boolean array we count the number of True values
    total_correct = sum(correct)
    # Getting the length gives us the total number of predictions made
    total_predictions = len(correct)
    # Misclassification rate = 1 - accuracy
    return 1 - (total_correct / total_predictions)
# There's no easy way to specify a type for all sklearn models, so we annotate the model
# parameter as `any`, meaning any type is allowed here. For the function to work, the model
# needs a .fit() method and a .predict() method
def evaluate_model(X_train: pd.DataFrame, X_test: pd.DataFrame, y_train: np.array, y_test: np.array, model: any) -> float:
# To train the model we pass in both the data and the labels
model.fit(X_train, y_train)
# We ask the model to predict labels for each of our test rows
y_pred = model.predict(X_test)
return get_misclassification_rate(y_pred, y_test)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
models = [LogisticRegression(max_iter=200000), DecisionTreeClassifier()]
X_train, X_test, y_train, y_test = load_dataset('wine.csv', 'Wine', 0.8, 13)
for model in models:
# we're digging into the SKLearn model to get its name
print(f"{type(model).__name__}: {evaluate_model(X_train, X_test, y_train, y_test, model)}")
```
# Extraction of Nibelungen texts
It is really hard to find Nibelungenlied texts online in a form that is:
- free access
- made by scholars
- easily parsable
**Our aim is to make semantic analysis from Nibelungen texts and Völsunga saga.**
I found PDF files from [Universität Wien](https://www.univie.ac.at/nibelungenwerkstatt/) which contained only the raw texts, however, parsing them is unfeasible because no spaces remained after extraction and tokenizing words is too hard for Mittelhochdeutsch.
I found HTML files from [Augsburg Hochschule](https://www.hs-augsburg.de/~harsch/germanica/Chronologie/d_chrono.html) and it was easier to extract them.
## From PDF
```
link_nibelungen = "https://www.univie.ac.at/nibelungenwerkstatt/files/wrkst_codices.zip"
```
The zip file is downloaded.
```
import requests
r = requests.get(link_nibelungen)
with open(link_nibelungen.split("/")[-1], "wb") as f:
f.write(r.content)
```
The zip file is unzipped.
```
import zipfile
with zipfile.ZipFile(link_nibelungen.split("/")[-1], "r") as f:
f.extractall(".")
```
PDF files are read and texts are extracted.
```
import PyPDF2
l_text = []
with open("gr-A_nib.pdf", "rb") as f:
pdf_reader = PyPDF2.PdfFileReader(f)
print("There are "+str(pdf_reader.getNumPages())+" pages.")
for page_index in range(pdf_reader.getNumPages()):
page = pdf_reader.getPage(page_index)
l_text.append(page.extractText())
print(repr(l_text[1]))
```
Words are not tokenized so it's useless for what we want.
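A made-up sample shows the problem: with the spaces gone, whitespace tokenization returns a single "word":

```python
sample = "UnsistindenaltenmaerenwundersvilGeseit"  # invented, space-less line
tokens = sample.split()
print(len(tokens))  # → 1: the whole line is one token
```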
## From HTML
```
import requests
from bs4 import BeautifulSoup
main_links = [
"https://www.hs-augsburg.de/~harsch/germanica/Chronologie/12Jh/Nibelungen/nib_c_00.html",
"https://www.hs-augsburg.de/~harsch/germanica/Chronologie/12Jh/Nibelungen/nib_b_00.html",
"https://www.hs-augsburg.de/~harsch/germanica/Chronologie/12Jh/Nibelungen/nib_a_00.html",
"https://www.hs-augsburg.de/~harsch/germanica/Chronologie/12Jh/Nibelungen/nib_n_00.html"
]
n_pages = 39
```
#### Making links
```
def int_to_string(i):
if 0 <= i < 10:
return "0"+str(i)
else:
return str(i)
links = {}
for link in main_links:
links[link] = []
for i in range(n_pages+1):
link.split("/")
links[link].append("/".join(link.split("/")[:-1])+"/"+
link.split("/")[-1].split(".")[0][:-2]+int_to_string(i)+".html")
```
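As an aside, the `int_to_string` helper above is equivalent to Python's built-in zero-padding format spec for the 0-99 range:

```python
def int_to_string(i):
    # same logic as the helper in the cell above
    return "0" + str(i) if 0 <= i < 10 else str(i)

# f-string format spec and str.zfill give the same result for this range
print(int_to_string(7), f"{7:02d}", str(7).zfill(2))     # → 07 07 07
print(int_to_string(23), f"{23:02d}", str(23).zfill(2))  # → 23 23 23
```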
#### Retrieving parts
```
import time
texts = {}
for link in links:
texts[link] = []
for page_link in links[link]:
r = requests.get(page_link)
time.sleep(1)
texts[link].append(r.content)
```
#### Saving part
```
import os
for main_link in main_links:
directory = main_link.split("/")[-1].split(".")[0]
if not os.path.exists(directory):
os.mkdir(directory)
for i, text in enumerate(texts[main_link]):
filename = os.path.join(directory, str(i)+".html")
with open(filename, "w") as f:
f.write(text.replace(b"s\x8d", b"i").decode("utf-8"))
```
#### Reading part
```
retrieved_texts = {}
for main_link in main_links:
directory = main_link.split("/")[-1].split(".")[0]
retrieved_texts[main_link] = []
for i, text in enumerate(texts[main_link]):
filename = os.path.join(directory, str(i)+".html")
with open(filename, "r") as f:
text = f.read()
tree = BeautifulSoup(text, "lxml")
retrieved_texts[main_link].append(tree)
print(retrieved_texts[main_links[0]][1].text)
```
We select the texts between the first `<h4>` and the first `<<<<` occurrences.
```
def extract_text(html_text):
lines = [i.text.replace("\xa0", "") for i in html_text.find("div", attrs={"class": "contentus"}).findAll("h3")]
return [line.split(" ") for line in lines]
print(repr(extract_text(retrieved_texts[main_links[0]][1])[0][0]))
```
#### Extracted text
```
import codecs
for main_link in main_links:
directory = "extracted_"+main_link.split("/")[-1].split(".")[0][:-3]
if not os.path.exists(directory):
os.mkdir(directory)
for i, text in enumerate(retrieved_texts[main_link]):
filename = os.path.join(directory, str(i)+".txt")
extracted_text = extract_text(text)
if len(extracted_text) > 0:
with codecs.open(filename, mode="w", encoding="utf-8") as f:
lines = ["\t".join(line) for line in extracted_text]
final_text = "\n".join(lines)
f.write(final_text)
```
# Store all from category Beruf
```
#store pages from category Beruf
from wikitools import wiki, api, category
import json
def seen_wiki_categories(lang,category_page,overloop=0,n=6):
global seen_categories_
global seen_pages_
where_go_categories_=[]
#print "We are here: "+ category_page
seen_categories_.append(category_page)
site = wiki.Wiki("http://"+lang+".wikipedia.org/w/api.php")
cat = category.Category(site, category_page) # Create object for "Category:xxxxx"
for article in cat.getAllMembersGen(titleonly=True,namespaces=[0]): # iterate through all the pages in ns 0
if article not in seen_pages_ :
seen_pages_.append(article)
else:
continue
for subcategory in cat.getAllMembersGen(titleonly=True,namespaces=[14]):
if subcategory not in seen_categories_ :
where_go_categories_.append(subcategory)
else:
continue
for c in where_go_categories_ :
if c not in seen_categories_:
if overloop<n:
seen_wiki_categories(lang,c,overloop=overloop+1,n=n)
else:
print "overloop ==="+c
pass
seen_categories_=[]
seen_pages_=[]
seen_wiki_categories("de","Kategorie:Beruf",overloop=0,n=3)
seen_pages_=list(set(seen_pages_))
print len(seen_pages_)
with open('de/wiki/berufe_wiki4.json', 'w') as f:
json.dump(seen_pages_, f, indent=4)
```
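The recursion above keeps global lists and scans them with `in`, which is O(n) per membership check; sets make the same depth-limited traversal simpler and faster. A minimal sketch over a toy category graph (the dict below is illustrative, not real Wikipedia data):

```python
def crawl_categories(graph, start, max_depth=3):
    """Collect article members of a category tree, depth-limited (sketch)."""
    seen_categories, seen_pages = {start}, set()

    def visit(cat, depth):
        for member in graph.get(cat, []):
            if member.startswith("Kategorie:"):
                # recurse into unseen subcategories, up to max_depth
                if member not in seen_categories and depth < max_depth:
                    seen_categories.add(member)
                    visit(member, depth + 1)
            else:
                seen_pages.add(member)

    visit(start, 0)
    return seen_pages

toy_graph = {
    "Kategorie:Beruf": ["Lehrer", "Kategorie:Handwerk"],
    "Kategorie:Handwerk": ["Schmied", "Kategorie:Beruf"],  # cycle back to the root
}
print(sorted(crawl_categories(toy_graph, "Kategorie:Beruf")))  # -> ['Lehrer', 'Schmied']
```

The `seen_categories` set makes the cycle in the toy graph harmless, which is exactly what the notebook's `seen_categories_` list is for.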
# Levenshtein distance
```
#compare seen_categories_ and beruf
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
words=load_simple_json('de/occupation_all.json')
wiki_pages=load_simple_json('de/wiki/berufe_wiki4.json')
print len(words)#4240#4239#4252
both=[]
for i in words:
if i in wiki_pages:
both.append(i)
if words[i][1] in wiki_pages:
both.append(i)
print "we have ",len(both),"wiki pages from category Beruf and our list of occupations"#640 and 648(+for feminine)#710#708#711
both2={}
for i in words:
for j in wiki_pages:
if i in j:
if i not in both2.keys():
both2[i]=[j]
else:
both2[i].append(j)
#both2[i]=[a]
#if words[i][1] in j:
# both2.append(i)
print len(both2)#800#799#802#801
m_no_page=load_simple_json('de/wiki/m_no_page.json')
#match no_page and wiki_berufe4
both3={}
for i in m_no_page:
for j in wiki_pages:
if i in j:
if i not in both3.keys():
both3[i]=[j]
else:
both3[i].append(j)
print "# of Masculine_no_pages in list of pages from category Beruf (substring inclusion):",len(both3)#17#17#12#16
#match wiki_berufe4 and no_page
both4={}
for i in m_no_page:
for j in wiki_pages:
if j in i:
if j not in both4.keys():
both4[j]=[i]
else:
both4[j].append(i)
print "# of Masculine_no_pages in list of pages from category Beruf (substring inclusion reversed):",len(both4)#206#209#204#209
#levenshtein and string inclusion
import Levenshtein
print "With Levenshtein distance<=2 or ratio>0.8"
m_page=load_simple_json('de/wiki/m_page.json')
page_m=m_page
both6={}
k=0
m=0
for i in m_no_page:
for j in wiki_pages:
if i in j:
if i not in both6.keys():
both6[i]=[j]
else:
both6[i].append(j)
elif Levenshtein.ratio(i, j)>0.8:
print "===>",i,"==",j
m+=1
if i not in both6.keys():
both6[i]=[j]
else:
both6[i].append(j)
elif Levenshtein.distance(i, j)<=2:
k+=1
print i,":",j
if i not in both6.keys():
both6[i]=[j]
else:
both6[i].append(j)
print k#30#30#30
print m#498#518#517
#lets check both3 values
page_m=load_simple_json('de/wiki/m_page.json')
page_f=load_simple_json('de/wiki/f_page.json')
m_links_to_feminine=load_simple_json('de/wiki/m_links_to_feminine.json')
m_links_to_smth=load_simple_json('de/wiki/m_links_to_smth.json')
print "======Masculine_no_pages in list of pages from category Beruf (substring inclusion):======",len(both3)
for i in both3:
for j in both3[i]:
if j in m_page:
print "!!!already in pages m",j
elif j in m_links_to_feminine:
print "!!!in m links to feminine",j
elif j in m_links_to_smth:
print "!!!in m links to smth",j
elif j in m_links_to_smth.values():
print "!!!in m links to smth_values",j
elif j in page_f:
print "!!! in feminine pages",j
elif i in both6.keys():
print "=== in levenstein values:",i," :",both6[i],both3[i]
else:
print "++Rest:",i," :",j
from collections import defaultdict
m_page=load_simple_json('de/wiki/m_page.json')
m_links_to_feminine=load_simple_json('de/wiki/m_links_to_feminine.json')
m_links_to_feminine_lev=load_simple_json('de/wiki/m_links_to_feminine_lev.json')
m_links_to_smth=load_simple_json('de/wiki/m_links_to_smth.json')
substr_from_both4=defaultdict(list)
for i in both4:
for j in both4[i]:
if j in m_page:
print "!!!already in pages",j
elif j in m_links_to_feminine:
print "!!!links to feminine",j
elif j in m_links_to_smth:
print "!!!links to smth",j
elif j in m_links_to_smth.values():
print "!!!links to smth_values",j
elif j in page_f:
print "!!!links to feminine",j
if i in m_page:
pass
elif i in m_links_to_smth.keys():
pass
elif i in m_links_to_feminine:
pass
elif i in m_links_to_feminine_lev.keys():
pass
elif i in m_links_to_smth.values():
print i
else:
substr_from_both4[i]=both4[i]
#print i
#else:
# print i
with open('de/wiki/berufe_substring_19.01.16.json', 'w') as f: #m_no_page:[wiki_pages_in_beruf_category_withsubstring_inclusion]
json.dump(substr_from_both4, f, indent=4)
def check_pages_in_list(both6):
new_both6={}
for i in both6:
if i not in new_both6.keys():
new_both6[i]=both6[i]
for j in both6[i]:
if j in page_m:
index=int(both6[i].index(j))
new_both6[i][index]=both6[i][index],1 #1=page exist
elif j in m_links_to_feminine:
index=int(both6[i].index(j))
#new_both6[i]=both6[i]
new_both6[i][index]=both6[i][index],2 #2=redirects to feminine
elif j in m_links_to_smth:
index=int(both6[i].index(j))
#new_both6[i]=both6[i]
new_both6[i][index]=both6[i][index],3 #3=redirects to smth
elif j in m_links_to_smth.values():
index=int(both6[i].index(j))
#new_both6[i]=both6[i]
new_both6[i][index]=both6[i][index],4 #4=redirects to smth value
else:
index=int(both6[i].index(j))
#new_both6[i]=both6[i]
new_both6[i][index]=both6[i][index],0 #no page
return new_both6
new_both6=check_pages_in_list(both6)
with open('de/wiki/berufe_levenshtein_.json', 'w') as f: #m_no_page:[wiki_pages_in_beruf_category]
json.dump(new_both6, f, indent=4)
with open('de/wiki/berufe_substring_.json', 'w') as f: #m_no_page:[wiki_pages_in_beruf_category_withsubstring_inclusion]
json.dump(both4, f, indent=4)
```
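The thresholds used above (`distance <= 2` or `ratio > 0.8`) come from the `python-Levenshtein` package. For reference, a plain-Python edit distance, with a simple length-normalized similarity as a stand-in for the package's `ratio` (the package computes its ratio with a slightly different formula):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # crude normalization; python-Levenshtein's ratio() is defined differently
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

print(levenshtein("Lehrer", "Lehrerin"))   # -> 2, within the distance threshold
```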
# Check all the values manually:
Read all values from the files berufe_levenshtein_ and berufe_substring_, then manually check whether those professions correspond to professions in the list all_occupations. Encoding of the manual assignment: 99 - the same profession as in our list, but with a different spelling (always masculine); 100 - neutral form of the profession, e.g. the name of a field ("Audiodesign"); 98 - a female profession corresponding to an existing male profession (we found no such cases). The data is then saved in the file berufe_levenshtein_3105.json
# Read all with marks 99 and 100, and add the 99s to the list of existing pages
Print all neutral forms and save them into the file n_page_levenshtein.json.
Save the newly assigned male profession names into the file m_page_levenshtein.json.
Create and save the corresponding female labels for the found professions into the file m_page_levenshtein_feminine_form.json. We will use the female labels to check whether they also exist on Wikipedia.
```
lev_corrected=load_simple_json('de/wiki/berufe_levenshtein_19.01.16.json')#old was ..._3105.json
new_page_m={}
new_page_n={}
for i in lev_corrected:
for j in lev_corrected[i]:
if j[1]==99:
new_page_m[i]=j[0]
elif j[1]==98:
print i,"==",j[0], "?????"
elif j[1]==100:
print i, "is =====>",j[0],"Neutral?"
new_page_n[i]=j[0]
print len(new_page_m) , len(new_page_n)
with open('de/wiki/m_page_levenshtein.json', 'w') as f:
json.dump(new_page_m, f, indent=4)
with open('de/wiki/m_neutral_lev.json', 'w') as f:
json.dump(new_page_n, f, indent=4)
```
# Retrieve values from berufe_substring_..._cor.json
```
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
berufe_substring=load_simple_json('de/wiki/berufe_substring_19.01.16_cor.json')
m_no_page=load_simple_json('de/wiki/m_no_page.json')
m_page_levenshtein=load_simple_json('de/wiki/m_page_levenshtein.json')
print "We had ",len(m_no_page),len(m_page_levenshtein)
for i in berufe_substring:
if 1 in berufe_substring[i]:
print i,berufe_substring[i][0]
m_page_levenshtein[berufe_substring[i][0]]=i
if berufe_substring[i][0] in m_no_page:
m_no_page.remove(berufe_substring[i][0])
print "We have now ",len(m_no_page),len(m_page_levenshtein)
with open('de/wiki/m_no_page.json', 'w') as f:
json.dump(m_no_page, f, indent=4)
with open('de/wiki/m_page_levenshtein.json', 'w') as f:
json.dump(m_page_levenshtein, f, indent=4)
```
# Build feminine form of words
```
new_page_m=load_simple_json('de/wiki/m_page_levenshtein.json')
new_page_m_feminine_form={}
for i in new_page_m:
if i=="Syndikus-Anwalt":
new_page_m_feminine_form[i]="Syndika"
elif i=="Wekschutzleiter":
new_page_m_feminine_form[i]="Wekschutzleiterin"
#Werkschutzin?????
elif len(new_page_m[i].split(" "))==1:
if new_page_m[i][-4:]!="mann":
new_page_m_feminine_form[i]=new_page_m[i]+"in"
else:
new_page_m_feminine_form[i]=new_page_m[i].replace("mann","frau")
else:
if ("Verwaltungsbetriebswirt"not in new_page_m[i])&("Fachberater"not in new_page_m[i]):
new_page_m_feminine_form[i]=new_page_m[i].replace("er ","e ")+"in"
else:
m=new_page_m[i].split(" ")
new_page_m_feminine_form[i]=new_page_m[i].replace(m[0],m[0]+"in ")
new_page_m_feminine_form #check this feminine pages in wiki?
with open('de/wiki/m_page_levenshtein_feminine_form.json', 'w') as f:
json.dump(new_page_m_feminine_form, f, indent=4)
```
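The derivation rule above reduces to: `-mann` compounds take `-frau`, everything else appends `-in` (plus an `-er `/`-e ` adjustment for multi-word titles and two hand-coded exceptions). A minimal sketch of the single-word case; these are the notebook's heuristics, not a complete German morphology:

```python
def feminize(noun):
    # heuristic from the notebook: Kaufmann -> Kauffrau, Lehrer -> Lehrerin
    if noun.endswith("mann"):
        return noun[:-4] + "frau"
    return noun + "in"

print(feminize("Kaufmann"), feminize("Lehrer"))  # -> Kauffrau Lehrerin
```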
## Delete existing Levenshtein-matched pages from m_no_page
```
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
m_page_levenshtein=load_simple_json('de/wiki/m_page_levenshtein.json')
m_no_page=load_simple_json('de/wiki/m_no_page.json')
#n_page_levenshtein=load_simple_json('de/wiki/n_page_levenshtein.json')
print len(m_no_page)
for i in m_page_levenshtein:
if i in m_no_page:
m_no_page.remove(i)
print len(m_no_page)
with open('de/wiki/m_no_page.json', 'w') as f:
json.dump(m_no_page, f, indent=4)
```
# Check if a page for the feminine form exists
Save existing redirections into f_links_to_mascuilne_lev.json
```
from bs4 import BeautifulSoup
from wikitools import wiki, api
import json
import re
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
def check_revision(revisions):
redir=""
page=False
for j in revisions:
redir_marker=["#WEITERLEITUNG","#REDIRECT","#redirect","#weiterleitung","#Weiterleitung","#Redirect"]
if ("#WEITERLEITUNG" in j["*"])|("#REDIRECT" in j["*"])|("#redirect" in j["*"])|("#weiterleitung" in j["*"])\
|("#Redirect" in j["*"])|("#Weiterleitung" in j["*"]):
m = re.search(r"\[\[(.+)\]\]", j["*"])
s = m.group(0)[2:].split('#', 1)[0].replace("]]",'')
redir=s
else:
page=True
return redir,page
new_page_m_feminine_form=load_simple_json('de/wiki/m_page_levenshtein_feminine_form.json')
redirection_f_new={}
page_f_new={}
no_page_f_new=[]
for i in new_page_m_feminine_form:
beruf_f=new_page_m_feminine_form[i]
l="de"
site = wiki.Wiki("http://"+l+".wikipedia.org/w/api.php")
#last revision
params={'action':'query','titles':beruf_f,'prop':'revisions','rvprop':'timestamp|content','format':'json','continue':''}
request = api.APIRequest(site, params)
result=request.query()
if result["query"]["pages"].values()[0].get("revisions"):
redir,page=check_revision(result["query"]["pages"].values()[0]["revisions"])
if redir:
redirection_f_new[i]=redir
if page:
page_f_new[i]=beruf_f
else:
no_page_f_new.append(beruf_f)
print page_f_new
print redirection_f_new
#check redirections
new_page_m=load_simple_json('de/wiki/m_page_levenshtein.json')
words=load_simple_json('de/occupation_all.json')
f_links_to_mascuilne_lev={}
for i in redirection_f_new:
if redirection_f_new[i]==new_page_m[i]:
f_links_to_mascuilne_lev[words[i][1]]={new_page_m_feminine_form[i]:redirection_f_new[i]}
else:
print i
with open('de/wiki/f_links_to_mascuilne_lev.json', 'w') as f:
json.dump(f_links_to_mascuilne_lev, f, indent=4)
```
No feminine pages exist. All the redirections were to the masculine form.
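The redirect detection in `check_revision` above boils down to: look for a redirect marker, then pull the target out of the first `[[...]]` wikilink, dropping any `#section` anchor. A condensed sketch (the sample wikitext is illustrative):

```python
import re

REDIRECT_MARKERS = ("#WEITERLEITUNG", "#REDIRECT")

def redirect_target(wikitext):
    """Return the redirect target page, or None if the text is not a redirect."""
    # upper() covers the mixed-case variants the notebook checks one by one
    if not any(marker in wikitext.upper() for marker in REDIRECT_MARKERS):
        return None
    match = re.search(r"\[\[(.+?)\]\]", wikitext)
    if match is None:
        return None
    return match.group(1).split("#", 1)[0]

print(redirect_target("#WEITERLEITUNG [[Lehrerin#Geschichte]]"))  # -> Lehrerin
```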
## Delete from f_no_page all values from f_links_to_mascuilne_lev
```
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
f_links_to_mascuilne_lev=load_simple_json('de/wiki/f_links_to_mascuilne_lev.json')
f_no_page=load_simple_json('de/wiki/f_no_page.json')
#n_page_levenshtein=load_simple_json('de/wiki/n_page_levenshtein.json')
print len(f_no_page)
for i in f_links_to_mascuilne_lev:
if i in f_no_page:
f_no_page.remove(i)
print len(f_no_page)
with open('de/wiki/f_no_page.json', 'w') as f:
json.dump(f_no_page, f, indent=4)
```
## Delete from m_no_page and f_no_page where corresponding neutral pages exist
```
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
m_no_page=load_simple_json('de/wiki/m_no_page.json')
f_no_page=load_simple_json('de/wiki/f_no_page.json')
n_page_levenshtein=load_simple_json('de/wiki/n_page_levenshtein.json')
words=load_simple_json('de/occupation_all.json')#{masculine:[[],[feminine]]}
print "We had:",len(m_no_page)
print "We had:", len(f_no_page)
for p in n_page_levenshtein:
if p in m_no_page:
m_no_page.remove(p)
else:
print p
if words[p][1] in f_no_page:
f_no_page.remove(words[p][1])
else:
print p
print "Now:",len(m_no_page)
print "Now:",len(f_no_page)
with open('de/wiki/m_no_page.json', 'w') as f:
json.dump(m_no_page, f, indent=4)
with open('de/wiki/f_no_page.json', 'w') as f:
json.dump(f_no_page, f, indent=4)
#TODO check the next and add f_links_to_mascuilne_with_link to the f_redirection value. DONE
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
m_links_to_feminine_lev={}
with open('de/wiki/m_links_to_feminine_lev.json', 'w') as f:
json.dump(m_links_to_feminine_lev, f, indent=4)
words=load_simple_json('de/occupation_all.json')#{masculine:[[],[feminine]]}
words_revert={words[i][1]:i for i in words}#feminine:masculine
m_links_to_feminine=load_simple_json('de/wiki/m_links_to_feminine.json')
m_links_to_feminine_lev=load_simple_json('de/wiki/m_links_to_feminine_lev.json')
m_links_to_smth=load_simple_json('de/wiki/m_links_to_smth.json')
f_links_to_mascuilne=load_simple_json('de/wiki/f_links_to_mascuilne.json')
f_links_to_smth=load_simple_json('de/wiki/f_links_to_smth.json')
#redirection_n=load_simple_json('de/wiki/n_redirects.json')
f_links_to_mascuilne_lev=load_simple_json('de/wiki/f_links_to_mascuilne_lev.json')
f_links_to_mascuilne_with_link=load_simple_json('de/wiki/f_links_to_mascuilne_with_link.json')
def merge_dicts(*dict_args):
'''
Given any number of dicts, shallow copy and merge into a new dict,
precedence goes to key value pairs in latter dicts.
'''
result = {}
for dictionary in dict_args:
result.update(dictionary)
return result
m_red={}
for m in m_links_to_feminine:
m_red[m]=words[m][1]
f_red={}
for f in f_links_to_mascuilne:
f_red[f]=words_revert[f]
m_redirection = merge_dicts(m_red, m_links_to_feminine_lev, m_links_to_smth)
f_redirection = merge_dicts(f_red,f_links_to_smth,f_links_to_mascuilne_lev,f_links_to_mascuilne_with_link)
with open('de/wiki/m_redirection.json', 'w') as f:
json.dump(m_redirection, f, indent=4)
with open('de/wiki/f_redirection.json', 'w') as f:
json.dump(f_redirection, f, indent=4)
```
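On Python 3.5+, the `merge_dicts` helper above is a one-liner via dict unpacking, with the same latter-wins precedence (this notebook itself runs on Python 2, where `dict(a, **b)` does the same for string keys):

```python
a = {"x": 1, "y": 2}
b = {"y": 20, "z": 30}

merged = {**a, **b}   # later dicts win on key collisions
print(merged)         # -> {'x': 1, 'y': 20, 'z': 30}
```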
# Table
```
from prettytable import PrettyTable
import json
def load_simple_json(filename):
with open(filename, 'r') as f:
return json.load(f)
words=load_simple_json('de/occupation_all.json')#{masculine:[[],[feminine]]}
words_revert={words[i][1]:i for i in words}#feminine:masculine
neutral=load_simple_json('de/neutral_cleaned.json')
no_page_m=load_simple_json('de/wiki/m_no_page.json')
no_page_f=load_simple_json('de/wiki/f_no_page.json')
no_page_n=load_simple_json('de/wiki/n_no_page.json')
page_m=load_simple_json('de/wiki/m_page.json')
page_f=load_simple_json('de/wiki/f_page.json')
page_n=load_simple_json('de/wiki/n_page.json')
m_page_levenshtein=load_simple_json('de/wiki/m_page_levenshtein.json')
#n_page_levenshtein=load_simple_json('de/wiki/n_page_levenshtein.json')
page_f_validated=load_simple_json('de/wiki/f_page_validated.json')
page_f_ambigious=load_simple_json('de/wiki/f_page_ambigious.json')
page_f_other=load_simple_json('de/wiki/f_page_other.json')
page_m_validated=load_simple_json('de/wiki/m_page_validated.json')
page_m_ambigious=load_simple_json('de/wiki/m_page_ambigious.json')
page_m_other=load_simple_json('de/wiki/m_page_other.json')
page_n_validated=load_simple_json('de/wiki/n_page_validated.json')
page_n_ambigious=load_simple_json('de/wiki/n_page_ambigious.json')
page_n_other=load_simple_json('de/wiki/n_page_other.json')
m_links_to_feminine=load_simple_json('de/wiki/m_links_to_feminine.json')
m_links_to_feminine_lev=load_simple_json('de/wiki/m_links_to_feminine_lev.json')
m_links_to_smth=load_simple_json('de/wiki/m_links_to_smth.json')
f_links_to_mascuilne=load_simple_json('de/wiki/f_links_to_mascuilne.json')
f_links_to_smth=load_simple_json('de/wiki/f_links_to_smth.json')
redirection_n=load_simple_json('de/wiki/n_redirects.json')
n_links_to_masculine=load_simple_json('de/wiki/n_links_to_masculine.json')
n_links_to_smth=load_simple_json('de/wiki/n_links_to_smth.json')
f_links_to_mascuilne_lev=load_simple_json('de/wiki/f_links_to_mascuilne_lev.json')
f_links_to_mascuilne_with_link=load_simple_json('de/wiki/f_links_to_mascuilne_with_link.json')
f_redirection=load_simple_json('de/wiki/f_redirection.json')
m_redirection=load_simple_json('de/wiki/m_redirection.json')
t = PrettyTable(['-de-',"Occup.", 'Wiki pages', "Validated pages","Ambiguous pages","Redirects",
"Redir. to opposite gender"])
t.add_row(['Masculine',len(words), len(page_m)+len(m_page_levenshtein), len(page_m_validated)+len(m_page_levenshtein),
len(page_m_ambigious),
len(m_redirection), len(m_links_to_feminine)])
t.add_row(['Feminine',len(words_revert), len(page_f), len(page_f_validated), len(page_f_ambigious),
len(f_redirection),
len(f_links_to_mascuilne)+len(f_links_to_mascuilne_lev)])
t.add_row(['Neutral',len(neutral), len(page_n), len(page_n_validated), len(page_n_ambigious),
len(redirection_n), "to M:"+str(len(n_links_to_masculine))+", to F:0"])
print t
t2 = PrettyTable(['-de-',"Occup.", 'Wiki pages', "Redirects", "No page"])
t2.add_row(['Masculine',len(words), len(page_m)+len(m_page_levenshtein), len(m_redirection),
len(no_page_m)])
t2.add_row(['Feminine',len(words_revert), len(page_f),
len(f_redirection),
len(no_page_f)])
t2.add_row(['Neutral',len(neutral), len(page_n),len(redirection_n),len(no_page_n)])
print t2
print
t3 = PrettyTable(['-de-','Wiki pages', "Validated pages","Ambiguous pages","Not validated pages"])
t3.add_row(['Masculine', len(page_m)+len(m_page_levenshtein),
len(page_m_validated)+len(m_page_levenshtein),
len(page_m_ambigious),len(page_m_other)])
t3.add_row(['Feminine', len(page_f), len(page_f_validated), len(page_f_ambigious),len(page_f_other)])
t3.add_row(['Neutral', len(page_n), len(page_n_validated), len(page_n_ambigious),len(page_n_other)])
print t3
print
t4 = PrettyTable(['--de--',"Redirects","Redirects to opposite gender", "Other redirects"])
t4.add_row(['Masculine',len(m_redirection),
len(m_links_to_feminine),len(m_links_to_smth)])
t4.add_row(['Feminine',len(f_redirection),
len(f_links_to_mascuilne)+len(f_links_to_mascuilne_lev)+len(f_links_to_mascuilne_with_link),len(f_links_to_smth)])
t4.add_row(['Neutral',len(redirection_n),"to M:"+str(len(n_links_to_masculine))+", to F:0",len(n_links_to_smth)])
print t4
```
# Look into other_redirects:
```
#Check whether the male label is a synonym or a field name
fields=load_simple_json('de/field.json')
f_links_to_field={}
f_links_to_realy_smth_else={}
f_links_to_masculine_that_already_counted={}
k=0
for i in f_links_to_smth:
if f_links_to_smth[i] in fields:
f_links_to_field[i]=f_links_to_smth[i]
elif words.has_key(f_links_to_smth[i]):
if f_links_to_smth[i] in page_m:
#print i," links to profesion that we already count:", f_links_to_smth[i]
f_links_to_masculine_that_already_counted[i]=f_links_to_smth[i]
else:
k+=1
f_links_to_realy_smth_else[i]=f_links_to_smth[i]
print "# of female prof. labels redirected to already counted male prof.=",len(f_links_to_masculine_that_already_counted)
print "# of female prof. labels that links to smth else",k
print "TODO: Work manually on the list:"
f_links_to_realy_smth_else
```
# Sparkify: Customers Churn prediction

## Table of Contents
1. [Overview](#overview)
2. [Load and Clean Dataset](#load_clean)
1. [Checking for missing values](#check)
2. [Cleaning the data](#clean)
3. [Exploratory Data Analysis](./02_data_exploration.ipynb)
1. [Descriptive statistics](#stat)
2. [Defining churn indicator](#churn)
2. [Data Exploration](#explore)
4. [Feature Engineering](./03_feature_engineering.ipynb)
5. [Model training and evaluation](./04_model_training_evaluation.ipynb)
6. [Conclusion](./05_conclusion.ipynb)
## 1. Overview<a id='overview'></a>
This notebook was created for the Capstone project of the `Data Science Nanodegree` program by Udacity.
Sparkify is a music streaming service like Spotify and Pandora. Users can access the service through either the `Premium` or the `Free Tier` plan. The premium plan, with a monthly fee, enables use of the service without advertisements between songs.
The data contains user activity logs from the service: visited pages, service upgrade or downgrade events, event timestamps, demographic info, and so on.
```
! pip install -r requirements.txt
from pyspark.sql import SparkSession
from pyspark.sql.functions import isnan, when, count, col, udf
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, DoubleType, LongType, IntegerType, DateType, TimestampType
from pyspark.ml.classification import LogisticRegression, DecisionTreeClassifier
from pyspark.ml.classification import GBTClassifier, RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import pandas as pd
spark = SparkSession \
.builder \
.appName("Sparkify") \
.getOrCreate()
events_data_path = "../data/mini_sparkify_event_data.json"
%store events_data_path
import warnings
warnings.filterwarnings('ignore')
%load_ext autoreload
%autoreload 2
import utils
# Make the output more like pandas and less like command-line SQL
spark.sparkContext.getConf().getAll()
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
spark
```
# <center> 2. Load and Clean Dataset <a id='load_clean'></a> </center>
1. Load the data.
2. Check for invalid or missing values e.g. records without userids or sessionids.
3. Clean the data
```
# Load the Sparkify Events data from a JSON file into a Spark DataFrame
events_df = spark.read.json(events_data_path)
events_df.printSchema()
nb_rows = events_df.count()
print("Number of rows: ", nb_rows)
print("Number of columns: ", len(events_df.columns))
```
# Checking for missing data<a id='check'></a>
```
dd = events_df.select([count(when(isnan(c), c)).alias(c) for c in events_df.columns]).toPandas().T
dd[dd[0] > 0]
```
The data set has no NaNs. In PySpark, NaN is not the same as Null, and both are different from an empty string "". So we also check for Null values.
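The distinction mirrors plain Python, where `float('nan')`, `None`, and the empty string are three different values; PySpark's `isnan` catches only the first and `isNull` only the second:

```python
import math

nan, null, empty = float("nan"), None, ""

print(math.isnan(nan))     # -> True : isnan catches only NaN
print(null is None)        # -> True : isNull catches only None/null
print(empty == "")         # -> True : "" is neither NaN nor null
print(nan != nan)          # -> True : NaN is not even equal to itself
```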
```
null_counts = events_df.select([
count(when(col(column_name).isNull(), column_name)).alias(column_name)
for column_name in events_df.columns
]).toPandas().T
null_counts
```
It appears that some users have missing registration, firstName, and lastName fields. This might be related to unregistered users.
### Checking rows with Null value in all columns
```
events_df.na.drop(how="all").count()
```
It appears that there are no rows with Null values in all columns
### Check empty user IDs and sessionID
Regardless of the reason (logged out, unregistered...), empty userIds and sessionIds are useless for the purpose of detecting potentially churning users, so we can drop the corresponding records.
```
events_df.filter((
events_df["userId"] == "") | events_df["userId"].isNull() | isnan(events_df["userId"])).count()
events_df.filter((
events_df["sessionId"] == "") | events_df["sessionId"].isNull() | isnan(events_df["sessionId"])).count()
events_df = events_df.dropna(how = 'any', subset = ['userId', 'sessionId'])
events_df.count()
```
Checking for empty sessionId values shows that there are no missing values in the sessionId column.
### Number of distinct users
```
events_df = events_df.filter(events_df.userId != '')
events_df.select('userId').distinct().count()
```
### Check for Duplicated events logs
Comparing events_df.count() to events_df.distinct().count() shows that there are no duplicates
```
events_df.distinct().count()
```
### Let's re-check some of the empty fields after the cleaning
e.g. registration, gender, ...
```
null_counts = events_df.select([count(when(col(column_name).isNull(), column_name)).alias(column_name) for column_name in
events_df.columns]).toPandas().T
null_counts.sort_values(by=0, axis=0, ascending=False)
```
<div class="alert alert-block alert-warning">
It appears that the rows with missing values were related to the log events with no userId
</div>
```
pd.DataFrame(events_df.take(3), columns=events_df.columns).head()
events_df.count()
```
# Concept Activation Vectors (CAVs) example on CIFAR data
### Peter Xenopoulos
In this workbook, we will go over how to use concept activation vectors (CAVs) on some popular image data -- the CIFAR datasets. These datasets are available through the `keras` package.
For our first experiment, we will build a classifier that distinguishes between "ship" and "bird" images from CIFAR-10. We start by calling the necessary libraries and data.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.datasets import cifar100, cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import sys
import os
sys.path.insert(0, os.path.abspath('../..'))
from cav.tcav import *
np.random.seed(1996)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# Keep ships (8) from CIFAR-10
interested_class = y_train == [8]
interested_class_indx = [i for i, x in enumerate(interested_class) if x]
x_train_class_one = x_train[interested_class_indx]
other = y_train == [2]
other_indx = [i for i, x in enumerate(other) if x]
x_train_class_two = x_train[other_indx]
x_train = np.append(x_train_class_one, x_train_class_two, axis = 0)
y_train = [1] * 5000
y_train = y_train + [0] * 5000
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
```
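The enumerate-based index building above can be expressed directly with NumPy boolean masks, one line per class (a sketch with tiny arrays; the same idea applies to the CIFAR arrays):

```python
import numpy as np

y = np.array([[8], [2], [8], [0], [2]])   # CIFAR-style labels, shape (n, 1)
x = np.arange(5)                          # stand-in for the image array

ships = x[y.ravel() == 8]                 # rows whose label is 8
birds = x[y.ravel() == 2]
print(ships, birds)                       # -> [0 2] [1 4]
```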
We can see two example images below. This is a fairly easy classification problem.
```
f, axarr = plt.subplots(1,2)
axarr[0].imshow(x_train[0])
axarr[1].imshow(x_train[7777])
```
Now, let's say we are interested in how sensitive each class is to the concept of the "ocean". Clearly, the ship class, which we designate as 1, will likely be more sensitive.
```
(x_train_concept, y_train_concept), (x_test_concept, y_test_concept) = cifar100.load_data()
# keep sea (71) from CIFAR-100
concept = y_train_concept == [71]
indices = concept
indx_to_use = [i for i, x in enumerate(indices) if x]
x_train_concept = x_train_concept[indx_to_use]
```
Finally, we train and summarize our model. This is the standard classifier that the Keras team provides for CIFAR-10.
```
batch_size = 32
epochs = 5
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# initiate optimizer
opt = keras.optimizers.Adam(lr=0.001)
# train the model
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, shuffle=True)
model.summary()
```
Now, we instantiate the `TCAV` object. The first thing one should do when using `TCAV` is assign it a model. There are two ways to do this: (1) use the `.set_model(model)` method OR (2) create a `TCAV` object using `TCAV(model = model)`. Please note that `model` should be a Keras sequential model. We provide an example below.
```
tcav_obj = TCAV()
tcav_obj.set_model(model)
```
Next, we must specify the "bottleneck" layer, and whether that layer is a convolutional layer. One can split the model through the `.split_model(bottleneck, conv_layer)` method. Please note that you should split on the last activation/pooling/dropout layer, so we can split on layers 1, 5, 7 and 11 (counting from 0).
Once the model has been split on the specified bottleneck layer, one must train the concept activation vector. This is done through the `.train_cav(x_concept)` method, which is passed the concept training data; in our case, this is the NumPy array of sea images from CIFAR-100. The TCAV object takes care of creating counterexamples.
Next, we calculate the sensitivities for our training data through the `.calculate_sensitivity(x_train, y_labels)` method. This method saves the sensitivities for each training object in the `Object.sensitivity` attribute.
Finally, to print the sensitivities, simply use the `.print_sensitivity()` method.
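Conceptually, the CAV is the normal vector of a linear boundary separating concept activations from random counterexamples, and a sensitivity is the directional derivative of the class logit along that vector. A NumPy-only sketch using the difference of class means as a stand-in for a trained linear classifier's weight vector (the actual `TCAV` class trains a classifier; this shows only the geometry, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)

# bottleneck activations (flattened) for concept images and random counterexamples
concept_acts = rng.normal(1.0, 0.5, size=(50, 8))
random_acts = rng.normal(0.0, 0.5, size=(50, 8))

# simple CAV variant: unit vector pointing from the random mean to the concept mean
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav = cav / np.linalg.norm(cav)

# per-example sensitivity = gradient of its logit w.r.t. the bottleneck, dotted
# with the CAV; the TCAV score is the fraction with positive sensitivity
grads = rng.normal(size=(100, 8))          # stand-in gradients
tcav_score = float((grads @ cav > 0).mean())
```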
```
tcav_obj.split_model(bottleneck = 1, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 5, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 7, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 11, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
```
Above, we clearly see that the sea is a strong concept for class 1, which is the ship. Interestingly, the last layer gives a strong sensitivity to class 0, birds.
Next, we turn to another example, this time using airplanes and birds. Our concept will be "clouds".
```
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# Keep airplanes (class 0) and birds (class 2) from CIFAR-10
interested_class = y_train == [0]
interested_class_indx = [i for i, x in enumerate(interested_class) if x]
x_train_class_one = x_train[interested_class_indx]
other = y_train == [2]
other_indx = [i for i, x in enumerate(other) if x]
x_train_class_two = x_train[other_indx]
x_train = np.append(x_train_class_one, x_train_class_two, axis = 0)
y_train = [1] * 5000
y_train = y_train + [0] * 5000
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
f, axarr = plt.subplots(1,2)
axarr[0].imshow(x_train[0])
axarr[1].imshow(x_train[7777])
(x_train_concept, y_train_concept), (x_test_concept, y_test_concept) = cifar100.load_data()
# keep cloud (23) from CIFAR-100
concept = y_train_concept == [23]
indices = concept
indx_to_use = [i for i, x in enumerate(indices) if x]
x_train_concept = x_train_concept[indx_to_use]
# Set parameters
batch_size = 32
epochs = 5
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# initiate optimizer
opt = keras.optimizers.Adam(lr=0.001)
# train the model
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, shuffle=True)
tcav_obj = TCAV()
tcav_obj.set_model(model)
tcav_obj.split_model(bottleneck = 1, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 5, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 7, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
tcav_obj.split_model(bottleneck = 11, conv_layer = True)
tcav_obj.train_cav(x_train_concept)
tcav_obj.calculate_sensitivity(x_train, y_train)
tcav_obj.print_sensitivity()
```
We can see that airplanes do have a high sensitivity to the "cloud" concept, as do birds in the early layers.
```
import keras
from keras.models import load_model
import tensorflow as tf
from keras import backend as K
import sqmutils.data_utils as du
import os
import time
import pandas as pd
import numpy as np
import csv
import json
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
# Configs
```
model_dir = "models"
dataset_dir = "dataset"
model_weights = os.path.join(model_dir, "best_val_f1_model.h5")
# you can download test data from here:
# https://www.kaggle.com/c/quora-question-pairs/download/test.csv
test_dataset_path = "/home/elkhand/Downloads/test.csv"
cleaned_test_dataset_path = os.path.join(dataset_dir, "cleaned_test.csv")
test_probabilities_csv = os.path.join(dataset_dir, "test_probabilities.csv")
embedding_path = "/home/elkhand/datasets/fasttext/wiki.en.vec"
emb_dim = 300
config = du.get_config(None, None, None, embedding_dimension=emb_dim)
custom_objects= {"f1": du.f1, "recall" : du.recall, "precision" : du.precision}
```
# Reading test data
```
# Cleaning from duplicates and storing to file.
start = time.time()
dfTest = pd.read_csv(test_dataset_path, sep=',', encoding='utf-8')
end = time.time()
print("Total time passed", (end - start))
print("Total test examples", len(dfTest))
```
## Remove duplicates
```
start = time.time()
valid_ids = [isinstance(x, (int, np.integer)) for x in dfTest.test_id]  # robust to NumPy integer types
dfTest = dfTest[valid_ids].drop_duplicates()
dfTest = dfTest.replace(np.nan, '', regex=True)
dfTest = dfTest.fillna('')
dfTest.to_csv(cleaned_test_dataset_path, sep=',', encoding='utf-8', index=False)
end = time.time()
print("Total time passed", (end - start))
print("Total test examples", len(dfTest))
dfTest[:10]
```
# Load embeddings
We will be using the 300-dimensional FastText Wiki word vectors.
```
print("word vectors path", embedding_path)
start = time.time()
w2v = du.load_embedding(embedding_path)
end = time.time()
print("Total time passed: ", (end-start))
```
# Load pre-trained model
```
model = load_model(model_weights, custom_objects = custom_objects)
```
# Predict Test dataset probabilities
```
def write_to_csv_with_test_id(csv_file, results, testId_list):
print("testId_list", len(testId_list),"start: ", testId_list[0],"end: ", testId_list[-1], "len(results)", len(results))
if len(testId_list) != len(results):
print("\n ERROR!!!! \n")
index = 0
for test_id in testId_list:
line = str(test_id) + "," + str(round(results[index][0],1)) + "\n"
csv_file.write(line)
index += 1
start = time.time()
with open(test_probabilities_csv, "w") as csv_file:
#Write header
line = "test_id,is_duplicate" + "\n"
csv_file.write(line)
step_size = 20000
ranges = [i for i in range(step_size, len(dfTest) + step_size, step_size)]
start_index = 0
nanCount = 1
# Batch prediction
for to_index in ranges:
predict_start = time.time()
test_ids = list(dfTest[start_index:to_index]['test_id'])
df_test_q1_emb, df_test_q2_emb = du.load_dataset(dfTest[start_index:to_index], w2v, config, isTestDataset=True)
results = model.predict([df_test_q1_emb, df_test_q2_emb], verbose=0)
predict_end = time.time()
print("start_index",start_index,"to_index",to_index,"len(result)",len(results),"Pred time: ", (predict_end - predict_start))
write_to_csv_with_test_id(csv_file, results, test_ids)
start_index = to_index
end = time.time()
print("Total time passed", (end - start))
```
# `plot_correlation()`: analyze correlations
## Overview
The function `plot_correlation()` explores the correlation between columns in various ways, using multiple correlation metrics. The following describes the functionality of `plot_correlation()` for a given dataframe `df`.
1. `plot_correlation(df)`: plots correlation matrices (correlations between all pairs of columns)
2. `plot_correlation(df, col1)`: plots the most correlated columns to column `col1`
3. `plot_correlation(df, col1, col2)`: plots the joint distribution of column `col1` and column `col2` and computes a regression line
The following table summarizes the output plots for different settings of `col1` and `col2`.
| `col1` | `col2` | Output |
| --- | --- | --- |
| None | None | *n*\**n* correlation matrix, computed with [Pearson](https://www.wikiwand.com/en/Pearson_correlation_coefficient), [Spearman](https://www.wikiwand.com/en/Spearman%27s_rank_correlation_coefficient), and [KendallTau](https://www.wikiwand.com/en/Kendall_rank_correlation_coefficient) correlation coefficients |
| Numerical | None | *n*\*1 correlation matrix, computed with Pearson, Spearman, and KendallTau correlation coefficients |
| Categorical | None | TODO |
| Numerical | Numerical | [scatter plot](https://www.wikiwand.com/en/Scatter_plot) with a regression line |
| Numerical | Categorical | TODO |
| Categorical | Numerical | TODO |
| Categorical | Categorical | TODO |
Next, we demonstrate the functionality of `plot_correlation()`.
## Load the dataset
`dataprep.eda` supports **Pandas** and **Dask** dataframes. Here, we will load the well-known [wine quality dataset](https://archive.ics.uci.edu/ml/datasets/wine+quality) into a Pandas dataframe.
```
from dataprep.datasets import load_dataset
df = load_dataset("wine-quality-red")
```
## Get an overview of the correlations with `plot_correlation(df)`
We start by calling `plot_correlation(df)` to compute the statistics and correlation matrices using Pearson, Spearman, and KendallTau correlation coefficients. The Stats tab lists four statistics for each of the three correlation coefficients. The other three tabs show the lower triangular correlation matrices, where each cell represents the correlation value between two columns. There is an "insight" tab (!) in the upper right-hand corner of each matrix, which shows some insight information. The following shows an example:
```
from dataprep.eda import plot_correlation
plot_correlation(df)
```
## Find the columns that are most correlated to column `col1` with `plot_correlation(df, col1)`
After computing the correlation matrices, we can discover how other columns correlate to a specific column `x` using `plot_correlation(df, x)`. This function computes the correlation between column `x` and all other columns (using Pearson, Spearman, and KendallTau correlation coefficients), and sorts them in decreasing order. This enables easy determination of the columns that are most positively and negatively correlated with column `x`. The following shows an example:
```
plot_correlation(df, "alcohol")
```
## Explore the correlation between two columns with `plot_correlation(df, col1, col2)`
Furthermore, `plot_correlation(df, col1, col2)` provides detailed analysis of the correlation between two columns `col1` and `col2`. It plots the joint distribution of the columns `col1` and `col2` as a scatter plot, as well as a regression line. The following shows an example:
```
plot_correlation(df, "alcohol", "pH")
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Upload a Fairness Dashboard to Azure Machine Learning Studio
**This notebook shows how to generate and upload a fairness assessment dashboard from Fairlearn to AzureML Studio**
## Table of Contents
1. [Introduction](#Introduction)
1. [Loading the Data](#LoadingData)
1. [Processing the Data](#ProcessingData)
1. [Training Models](#TrainingModels)
1. [Logging in to AzureML](#LoginAzureML)
1. [Registering the Models](#RegisterModels)
1. [Using the Fairlearn Dashboard](#LocalDashboard)
1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)
1. Computing Fairness Metrics
1. Uploading to Azure
1. [Conclusion](#Conclusion)
<a id="Introduction"></a>
## Introduction
In this notebook, we walk through a simple example of using the `azureml-contrib-fairness` package to upload a collection of fairness statistics for a fairness dashboard. It is an example of integrating the [open source Fairlearn package](https://www.github.com/fairlearn/fairlearn) with Azure Machine Learning. This is not an example of fairness analysis or mitigation - this notebook simply shows how to get a fairness dashboard into the Azure Machine Learning portal. We will load the data and train a couple of simple models. We will then use Fairlearn to generate data for a Fairness dashboard, which we can upload to Azure Machine Learning portal and view there.
### Setup
To use this notebook, an Azure Machine Learning workspace is required.
Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.
This notebook also requires the following packages:
* `azureml-contrib-fairness`
* `fairlearn==0.4.6`
* `joblib`
* `shap`
Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
```
# !pip install --upgrade scikit-learn>=0.22.1
```
<a id="LoadingData"></a>
## Loading the Data
We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
```
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
import shap
```
Now we can load the data:
```
X_raw, Y = shap.datasets.adult()
```
We can take a look at some of the data. For example, the next cells shows the counts of the different races identified in the dataset:
```
print(X_raw["Race"].value_counts().to_dict())
```
<a id="ProcessingData"></a>
## Processing the Data
With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and put the rest of the feature data into `X`:
```
A = X_raw[['Sex','Race']]
X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)
X = pd.get_dummies(X)
```
Next, we apply a standard set of scalings:
```
sc = StandardScaler()
X_scaled = sc.fit_transform(X)
X_scaled = pd.DataFrame(X_scaled, columns=X.columns)
le = LabelEncoder()
Y = le.fit_transform(Y)
```
Finally, we can then split our data into training and test sets, and also make the labels on our test portion of `A` human-readable:
```
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled,
Y,
A,
test_size = 0.2,
random_state=0,
stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
# Improve labels
A_test.loc[A_test['Sex'] == 0, 'Sex'] = 'female'
A_test.loc[A_test['Sex'] == 1, 'Sex'] = 'male'
A_test.loc[A_test['Race'] == 0, 'Race'] = 'Amer-Indian-Eskimo'
A_test.loc[A_test['Race'] == 1, 'Race'] = 'Asian-Pac-Islander'
A_test.loc[A_test['Race'] == 2, 'Race'] = 'Black'
A_test.loc[A_test['Race'] == 3, 'Race'] = 'Other'
A_test.loc[A_test['Race'] == 4, 'Race'] = 'White'
```
<a id="TrainingModels"></a>
## Training Models
We now train a couple of different models on our data. The `adult` census dataset is a classification problem - the goal is to predict whether a particular individual exceeds an income threshold. For the purpose of generating a dashboard to upload, it is sufficient to train two basic classifiers. First, a logistic regression classifier:
```
lr_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
lr_predictor.fit(X_train, Y_train)
```
And for comparison, a support vector classifier:
```
svm_predictor = svm.SVC()
svm_predictor.fit(X_train, Y_train)
```
<a id="LoginAzureML"></a>
## Logging in to AzureML
With our two classifiers trained, we can log into our AzureML workspace:
```
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
```
<a id="RegisterModels"></a>
## Registering the Models
Next, we register our models. By default, the subroutine which uploads the models checks that the names provided correspond to registered models in the workspace. We define a utility routine to do the registering:
```
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
```
Now, we register the models. For convenience in subsequent method calls, we store the results in a dictionary, which maps the `id` of the registered model (a string in `name:version` format) to the predictor itself:
```
model_dict = {}
lr_reg_id = register_model("fairness_linear_regression", lr_predictor)
model_dict[lr_reg_id] = lr_predictor
svm_reg_id = register_model("fairness_svm", svm_predictor)
model_dict[svm_reg_id] = svm_predictor
```
<a id="LocalDashboard"></a>
## Using the Fairlearn Dashboard
We can now examine the fairness of the two models we have trained, both as a function of race and (binary) sex. Before uploading the dashboard to the AzureML portal, we will first instantiate a local instance of the Fairlearn dashboard.
Regardless of the viewing location, the dashboard is based on three things - the true values, the model predictions and the sensitive feature values. The dashboard can use predictions from multiple models and multiple sensitive features if desired (as we are doing here).
Our first step is to generate a dictionary mapping the `id` of the registered model to the corresponding array of predictions:
```
ys_pred = {}
for n, p in model_dict.items():
ys_pred[n] = p.predict(X_test)
```
We can examine these predictions in a locally invoked Fairlearn dashboard. This can be compared to the dashboard uploaded to the portal (in the next section):
```
from fairlearn.widget import FairlearnDashboard
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=ys_pred)
```
<a id="AzureUpload"></a>
## Uploading a Fairness Dashboard to Azure
Uploading a fairness dashboard to Azure is a two-stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. The required stages are therefore:
1. Precompute all the required metrics
1. Upload to Azure
### Computing Fairness Metrics
We use Fairlearn to create a dictionary which contains all the data required to display a dashboard. This includes both the raw data (true values, predicted values and sensitive features), and also the fairness metrics. The API is similar to that used to invoke the Dashboard locally. However, there are a few minor changes to the API, and the type of problem being examined (binary classification, regression etc.) needs to be specified explicitly:
```
sf = { 'Race': A_test.Race, 'Sex': A_test.Sex }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=ys_pred,
sensitive_features=sf,
prediction_type='binary_classification')
```
The `_create_group_metric_set()` method is currently underscored since its exact design is not yet final in Fairlearn.
### Uploading to Azure
We can now import the `azureml.contrib.fairness` package itself. We will round-trip the data, so there are two required subroutines:
```
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
```
Finally, we can upload the generated dictionary to AzureML. The upload method requires a run, so we first create an experiment and a run. The uploaded dashboard can be seen on the corresponding Run Details page in AzureML Studio. For completeness, we also download the dashboard dictionary which we uploaded.
```
exp = Experiment(ws, "notebook-01")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Sample notebook upload"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
```
Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
```
print(dash_dict == downloaded_dict)
```
<a id="Conclusion"></a>
## Conclusion
In this notebook we have demonstrated how to generate and upload a fairness dashboard to AzureML Studio. We have not discussed how to analyse the results and apply mitigations. Those topics will be covered elsewhere.
# Fitting a straight line to data
See also: the lecture notes notebook in the same directory
How we'll do this, in pseudo-code:
```
Iterate:
- I'll talk for a bit
- If we hit an exercise
- split into groups of 3 to solve it (5 minutes per exercise)
- one group will come up and implement on my computer
```
Python imports we'll need later...
```
from IPython import display
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
plt.style.use('apw-notebook.mplstyle')
%matplotlib inline
rnd = np.random.RandomState(seed=42)
```
$$
y = a\,x + b
$$
```
n_data = 16 # number of data points
a_true = 1.255 # randomly chosen truth
b_true = 4.507
# randomly generate some x values over some domain by sampling from a uniform distribution
x = rnd.uniform(0, 2., n_data)
x.sort() # sort the values in place
# evaluate the true model at the given x values
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
y = rnd.normal(y, y_err) # re-sample y data with noise
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
---
Now forget everything we just did! Someone just handed these data to you and we want to fit a model.
### Exercise 1:
Implement the functions to compute the weighted deviations below
```
def line_model(pars, x):
"""
Evaluate a straight line model at the input x values.
Parameters
----------
pars : list, array
This should be a length-2 array or list containing the
parameter values (a, b) for the (slope, intercept).
x : numeric, list, array
The coordinate values.
Returns
-------
y : array
The computed y values at each input x.
"""
return pars[0]*np.array(x) + pars[1]
def weighted_absolute_deviation(pars, x, y, y_err):
"""
Compute the weighted absolute deviation between the data
(x, y, y_err) and the model points computed with the input
parameters (pars).
"""
# IMPLEMENT ME
pass
def weighted_squared_deviation(pars, x, y, y_err):
"""
Compute the weighted squared deviation between the data
(x, y, y_err) and the model points computed with the input
parameters (pars).
"""
# IMPLEMENT ME
pass
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-2., a_true+2, 256)
b_grid = np.linspace(b_true-2., b_true+2, 256)
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
# a reshaped 256x256 grid of parameter values:
ab_grid.shape
fig,axes = plt.subplots(1, 2, figsize=(9,5.1), sharex=True, sharey=True)
for i,func in enumerate([weighted_absolute_deviation, weighted_squared_deviation]):
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, x, y, y_err)
axes[i].pcolormesh(a_grid, b_grid, func_vals.reshape(a_grid.shape),
cmap='Blues', vmin=func_vals.min(), vmax=func_vals.min()+256) # arbitrary scale
axes[i].set_xlabel('$a$')
# plot the truth
axes[i].plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
axes[i].axis('tight')
axes[i].set_title(func.__name__, fontsize=14)
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
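For reference, here is one possible implementation of the two stub functions (a sketch; try the exercise first). `line_model` is repeated so the cell runs on its own:

```python
import numpy as np

def line_model(pars, x):
    # same straight-line model as defined above
    return pars[0] * np.array(x) + pars[1]

def weighted_absolute_deviation(pars, x, y, y_err):
    # sum over data points of |residual| / sigma
    return np.sum(np.abs(y - line_model(pars, x)) / y_err)

def weighted_squared_deviation(pars, x, y, y_err):
    # chi-squared: sum over data points of residual^2 / sigma^2
    return np.sum((y - line_model(pars, x))**2 / y_err**2)
```

Both functions reduce to zero when the model passes exactly through every data point.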
Now we'll use one of the numerical function minimizers from Scipy to minimize these two functions and compare the resulting "fits":
```
x0 = [1., 1.] # starting guess for the optimizer
result_abs = minimize(weighted_absolute_deviation, x0=x0,
args=(x, y, y_err), # passed to the weighted_*_deviation function after pars
method='BFGS') # similar to Newton's method
result_sq = minimize(weighted_squared_deviation, x0=x0,
args=(x, y, y_err), # passed to the weighted_*_deviation function after pars
method='BFGS')
best_pars_abs = result_abs.x
best_pars_sq = result_sq.x
```
Let's now plot our two best-fit lines over the data:
```
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line_model(best_pars_abs, x_grid),
marker='', linestyle='-', label='absolute deviation')
plt.plot(x_grid, line_model(best_pars_sq, x_grid),
marker='', linestyle='-', label='squared deviation')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
```
### Least-squares / maximum likelihood with matrix calculus
$$
\newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
\newcommand{\bs}[1]{\boldsymbol{#1}}
\bs{\theta}_{\rm best} = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1} \,
\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{y}
$$
$$
\newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
\newcommand{\bs}[1]{\boldsymbol{#1}}
C = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1}
$$
### Exercise 2:
Implement the necessary linear algebra to solve for the best-fit parameters and the parameter covariance matrix, defined above.
```
# create matrices and vectors:
# Define the design matrix, X:
# X =
# Define the data covariance matrix, Cov:
# Cov =
Cinv = np.linalg.inv(Cov) # we'll need the inverse covariance matrix below
X.shape, Cov.shape, y.shape
# Write the matrix operations to get the parameter covariance matrix. It
# might help to know that you can transpose a numpy array with .T and that
# you can multiply two matrices or a matrix and a vector with the
# new @ operator (Python >=3.5):
# pars_Cov =
# Write out the necessary matrix operations to get the optimal parameters.
# Use the pars_Cov object from above:
# best_pars_linalg =
best_pars_sq - best_pars_linalg[::-1]
```
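One way to fill in the blanks (a sketch that regenerates the same synthetic data, with the same seed, so it runs standalone; in the notebook you would use the existing `x`, `y`, `y_err`):

```python
import numpy as np

# regenerate the synthetic data from above (same seed) so this cell runs standalone
rnd = np.random.RandomState(seed=42)
n_data = 16
x = rnd.uniform(0, 2., n_data)
x.sort()
y_err = rnd.uniform(0.1, 0.2, size=n_data)
y = rnd.normal(1.255*x + 4.507, y_err)

# design matrix with columns [1, x], so the parameter order is (b, a):
X = np.vander(x, N=2, increasing=True)
# diagonal data covariance matrix:
Cov = np.diag(y_err**2)
Cinv = np.linalg.inv(Cov)

# parameter covariance matrix and best-fit parameters:
pars_Cov = np.linalg.inv(X.T @ Cinv @ X)
best_pars_linalg = pars_Cov @ (X.T @ Cinv @ y)
```

The `(b, a)` parameter order from the `[1, x]` column convention is why the comparison with `best_pars_sq` reverses `best_pars_linalg` with `[::-1]`.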
Now we'll plot the 1 and 2-sigma error ellipses using the parameter covariance matrix:
```
# some tricks to get info we need to plot an ellipse, aligned with
# the eigenvectors of the covariance matrix
eigval,eigvec = np.linalg.eig(pars_Cov)
angle = np.degrees(np.arctan2(eigvec[1,0], eigvec[0,0]))
w,h = 2*np.sqrt(eigval)
from matplotlib.patches import Ellipse
fig,ax = plt.subplots(1, 1, figsize=(5,5))
for n in [1,2]:
ax.add_patch(Ellipse(best_pars_linalg, width=n*w, height=n*h, angle=angle,
fill=False, linewidth=3-n, edgecolor='#555555',
label=r'{}$\sigma$'.format(n)))
ax.plot(b_true, a_true, marker='o', zorder=10, linestyle='none',
color='#de2d26', label='truth')
ax.set_xlabel('$b$')
ax.set_ylabel('$a$')
ax.legend(loc='best')
fig.tight_layout()
```
## The Bayesian approach
Recall that:
$$
\ln\mathcal{L} = -\frac{1}{2}\left[N\,\ln(2\pi)
+ \ln|\boldsymbol{\Sigma}|
+ \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
\boldsymbol{\Sigma}^{-1} \,
\left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)
\right]
$$
but, with our assumptions, $\boldsymbol{\Sigma}$ is diagonal. We can replace the matrix operations with sums.
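Concretely, since $\boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_N^2)$, the log-likelihood becomes
$$
\ln\mathcal{L} = -\frac{1}{2}\left[N\,\ln(2\pi)
+ \sum_{i=1}^{N} \ln \sigma_i^2
+ \sum_{i=1}^{N} \frac{\left(y_i - a\,x_i - b\right)^2}{\sigma_i^2}
\right]
$$
which is what the `ln_likelihood` method below computes (note that $\ln\sigma_i^2 = 2\ln\sigma_i$).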
### Exercise 3:
Implement the log-prior method (`ln_prior`) on the model class below.
```
class StraightLineModel(object):
def __init__(self, x, y, y_err):
"""
We store the data as attributes of the object so we don't have to
keep passing it in to the methods that compute the probabilities.
"""
self.x = np.asarray(x)
self.y = np.asarray(y)
self.y_err = np.asarray(y_err)
def ln_likelihood(self, pars):
"""
We don't need to pass in the data because we can access it from the
attributes. This is basically the same as the weighted squared
deviation function, but includes the constant normalizations for the
Gaussian likelihood.
"""
N = len(self.y)
dy = self.y - line_model(pars, self.x)
ivar = 1 / self.y_err**2 # inverse-variance
return -0.5 * (N*np.log(2*np.pi) + np.sum(2*np.log(self.y_err)) + np.sum(dy**2 * ivar))
def ln_prior(self, pars):
"""
The prior only depends on the parameters, so we don't need to touch
the data at all. We're going to implement a flat (uniform) prior
over the ranges:
a : [0, 100]
b : [-50, 50]
"""
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
# IMPLEMENT ME
# if a is inside the range above, add log(1/100) to ln_prior_val, otherwise return -infinity
# IMPLEMENT ME
# if b is inside the range above, add log(1/100) to ln_prior_val, otherwise return -infinity
return ln_prior_val
def ln_posterior(self, pars):
"""
Up to a normalization constant, the log of the posterior pdf is just
the sum of the log likelihood plus the log prior.
"""
lnp = self.ln_prior(pars)
if np.isinf(lnp): # short-circuit if the prior is infinite (don't bother computing likelihood)
return lnp
lnL = self.ln_likelihood(pars)
lnprob = lnp + lnL
if np.isnan(lnprob):
return -np.inf
return lnprob
def __call__(self, pars):
return self.ln_posterior(pars)
# instantiate the model object with the data
model = StraightLineModel(x, y, y_err)
```
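For reference, a sketch of the flat prior (written here as a standalone function so it runs on its own; in the class it would be a method taking `self` as well):

```python
import numpy as np

def ln_prior(pars):
    """Flat prior: a in [0, 100], b in [-50, 50]."""
    a, b = pars
    ln_prior_val = 0.
    if 0 <= a <= 100:
        ln_prior_val += np.log(1/100)   # log of the uniform density 1/(100-0)
    else:
        return -np.inf
    if -50 <= b <= 50:
        ln_prior_val += np.log(1/100)   # log of the uniform density 1/(50-(-50))
    else:
        return -np.inf
    return ln_prior_val
```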
Now we'll repeat what we did above to map out the value of the log-posterior over a 2D grid of parameter values. Because we used a flat prior, you'll notice it looks identical to the visualization of the `weighted_squared_deviation` -- only the likelihood has any slope to it!
```
def evaluate_on_grid(func, a_grid, b_grid, args=()):
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, *args)
return func_vals.reshape(a_grid.shape)
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-5., a_true+5, 256)
b_grid = np.linspace(b_true-5., b_true+5, 256)
ln_prior_vals = evaluate_on_grid(model.ln_prior, a_grid, b_grid)
ln_like_vals = evaluate_on_grid(model.ln_likelihood, a_grid, b_grid)
ln_post_vals = evaluate_on_grid(model.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals, ln_like_vals, ln_post_vals]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
### Exercise 4:
Subclass the `StraightLineModel` class and implement a new prior. Replace the flat prior above with an uncorrelated 2D Gaussian centered on $(\mu_a,\mu_b) = (3., 5.5)$ with root-variances $(\sigma_a,\sigma_b) = (0.05, 0.05)$. Compare the 2D grid plot with the flat prior to the one with the Gaussian prior.
```
class StraightLineModelGaussianPrior(StraightLineModel): # verbose names are a good thing!
def ln_prior(self, pars):
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
# IMPLEMENT ME
# prior on a is a Gaussian with mean, stddev = (3, 0.05)
# IMPLEMENT ME
# prior on b is a Gaussian with mean, stddev = (5.5, 0.05)
return ln_prior_val
model_Gprior = StraightLineModelGaussianPrior(x, y, y_err)
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
ln_prior_vals2 = evaluate_on_grid(model_Gprior.ln_prior, a_grid, b_grid)
ln_like_vals2 = evaluate_on_grid(model_Gprior.ln_likelihood, a_grid, b_grid)
ln_post_vals2 = evaluate_on_grid(model_Gprior.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals2, ln_like_vals2, ln_post_vals2]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
```
We'll now switch back to using the uniform / flat prior.
---
## MCMC
The simplest MCMC algorithm is "Metropolis-Hastings". I'm not going to explain it in detail, but in pseudocode, it looks like this:
- Start from some position in parameter space, $\theta_0$ with posterior probability $\pi_0$
- Iterate from 1 to $N_{\rm steps}$:
    - Sample an offset $\delta\theta_0$ from some proposal distribution
- Compute a new parameter value using this offset, $\theta_{\rm new} = \theta_0 + \delta\theta_0$
    - Evaluate the posterior probability at the new parameter vector, $\pi_{\rm new}$
- Sample a uniform random number, $r \sim \mathcal{U}(0,1)$
- if $\pi_{\rm new}/\pi_0 > 1$ or $\pi_{\rm new}/\pi_0 > r$:
- store $\theta_{\rm new}$
- replace $\theta_0,\pi_0$ with $\theta_{\rm new},\pi_{\rm new}$
- else:
- store $\theta_0$ again
The proposal distribution has to be chosen and tuned by hand. We'll use a spherical / uncorrelated Gaussian distribution with root-variances set by hand:
```
def sample_proposal(*sigmas):
return np.random.normal(0., sigmas)
def run_metropolis_hastings(p0, n_steps, model, proposal_sigmas):
"""
Run a Metropolis-Hastings MCMC sampler to generate samples from the input
log-posterior function, starting from some initial parameter vector.
Parameters
----------
p0 : iterable
Initial parameter vector.
n_steps : int
Number of steps to run the sampler for.
model : StraightLineModel instance (or subclass)
A callable object that takes a parameter vector and computes
the log of the posterior pdf.
proposal_sigmas : list, array
A list of standard-deviations passed to the sample_proposal
function. These are like step sizes in each of the parameters.
"""
p0 = np.array(p0)
if len(proposal_sigmas) != len(p0):
raise ValueError("Proposal distribution should have same shape as parameter vector.")
# the objects we'll fill and return:
chain = np.zeros((n_steps, len(p0))) # parameter values at each step
ln_probs = np.zeros(n_steps) # log-probability values at each step
# we'll keep track of how many steps we accept to compute the acceptance fraction
n_accept = 0
# evaluate the log-posterior at the initial position and store starting position in chain
ln_probs[0] = model(p0)
chain[0] = p0
# loop through the number of steps requested and run MCMC
for i in range(1,n_steps):
# proposed new parameters
step = sample_proposal(*proposal_sigmas)
new_p = chain[i-1] + step
# compute log-posterior at new parameter values
new_ln_prob = model(new_p)
# log of the ratio of the new log-posterior to the previous log-posterior value
ln_prob_ratio = new_ln_prob - ln_probs[i-1]
if (ln_prob_ratio > 0) or (ln_prob_ratio > np.log(np.random.uniform())):
chain[i] = new_p
ln_probs[i] = new_ln_prob
n_accept += 1
else:
chain[i] = chain[i-1]
ln_probs[i] = ln_probs[i-1]
acc_frac = n_accept / n_steps
return chain, ln_probs, acc_frac
```
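Before tuning the sampler on the line model, it can help to sanity-check the recipe on a target with a known answer. The sketch below restates the same accept/reject loop on a 1D standard normal target with a hand-picked proposal sigma of 2 (these choices are for illustration only; they are not part of the exercise):

```python
import numpy as np

# Compact restatement of the Metropolis-Hastings loop above, targeting a
# 1D standard normal so the correct answer (mean 0, std 1) is known.
def ln_gauss(p):
    return -0.5 * float(p[0])**2

np.random.seed(42)
n_steps = 20000
chain = np.zeros((n_steps, 1))
ln_probs = np.zeros(n_steps)
ln_probs[0] = ln_gauss(chain[0])
n_accept = 0
for i in range(1, n_steps):
    new_p = chain[i-1] + np.random.normal(0., 2.0, size=1)  # proposal sigma chosen by hand
    new_ln_prob = ln_gauss(new_p)
    ln_prob_ratio = new_ln_prob - ln_probs[i-1]
    if (ln_prob_ratio > 0) or (ln_prob_ratio > np.log(np.random.uniform())):
        chain[i], ln_probs[i] = new_p, new_ln_prob
        n_accept += 1
    else:
        chain[i], ln_probs[i] = chain[i-1], ln_probs[i-1]
acc_frac = n_accept / n_steps
```

If the loop is right, the sample mean and standard deviation of the chain should land near 0 and 1, and the acceptance fraction should sit in a healthy middle range.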
### Exercise 5:
Choose a starting position, i.e. values for `a` and `b` to start the MCMC from. In general, a good way to do this is to sample from the prior pdf. Generate values for `a` and `b` by sampling from a uniform distribution over the domain we defined above. Then, run the MCMC sampler from this initial position for 8192 steps. Play around with ("tune", as they say) the `proposal_sigmas` until you get an acceptance fraction of around ~40%.
```
# starting position:
# p0 =
# execute run_metropolis_hastings():
# chain,probs,acc_frac = run_metropolis_hastings(...)
print("Acceptance fraction: {:.1%}".format(acc_frac))
```
---
Visualizing the MCMC chain:
```
fig,ax = plt.subplots(1, 1, figsize=(5,5))
ax.pcolormesh(a_grid, b_grid, ln_post_vals, # from the grid evaluation way above
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
ax.axis('tight')
fig.tight_layout()
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.plot(chain[:512,0], chain[:512,1], marker='', color='k', linewidth=1.)
ax.set_xlabel('$a$')
ax.set_ylabel('$b$')
```
We can also look at the individual parameter traces, i.e. the 1D functions of parameter value vs. step number for each parameter separately:
```
fig,axes = plt.subplots(len(p0), 1, figsize=(5,7), sharex=True)
for i in range(len(p0)):
axes[i].plot(chain[:,i], marker='', drawstyle='steps')
axes[0].axhline(a_true, color='r', label='true')
axes[0].legend(loc='best')
axes[0].set_ylabel('$a$')
axes[1].axhline(b_true, color='r')
axes[1].set_ylabel('$b$')
fig.tight_layout()
```
Remove the "burn-in" phase and thin the chains:
```
good_samples = chain[2000::8]
good_samples.shape
```
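The burn-in of 2000 steps and thinning interval of 8 above are eyeballed choices. A rough way to check a thinning interval is to look at the chain's normalized autocorrelation and thin by roughly the lag at which it dies off. A minimal numpy sketch, illustrated on synthetic series rather than the chain above:

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of a 1D chain, for lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / var
                     for k in range(max_lag + 1)])

np.random.seed(0)
# white noise decorrelates immediately...
white = np.random.normal(size=5000)
rho_white = autocorr(white, max_lag=5)
# ...while an AR(1) series (a stand-in for a correlated MCMC chain) does not
ar = np.zeros(5000)
for t in range(1, 5000):
    ar[t] = 0.9 * ar[t - 1] + np.random.normal()
rho_ar = autocorr(ar, max_lag=5)
```

For the white-noise series the lag-1 autocorrelation is near zero, while for the AR(1) series it stays near 0.9, so the latter would need much more aggressive thinning.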
What values should we put in the abstract?
```
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
Recall that the true values are:
```
a_true, b_true
```
Plot lines sampled from the posterior pdf:
```
plt.figure(figsize=(6,5))
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
for pars in good_samples[:128]: # only plot 128 samples
plt.plot(x_grid, line(pars, x_grid),
marker='', linestyle='-', color='#3182bd', alpha=0.1, zorder=-10)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
Or, we can plot the samples using a _corner plot_ to visualize the structure of the 2D and 1D (marginal) posteriors:
```
# uncomment and run this line if the import fails:
# !source activate statsseminar; pip install corner
import corner
fig = corner.corner(chain[2000:], bins=32, labels=['$a$', '$b$'], truths=[a_true, b_true])
```
---
# Fitting a straight line to data with intrinsic scatter
```
V_true = 0.5**2
n_data = 42
# we'll keep the same parameters for the line as we used above
x = rnd.uniform(0, 2., n_data)
x.sort() # sort the values in place
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
# add Gaussian intrinsic width
y = rnd.normal(y, np.sqrt(y_err**2 + V_true)) # re-sample y data with noise and intrinsic scatter
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
```
Let's first naively fit the data assuming no intrinsic scatter using least-squares:
```
X = np.vander(x, N=2, increasing=True)
Cov = np.diag(y_err**2)
Cinv = np.linalg.inv(Cov)
best_pars = np.linalg.inv(X.T @ Cinv @ X) @ (X.T @ Cinv @ y)
pars_Cov = np.linalg.inv(X.T @ Cinv @ X)
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line(best_pars[::-1], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
```
The covariance matrix for the parameters is:
```
pars_Cov
```
### Exercise 6:
Subclass the `StraightLineModel` class and implement new prior and likelihood functions (`ln_prior` and `ln_likelihood`). Our model will now have 3 parameters: `a`, `b`, and `lnV`, the log of the intrinsic scatter variance. Use flat priors on all of these parameters. In fact, we'll be even lazier and drop the constant normalization terms: if a parameter vector is within the ranges below, return 0. (log(1.)); otherwise return -infinity:
```
class StraightLineIntrinsicScatterModel(StraightLineModel):
def ln_prior(self, pars):
""" The prior only depends on the parameters """
a, b, lnV = pars
# flat priors on a, b, lnV: same bounds on each, (-100,100)
# IMPLEMENT ME
# this is only valid up to a numerical constant
return 0.
def ln_likelihood(self, pars):
""" The likelihood function evaluation requires a particular set of model parameters and the data """
a,b,lnV = pars
V = np.exp(lnV)
# IMPLEMENT ME
# the variance now has to include the intrinsic scatter V
```
---
```
scatter_model = StraightLineIntrinsicScatterModel(x, y, y_err)
x0 = [5., 5., 0.] # starting guess for the optimizer
# we have to minimize the negative log-likelihood to maximize the likelihood
result_ml_scatter = minimize(lambda *args: -scatter_model.ln_likelihood(*args),
x0=x0, method='BFGS')
result_ml_scatter
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line(result_ml_scatter.x[:2], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
V_true, np.exp(result_ml_scatter.x[2])
```
### Exercise 7:
To quantify our uncertainty in the parameters, we'll run MCMC using the new model. Run MCMC for 65536 steps and visualize the resulting chain. Make sure the acceptance fraction is between ~25-50%.
```
# IMPLEMENT ME
# p0 =
# chain,probs,acc_frac = ...
print("Acceptance fraction: {:.1%}".format(acc_frac))
# IMPLEMENT ME
# plot the 1D parameter traces
# IMPLEMENT ME
# make a corner plot of the samples after they have converged
# IMPLEMENT ME
# thin the chain, remove burn-in, and report median and percentiles:
# good_samples =
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b', r'\ln V']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
Compare this to the diagonal elements of the covariance matrix we got from ignoring the intrinsic scatter and doing least-squares fitting:
```
disp_str = ""
for i,name in zip([1,0], ['a', 'b']):
fmt_str = r'{name}={val:.2f} \pm {err:.2f}'
disp_str += fmt_str.format(name=name, val=best_pars[i], err=np.sqrt(pars_Cov[i,i]))
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
```
What do you notice about the percentiles of the marginal posterior samples as compared to the least-squares parameter uncertainties?
```
import math
import copy
sample1 = """
.#..#
.....
#####
....#
...##
"""
sample1 = [l for l in sample1.splitlines() if len(l)>0]
sample1
def find_asteroids(data):
asteroids = []
for y, line in enumerate(data):
for x, char in enumerate(line):
if char == "#":
asteroids.append(dict(pos=(x,y)))
return asteroids
def calculate_distance(point, asteroids):
for a in asteroids:
dx = point[0] - a["pos"][0]
dy = point[1] - a["pos"][1]
l = math.sqrt(dx**2 + dy**2)
a["dist"] = (dx,dy,l)
d = math.gcd(dx, dy)
if d>0:
a["angle"] = (int(dx/d), int(dy/d))
return asteroids
def filter_visible(asteroids):
asteroids = sorted(asteroids, key=lambda x: x["dist"][2], reverse=True)
angle_dict = {}
for a in asteroids:
if "angle" in a:
angle_dict[a["angle"]] = a
return angle_dict
def find_visible(asteroids):
for a in asteroids:
x = calculate_distance(a["pos"], copy.deepcopy(asteroids))
x = filter_visible(x)
a["visible"] = len(x)
return asteroids, max(asteroids, key=lambda x: x["visible"])
asteroids = find_asteroids(sample1)
asteroids = calculate_distance((4,4), asteroids)
asteroids = filter_visible(asteroids)
aster = {a["pos"]: a for a in asteroids.values()}
for x in range(0,5):
line = ""
for y in range(0,5):
p = aster.get((x,y))
line += "#" if p is not None else "."
print(line)
asteroids = find_asteroids(sample1)
asteroids = find_visible(asteroids)
asteroids[1]
sample2 = """
......#.#.
#..#.#....
..#######.
.#.#.###..
.#..#.....
..#....#.#
#..#....#.
.##.#..###
##...#..#.
.#....####
"""
sample2 = [l for l in sample2.splitlines() if len(l)>0]
asteroids = find_asteroids(sample2)
asteroids = find_visible(asteroids)
asteroids[1]
sample3 = """
#.#...#.#.
.###....#.
.#....#...
##.#.#.#.#
....#.#.#.
.##..###.#
..#...##..
..##....##
......#...
.####.###.
"""
sample3 = [l for l in sample3.splitlines() if len(l)>0]
asteroids = find_asteroids(sample3)
asteroids = find_visible(asteroids)
asteroids[1]
sample5 = """
.#..##.###...#######
##.############..##.
.#.######.########.#
.###.#######.####.#.
#####.##.#.##.###.##
..#####..#.#########
####################
#.####....###.#.#.##
##.#################
#####.##.###..####..
..######..##.#######
####.##.####...##..#
.#####..#.######.###
##...#.##########...
#.##########.#######
.####.#.###.###.#.##
....##.##.###..#####
.#.#.###########.###
#.#.#.#####.####.###
###.##.####.##.#..##
"""
sample5 = [l for l in sample5.splitlines() if len(l)>0]
asteroids = find_asteroids(sample5)
asteroids = find_visible(asteroids)
asteroids[1]
with open("10-input.txt", "rt") as FILE:
data = FILE.readlines()
data = [d.strip() for d in data]
asteroids = find_asteroids(data)
asteroids = find_visible(asteroids)
asteroids[1]
```
# Part 2
```
sample_p2 = """
.#....#####...#..
##...##.#####..##
##...#...#.#####.
..#.........###..
..#.#.....#....##
"""
sample_p2 = [l for l in sample_p2.splitlines() if len(l)>0]
def plot(point, asteroids):
x_min = min(asteroids, key=lambda a: a["pos"][0])["pos"][0]
x_max = max(asteroids, key=lambda a: a["pos"][0])["pos"][0]
y_min = min(asteroids, key=lambda a: a["pos"][1])["pos"][1]
y_max = max(asteroids, key=lambda a: a["pos"][1])["pos"][1]
a_dict = {a["pos"]: ix for ix, a in enumerate(asteroids)}
for y in range(y_min, y_max+1):
line = ""
for x in range(x_min, x_max+1):
pos = (x,y)
a = a_dict.get(pos)
if pos == point:
line += "X"
else:
line += "#" if a is not None else "."
print (line)
plot((8, 3), find_asteroids(sample_p2))
from math import asin, acos, sqrt, degrees
def degrees_to(x, y):
l = sqrt(x**2 + y**2)
d = degrees(asin(x/l))
if x > 0 and y < 0:
d = 90+d
elif x < 0 and y < 0:
d = 270 + d
elif x < 0 and y >= 0:
d = 360 + d
return d
def sort_by_angle(point, data):
asteroids = find_asteroids(data)
asteroids = calculate_distance(point, asteroids)
asteroids = [a for a in asteroids if a.get("angle") is not None]
for a in asteroids:
A = a.get("angle")
a["angle"] = (A[0], A[1], degrees_to(-A[0], A[1]))
visible = filter_visible(asteroids)
for v in visible.values():
v["visible"] = True
asteroids.sort(key=lambda a: a["angle"][2])
return asteroids
def plot_d(asteroids, point=None):
x_min = min(asteroids, key=lambda a: a["pos"][0])["pos"][0]
x_max = max(asteroids, key=lambda a: a["pos"][0])["pos"][0]
y_min = min(asteroids, key=lambda a: a["pos"][1])["pos"][1]
y_max = max(asteroids, key=lambda a: a["pos"][1])["pos"][1]
a_dict = {a["pos"]: ix for ix, a in enumerate(asteroids)}
for y in range(y_min, y_max+1):
line = ""
for x in range(x_min, x_max+1):
pos = (x,y)
a = a_dict.get(pos)
if a is not None:
line += "{:02}".format(a)
elif (x,y) == point:
line += "XX"
else:
line += ".."
print (line)
asteroids = sort_by_angle((8,3), sample_p2)
plot_d(asteroids, point=(8,3))
asteroids[:5]
asteroids = sort_by_angle((11,13), sample5)
# We have more than 200 visible asteroids, so we only need to consider the current visible set
visible = [a for a in asteroids if a.get("visible") is not None]
visible[199]
asteroids = sort_by_angle((31,20), data)
# We have more than 200 visible asteroids, so we only need to consider the current visible set
visible = [a for a in asteroids if a.get("visible") is not None]
visible[199]
```
# Demonstration notebook of WaveGlow
This demonstration notebook will:
1. Define components in the model
2. Load pre-trained model and generate waveform samples
A few notes:
* Official implementation is in https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2/waveglow
* The model here is re-implemented for tutorial purposes
* It is NOT intended to surpass the official implementation
* Post-processing such as de-noising is not included in this notebook
* Your contribution is welcome to improve it
Modules for WaveGlow are defined in `../sandbox/block_waveglow.py`. For convenience, I copy those modules to this notebook and demonstrate the usage.
The project to train and run a WaveGlow on CMU arctic database is available in `../project/05-nn-vocoder/waveglow`.
## 1. Define a WaveGlow
WaveGlow in the paper has a fixed model structure
* **Condition module**: process and up-sample input conditional features
* **Squeeze module**: squeeze the length of target waveform and input conditional features
* WaveGlow core: **3 WaveGlow blocks**, each block contains **4 flow steps**, and each flow steps contains **8 dilated conv layers**.
WaveGlow paper simply says **12 coupling layers and 12 invertible 1x1 convolutions**, **output 2 of the channels after every 4 coupling layers**.
But it is more convenient to define the cascade of one coupling layer and one 1x1 conv layer as one **flow step**; then **4 flow steps** make one WaveGlow block. The early outputs are extracted from the outputs of the 1st and 2nd WaveGlow blocks.
('Flow step' may not be the best name, but I will use it here.)
**During training**
* input feature is in shape (B, N, D), i.e., (Batch, Num_of_frame, Dimension)
* target waveform is in shape (B, T, 1), i.e., (Batch, Time length, 1)
* maximize the likelihood $\sum_{b=1}^{3} \log \mathcal{N}(\boldsymbol{z}_{b}; \boldsymbol{0}, \boldsymbol{I}) + \log(\det|Jac|_b)$
$z_1, z_2$ are referred to as 'early output' -- latent z extracted from the 1st and 2nd WaveGlow block. This is also called multi-scale in Glow (Kingma, D. P. & Dhariwal, P. Glow: Generative Flow with Invertible 1x1 Convolutions. arXiv Prepr. arXiv1807.03039 (2018)).
```sh
.
===============================================
| WaveGlow Core |
| |---------------------------------------> log_detJac1
| |---------------------------------------> z1 -> N(0, I) (B, T/8, 2)
| | |
| | |------------------------> log_detJac2
| | |------------------------> z2 -> N(0, I) (B, T/8, 2)
| | | |
--------- (B, T/8, 8)| ----------- ----------- ----------- ---> log_detJac3
Waveform -->|squeeze| ------------> |WGBlock 1| -> |WGBlock 2| -> |WGBlock 3| ---> z3 -> N(0, I) (B, T/8, 4)
(B, T, 1) --------- | ----------- ----------- ----------- |
| ^ ^ ^ |
========|==============|==============|========
--------- ---------------|---------------
|squeeze| -----------------------------------
--------- (B, T/8, 8D)
^
| up-sampled features (B, T, D)
-----------
input_feat->|condition|
(B, N, D) -----------
```
**During generation**
* input feature is in shape (Batch, Num_of_frame, Dimension)
* target waveform is in shape (Batch, Time length, 1)
* Draw random noise $\{\boldsymbol{z}_1, \boldsymbol{z}_2, \boldsymbol{z}_3\}$ and do reverse transformation
```sh
.
===============================================
| WaveGlow Core |
| |
| |--------------------------------------- z1 <- N(0, I) (B, T/8, 2)
| | |
| | |
| | |------------------------ z2 <- N(0, I) (B, T/8, 2)
| v v |
--------- (B, T/8, 8)| ----------- ----------- ----------- |
Waveform <--|de-sque| <----------- |WGBlock 1| <- |WGBlock 2| <- |WGBlock 3| <-- z3 <- N(0, I) (B, T/8, 4)
(B, T, 1) --------- | ----------- ----------- ----------- |
| ^ ^ ^ |
========|==============|==============|========
--------- ---------------|---------------
|squeeze| -----------------------------------
--------- (B, T/8, 8D)
^
| up-sampled features (B, T, D)
-----------
input_feat->|condition|
(B, N, D) -----------
```
Details of each module or block are illustrated in the following sections
### 1.1 Preparation
```
# load packages
from __future__ import absolute_import
from __future__ import print_function
import os
import sys
import numpy as np
import torch
import torch.nn as torch_nn
import torch.nn.functional as torch_nn_func
# basic nn blocks
import sandbox.block_nn as nii_nn
import sandbox.util_dsp as nii_dsp
import sandbox.block_glow as nii_glow
import core_scripts.data_io.conf as nii_io_conf
# misc functions for this demonstration book
from plot_tools import plot_API
from plot_tools import plot_lib
import tool_lib
import plot_lib as plot_lib_legacy
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (10, 5)
```
### 1.1 Condition module
It transforms and up-samples the input acoustic features (e.g., Mel-spec)
```sh
.
===================================
| condition module |
input_feat | ---------------------------- | up-sampled features
(Batch, frame_num, dimension) -> | | transposed convolution 1d| | -> (Batch, waveform_length, dimension)
| ---------------------------- |
===================================
```
Similar to the condition modules in WaveNet and many other vocoders, the waveform length = frame_num * up-sampling rate. The up-sampling rate is determined by the waveform sampling rate and the frame shift used when extracting the input features. For example, a 5 ms frame shift on a 16 kHz waveform gives 16 * 5 = 80, so each frame must be up-sampled by a factor of 80.
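The arithmetic for the example above, as a quick sketch:

```python
# Up-sampling rate for the example: 5 ms frame shift at a 16 kHz sampling rate
sampling_rate = 16000          # waveform samples per second
frame_shift = 0.005            # feature frame shift in seconds (5 ms)
upsample_rate = int(sampling_rate * frame_shift)   # samples per feature frame
frame_num = 10                 # e.g. 10 input feature frames
waveform_length = frame_num * upsample_rate        # waveform length after up-sampling
```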
A condition module can be implemented in numerous ways. Here we use transposed convolution (https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html)
```
class upsampleByTransConv(torch_nn.Module):
"""upsampleByTransConv
Upsampling layer using transposed convolution
"""
def __init__(self, feat_dim, upsample_rate, window_ratio=5):
"""upsampleByTransConv(feat_dim, upsample_rate, window_ratio=5)
Args
----
feat_dim: int, input feature should be (batch, length, feat_dim)
upsample_rate, int, output feature will be
(batch, length*upsample_rate, feat_dim)
window_ratio: int, default 5, window length of transconv will be
upsample_rate * window_ratio
"""
super(upsampleByTransConv, self).__init__()
window_l = upsample_rate * window_ratio
self.m_layer = torch_nn.ConvTranspose1d(
feat_dim, feat_dim, window_l, stride=upsample_rate)
self.m_uprate = upsample_rate
return
def forward(self, x):
""" y = upsampleByTransConv(x)
input
-----
x: tensor, (batch, length, feat_dim)
output
------
y: tensor, (batch, length*upsample_rate, feat_dim)
"""
l = x.shape[1] * self.m_uprate
y = self.m_layer(x.permute(0, 2, 1))[:, :, 0:l]
return y.permute(0, 2, 1).contiguous()
# Example
batch = 2
frame_num = 10
dimension = 5
upsample_rate = 80
m_cond = upsampleByTransConv(dimension, upsample_rate)
input_data = torch.randn([batch, frame_num, dimension])
with torch.no_grad():
output_data = m_cond(input_data)
print("Input feature batch {:d}, frame {:d}, dim {:d} ".format(*input_data.shape))
print("Output feature batch {:d}, frame {:d}, dim {:d} ".format(*output_data.shape))
plot_API.plot_API([input_data[0, :, 0].numpy(), output_data[0, :, 0].numpy()], plot_lib.plot_signal, 'v',
{'sub': [{'title': "Input feature in 1st dimension of 1st data in the data", 'xlabel': 'Frame index'},
{'title': "Up-sampled feature", 'xlabel': "Time index"}],
'hspace': 0.5})
# The conv layer is randomly initialized, the up-sampled feature may be quite random.
```
### 1.2 Squeeze
Squeeze module changes the shape of the input feature.
Pay attention to the following points:
* How to align the elements in the last dimension when the last dimension is >1?
* Reverse operation should be implemented
* WaveGlow squeeze by a factor of 8
```sh
.
---------
Waveform <-> |squeeze| <-> Squeezed waveform
(B, T, 1) --------- (B, T/8, 8)
---------
Feature <-> |squeeze| <-> Squeezed Feature
(B, T, D) --------- (B, T/8, 8D)
```
```
class SqueezeForWaveGlow(torch_nn.Module):
"""SqueezeForWaveGlow
Squeeze layer for WaveGlow
"""
def __init__(self, mode = 1, mode_1_para=8):
"""SqueezeForGlow(mode=1)
Args
----
mode: int, mode of this squeeze layer
mode_1_para: int, factor of squeeze (default 8)
mode == 1: original squeeze method by squeezing 8 points
"""
super(SqueezeForWaveGlow, self).__init__()
self.m_mode = mode
self.m_mode_1_para = mode_1_para
return
def get_expected_squeeze_length(self, orig_length):
# return expected length after squeezing
if self.m_mode == 1:
return orig_length//self.m_mode_1_para
def get_squeeze_factor(self):
# return the configuration for squeezing
if self.m_mode == 1:
return self.m_mode_1_para
def forward(self, x):
"""SqueezeForWaveGlow(x)
input
-----
x: tensor, (batch, length, feat_dim)
output
------
y: tensor, (batch, length // squeeze, feat_dim * squeeze)
"""
if self.m_mode == 1:
# squeeze, the 8 points should be the last dimension
squeeze_len = x.shape[1] // self.m_mode_1_para
# trim length first
trim_len = squeeze_len * self.m_mode_1_para
x_tmp = x[:, 0:trim_len, :]
# (batch, time//squeeze_size, squeeze_size, dim)
x_tmp = x_tmp.view(x_tmp.shape[0], squeeze_len,
self.m_mode_1_para, -1)
# (batch, time//squeeze_size, dim, squeeze_size)
x_tmp = x_tmp.permute(0, 1, 3, 2).contiguous()
# (batch, time//squeeze_size, dim * squeeze_size)
return x_tmp.view(x_tmp.shape[0], squeeze_len, -1)
else:
raise NotImplementedError("SqueezeForWaveGlow mode not implemented")
def reverse(self, x_squeezed):
if self.m_mode == 1:
# (batch, time//squeeze_size, dim * squeeze_size)
batch, squeeze_len, squeeze_dim = x_squeezed.shape
# (batch, time//squeeze_size, dim, squeeze_size)
x_tmp = x_squeezed.view(
batch, squeeze_len, squeeze_dim // self.m_mode_1_para,
self.m_mode_1_para)
# (batch, time//squeeze_size, squeeze_size, dim)
x_tmp = x_tmp.permute(0, 1, 3, 2).contiguous()
# (batch, time, dim)
x = x_tmp.view(batch, squeeze_len * self.m_mode_1_para, -1)
else:
raise NotImplementedError("SqueezeForWaveGlow mode not implemented")
return x
```
Let's use an example to show it.
For illustration, we set the squeeze factor to 3.
```
m_squeeze = SqueezeForWaveGlow(mode_1_para=3)
# First example
# last dimension size is 1, like waveform, (B, T, 1)
# create input (B=1, T=6, 1)
length = 6
input_data = torch.tensor([np.arange(length)+1]).T
input_data = input_data.unsqueeze(0)
# squeeze
with torch.no_grad():
squeezed_data = m_squeeze(input_data)
plot_lib_legacy.plot_tensor(input_data, title="Input data batch {:d}, length {:d}, dim {:d} ".format(*input_data.shape), color_on_value=True)
plot_lib_legacy.plot_tensor(squeezed_data, title="Squeezed data batch {:d}, length {:d}, dim {:d} ".format(*squeezed_data.shape), color_on_value=True)
```
Note that **the height of the matrix in the figure corresponds to the time axis**.
(For a tensor of shape (B, T, D), the height of the matrix in the figure corresponds to length T, the width of the matrix in the figure corresponds to dimension D.)
```
# Second example,
# data has shape (B=2, T=6, 2)
# create a data of shape (batch=2, length=6, dimension=2)
length = 6
input_data = torch.tensor([np.arange(length)+1, np.arange(length)*-1-1]).T
input_data = torch.stack([input_data, input_data], dim=0)
# squeeze
with torch.no_grad():
squeezed_data = m_squeeze(input_data)
plot_lib_legacy.plot_tensor(input_data, title="Input data batch {:d}, length {:d}, dim {:d} ".format(*input_data.shape), color_on_value=True)
plot_lib_legacy.plot_tensor(squeezed_data, title="Squeezed data batch {:d}, length {:d}, dim {:d} ".format(*squeezed_data.shape), color_on_value=True)
```
In the above example, the input data has shape (2, 6, 2). The squeezed data has shape (2, 2, 6)
input_data[0, 0:3, 0]=[1.0, 2.0, 3.0] and input_data[0, 0:3, 1] =[-1.0, -2.0, -3.0] are squeezed into squeezed_data[0, 0, 0:6]
```
print(input_data[0, 0:3, 0])
print(input_data[0, 0:3, 1])
print(squeezed_data[0, 0, 0:6])
```
**How to align the elements in the last dimension when the last dimension is >1?**
As the example shows, elements adjacent in time remain adjacent in the squeezed tensor. Therefore, squeezed_data[0, 0, 0:6] is [1, 2, 3, -1, -2, -3], **NOT** [1, -1, 2, -2, 3, -3]
**Reverse (de-squeeze)**
It is straightforward to de-squeeze
```
# de-squeeze
with torch.no_grad():
de_squeezed_data = m_squeeze.reverse(squeezed_data)
plot_lib_legacy.plot_tensor(de_squeezed_data, title="Recovered data data batch {:d}, length {:d}, dim {:d} ".format(*de_squeezed_data.shape), color_on_value=True)
```
### 1.3 First glimpse on WaveGlow core part
The WaveGlow core module contains **3 WaveGlow blocks (WGBlocks)**, each WGBlock contains **4 WaveGlow flow steps**.
```sh
. ===============================================
| WaveGlow Core |
| |---------------------------------------> log_detJac1
| |---------------------------------------> z1 -> N(0, I) (B, T/8, 2)
| | |
| | |------------------------> log_detJac2
| | |------------------------> z2 -> N(0, I) (B, T/8, 2)
| | | 3 |
| ----------- 1 ----------- 4 ----------- ---> log_detJac3
Squeezed ---> |WGBlock 1| -> |WGBlock 2| -> |WGBlock 3| ---> z3 -> N(0, I) (B, T/8, 4)
wave | ----------- ----------- ----------- |
(B, T/8, 8)| ^ ^ 2 ^ |
========|==============|==============|========
---------------|---------------
up-sampled and squeezed feature
(B, T/8, 8D)
```
**WaveGlow block (WGBlock)**
```sh
. |-----------> log_detJac
3 | |---> early output z
| | (B, T/8, d)
=====================================================|==========
| WaveGlow block | | |
| -------------> + -----------> + ------------ + | |
| | ^ ^ ^ | |
| | | | | | |
| ----------- ----------- ----------- ----------- | | 4
1 --> |Flowstep1| -> |Flowstep2| -> |Flowstep3| -> |Flowstep4| ---|-> input to next block x
(B, T/8, P)| ----------- ----------- ----------- ----------- | (B, T/8, P-d)
| ^ ^ ^ ^ |
========|==============|==============|==============|==========
| | | |
---------------|------------------------------
2 (B, T/8, 8D)
```
where input is
1. output of previous block or squeezed waveform (if this is the 1st block), shape (B, T/8, P)
2. up-sampled and squeezed condition features, shape (B, T/8, 8D)
output is:
3. early output z, (B, T/8, d) and log_det|Jac| (scalar)
4. input to the next WaveGlow block (B, T/8, P-d)
The input and output tensor shape for the three WaveGlow blocks are
| | 1 input | 2 condition | 3 early output z | 4 output latent x |
|-----------------|---|---|---|---|
| WaveGlow Block1 | (B, T/8, 8) | (B, T/8, 8D) | (B, T/8, 2) | (B, T/8, 6) |
| WaveGlow Block2 | (B, T/8, 6) | (B, T/8, 8D) | (B, T/8, 2) | (B, T/8, 4) |
| WaveGlow Block3 | (B, T/8, 4) | (B, T/8, 8D) | (B, T/8, 4) | -|
Details of the flow step and the WaveGlow block are defined in the following sections
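The channel counts in the table above follow a simple budget, sketched here for illustration: the squeezed waveform enters block 1 with 8 channels, blocks 1 and 2 each peel off 2 early-output channels, and block 3 emits whatever remains.

```python
# Channel bookkeeping for the three WaveGlow blocks in the table above
channels = 8                    # squeezed waveform channels entering block 1
early_outputs = []
for n_early in [2, 2]:          # early-output channels of blocks 1 and 2
    early_outputs.append(n_early)
    channels -= n_early         # channels passed on to the next block
early_outputs.append(channels)  # block 3 outputs all remaining channels
```

The early outputs (2, 2, 4) sum back to the 8 input channels, so no dimension is lost across the flow.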
### 1.4. One flowstep
**A flow step** looks like this:
It contains an invertible 1x1 conv and an affine transformation layer.
The parameters for affine transformation are produced by a WaveNet block.
```sh
. |---------------> log_detJac
==============================================================================================|==============
| Flow step of WaveGlow | |
| r (B, T/8, P/2) ---------------------- |
| |----------------------------------------------------------------->| Affine transform | |
| | | ra + b | |
| | ============================================================= | / a (B, T/8, P/2) | |
| ------------ | |WaveNet blocks|--------------> + --------------> + --|FC|--|--->| | |
| |invertible| / | | | | | | \ b (B, T/8, P/2) | |
input --> | 1x1 conv | \ | ---- --------------- --------------- --------------- | ---------------------- |
(B, T/8, P)| ------------ |---|->|FC|->|WaveNetBlock1|->|WaveNetBlock2|...|WaveNetBlock8| | | |
| | | ---- --------------- --------------- --------------- | | |
| | | ^ ^ ^ | | ra + b |
| | ===============|================|=================|========== v |
| | | | | q ----------- |
| |------------------|----------------|-----------------|---------------->| Concate | -------|--> output
| q (B, T/8, P/2) | | | ----------- | (B, T/8, P)
====================================|================|=================|=====================================
| | |
------------------------------------
2
(B, T/8, 8D)
```
#### 1.4.1 Invertible 1x1 conv
The name "invertible 1x1 conv" may be hard to understand, but it boils down to two things:
* 1x1 conv: for 1D data, I prefer to call it a "fully-connected" layer -- indeed it is implemented as `torch.matmul(data, weight)`, not `torch.conv1d`;
* invertible: the transformation matrix `weight` is a square matrix, and it should be invertible. It must remain invertible even after its parameters are updated through model training
In all, "invertible 1x1 conv" means $y=xA$, and its reverse transformation is $x=yA^{-1}$. Both $x$ and $y$ have shape $(B, T/8, P)$, while $A$ is a matrix of size $(P, P)$.
##### **How can it shuffle dimension? A toy example**
```
# create a data of shape (batch=1, length=3, dimension=4)
length = 3
input_data = torch.tensor([np.arange(length)+1, np.arange(length)*-1-1, np.arange(length)+10, np.arange(length)*-1-10], dtype=torch.float32).T
input_data = input_data.unsqueeze(0)
# create a transformation matrix
weight_mat = torch.tensor([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]], dtype=input_data.dtype)
# transform
with torch.no_grad():
output_data = torch.matmul(input_data, weight_mat)
from plot_tools import table_API
print("Input data x:")
table_API.print_table(input_data[0].numpy(), None, None, print_latex_table=False, print_format="3.1f")
print("Transformation matrix A:")
table_API.print_table(weight_mat.numpy(), None, None, print_latex_table=False, print_format="3.1f")
print("Transformed data y=xA:")
table_API.print_table(output_data[0].numpy(), None, None, print_latex_table=False, print_format="3.1f")
# inverse transformation
with torch.no_grad():
weight_mat_inv = torch.inverse(weight_mat)
reversed_data = torch.matmul(output_data, weight_mat_inv)
print("Inverted transformation matrix A^-1:")
table_API.print_table(weight_mat_inv.numpy(), None, None, print_latex_table=False, print_format="3.1f")
print("Reverse transformed data x=yA^-1:")
table_API.print_table(reversed_data[0].numpy(), None, None, print_latex_table=False, print_format="3.1f")
```
Note that the input example data has shape (1, 3, 4).
By multiplying with the transformation matrix, we can see how the columns (i.e., the last dimension of the input data) are shuffled.
The transformation matrix here is a simple permutation matrix (https://en.wikipedia.org/wiki/Permutation_matrix). Such a permutation matrix is of course invertible.
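As a quick sanity check (a minimal NumPy sketch), a permutation matrix is orthogonal, so its inverse is simply its transpose:

```python
import numpy as np

# the same cyclic-shift permutation matrix as in the toy example above
P = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=np.float64)

# a permutation matrix is orthogonal, so P^-1 == P^T
assert np.allclose(np.linalg.inv(P), P.T)

# applying P and then P^T recovers the original vector
x = np.arange(4, dtype=np.float64)
assert np.allclose((x @ P) @ P.T, x)
```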
##### **How can it shuffle dimension? A practical example in Glow**
We can use a permutation matrix, a generalized permutation matrix (https://en.wikipedia.org/wiki/Generalized_permutation_matrix), a rotation matrix (https://en.wikipedia.org/wiki/Rotation_matrix), or any invertible matrix.
In a neural network, however, a randomly initialized matrix is not guaranteed to be invertible, so we have to construct an invertible matrix from the randomly initialized one.
We also need to make sure that the matrix remains invertible after its parameters are updated during model training.
Given these requirements, I like the idea in the original Glow paper (Eq. (10) in Kingma, D. P. & Dhariwal, P. Glow: Generative Flow with Invertible 1x1 Convolutions. in Proc. NIPS (2018)):
$ \boldsymbol{A} = \boldsymbol{P}\boldsymbol{L}(\boldsymbol{U} + \text{diag}(\boldsymbol{s}))
= \boldsymbol{P}\boldsymbol{L}\Big(\boldsymbol{U} + \text{sign}(\text{diag}(\boldsymbol{s}))\exp(\log|\text{diag}(\boldsymbol{s})|)\Big)$
where "P is a permutation matrix, L is a lower triangular matrix with ones on the diagonal, U is an upper triangular matrix with zeros on the diagonal, and s is a vector. ... In this parameterization, we initialize the parameters by first sampling a random rotation matrix W, then computing the corresponding value of P (which remains fixed) and the corresponding initial values of L and U and s (which are optimized)." (Kingma 2018)
The advantages:
1. Easy to invert, and the matrix remains invertible during training
2. Easy to compute the determinant of the Jacobian matrix
Let's show how it works:
```
# How to compose matrix A
# I will use scipy and numpy
import scipy.linalg
feat_dim = 4
# step1. create an initial matrix that is invertible (a unitary matrix)
# https://en.wikipedia.org/wiki/Unitary_matrix
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.qr.html
seed_mat = np.random.randn(feat_dim, feat_dim)
# use QR decomposition to get the rotation_mat, which is a unitary matrix
rotation_mat, _ = scipy.linalg.qr(seed_mat)
# step2. decompose it into
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lu.html
permute_mat, lower_mat, upper_mat = scipy.linalg.lu(rotation_mat)
# step3. deal with the diagonal line of lower and upper mat
u_mask = np.triu(np.ones_like(seed_mat), k=1)
d_mask = u_mask.T
eye_mat = np.eye(feat_dim)
# get the diag(s) from upper_mat
tmp_diag_line = upper_mat.diagonal().copy()
# decompose tmp_diag_line into sign(s) * exp(log(|s|)), this makes it easy to compute Determinant Jacobian
sign_tmp_diag_line = np.sign(tmp_diag_line)
logabs_tmp_diag_line = np.log(np.abs(tmp_diag_line))
# upper triangle mat
upper_mat_new = upper_mat * u_mask
print("Randomly initialized invertible matrix\n A = PL(U+diag(s)):")
table_API.print_table(rotation_mat, None, None, print_latex_table=False, print_format="3.3f")
print("P:")
table_API.print_table(permute_mat, None, None, print_latex_table=False, print_format="3.3f")
print("L:")
table_API.print_table(lower_mat, None, None, print_latex_table=False, print_format="3.3f")
print("U:")
table_API.print_table(upper_mat_new, None, None, print_latex_table=False, print_format="3.3f")
print("diag(s):")
table_API.print_table(np.diag(tmp_diag_line), None, None, print_latex_table=False, print_format="3.3f")
# You can test whether the decomposing is correct
A_computed = np.dot(np.dot(permute_mat, lower_mat), upper_mat_new + np.diag(tmp_diag_line) )
print("Verify the value of PL(U+diag(s)):")
table_API.print_table(A_computed, None, None, print_latex_table=False, print_format="3.3f")
print("\n\nIn Glow, the lower matrix L is manually modified so that its diagonal line is 1.")
print("This is merely a practical choice -- the determinant of the Jacobian matrix will only be determined by diag(s)")
# change lower triangle mat
lower_mat_new = lower_mat * d_mask + eye_mat
print("Modified L:")
table_API.print_table(lower_mat_new, None, None, print_latex_table=False, print_format="3.3f")
# Let's try to "shuffle" the input data
# We will use the modified L
# step1. always remember to compose the transformation matrix by
# P L (U + diag( sign(s)exp(log|s|)))
A_computed = np.dot(np.dot(permute_mat, lower_mat_new), upper_mat_new + np.diag(sign_tmp_diag_line * np.exp(logabs_tmp_diag_line)))
weight_mat = torch.tensor(A_computed, dtype=torch.float32)
print("Transformation matrix A:")
table_API.print_table(weight_mat.numpy(), None, None, print_latex_table=False, print_format="3.3f")
# create a data of shape (batch=1, length=3, dimension=4)
length = 3
input_data = torch.tensor([np.arange(length)+1, np.arange(length)*-1-1, np.arange(length)+10, np.arange(length)*-1-10], dtype=torch.float32).T
input_data = input_data.unsqueeze(0)
feat_dim = input_data.shape[-1]
# transform
with torch.no_grad():
output_data = torch.matmul(input_data, weight_mat)
from plot_tools import table_API
print("Input data x:")
table_API.print_table(input_data[0].numpy(), None, None, print_latex_table=False, print_format="3.1f")
print("Transformed data y=xA:")
table_API.print_table(output_data[0].numpy(), None, None, print_latex_table=False, print_format="3.3f")
# inverse transformation
with torch.no_grad():
weight_mat_inv = torch.inverse(weight_mat)
reversed_data = torch.matmul(output_data, weight_mat_inv)
print("Inverted transformation matrix A^-1:")
table_API.print_table(weight_mat_inv.numpy(), None, None, print_latex_table=False, print_format="3.3f")
print("Reverse transformed data x=yA^-1:")
table_API.print_table(reversed_data[0].numpy(), None, None, print_latex_table=False, print_format="3.1f")
```
It is harder to see how the dimensions are shuffled in this case, but the transformation mixes information across dimensions in the same way.
##### **How to compute the Determinant of Jacobian**
This has been explained in Table 1 of Glow paper (Kingma 2018):
For $\boldsymbol{y} = \boldsymbol{x}\boldsymbol{A}$ where $\boldsymbol{y}, \boldsymbol{x}\in\mathbb{R}^{B\times T\times D}$ and $\boldsymbol{A} = \boldsymbol{P}\boldsymbol{L}(\boldsymbol{U} + \text{diag}(\boldsymbol{s}))
= \boldsymbol{P}\boldsymbol{L}\Big(\boldsymbol{U} + \text{sign}(\text{diag}(\boldsymbol{s}))\exp(\log|\text{diag}(\boldsymbol{s})|)\Big)$
The log Determinant of Jacobian is
$B\cdot{T}\cdot\text{sum}(\log|\text{diag}(\boldsymbol{s})|)$
The $B$ and $T$ are there because the transformation is conducted for every time step and every data in the mini-batch. In other words, we are transforming $BT$ vectors simultaneously.
(In practice, we can ignore the $B$ if we assign the same value $T\cdot\text{sum}(\log|\text{diag}(\boldsymbol{s})|)$ to each data item in the mini-batch and sum the values later.)
```
data_factor = np.prod(input_data.shape[1:-1])
print("logDetJac is: ", data_factor * np.sum(logabs_tmp_diag_line))
```
##### **Pytorch API**
The PyTorch API for the Glow-style invertible 1x1 transformation is defined in `../sandbox/block_glow.py`.
It wraps the explanations above into a single module.
The official WaveGlow implementation uses a different flavor of invertible 1x1 conv from Glow:
* It simply computes the initial invertible matrix W through QR decomposition (https://pytorch.org/docs/stable/generated/torch.qr.html)
* It does NOT decompose W further
```
class Invertible1x1ConvWaveGlow(torch.nn.Module):
def __init__(self, feat_dim, flag_detjac=False):
"""
Args
----
feat_dim: int, dimension of the input feature,
flag_detjac: bool, whether compute the Log DetJacobian in forward()
input data should have shape (batch, length, feat_dim)
"""
super(Invertible1x1ConvWaveGlow, self).__init__()
torch.manual_seed(100)
with torch.no_grad():
# QR decomposition
W = torch.qr(torch.FloatTensor(feat_dim, feat_dim).normal_())[0]
# Ensure determinant is 1.0 not -1.0
if torch.det(W) < 0:
W[:,0] = -1*W[:,0]
# not necessary
W = W.transpose(0, 1)
self.weight = torch_nn.Parameter(W)
self.weight_inv = torch_nn.Parameter(W.clone())
self.weight_inv_flag = False
self.flag_detjac = flag_detjac
return
def forward(self, y, factor):
"""
input
-----
y: tensor, (batch, length, dim)
factor: int, the factor related to the mini-match size
"""
batch_size, length, feat_dim = y.size()
# Forward computation
log_det_W = length / factor * torch.logdet(self.weight)
z = torch.matmul(y, self.weight)
if self.flag_detjac:
return z, log_det_W
else:
return z
def reverse(self, x):
self.weight_inv.data = torch.inverse(self.weight.data)
self.weight_inv_flag = True
return torch.matmul(x, self.weight_inv)
```
In the forward() method above, I added a scaling factor as an argument.
The reason is that, although we compute ${T}\cdot\text{sum}(\log|\text{diag}(\boldsymbol{s})|)$ for each mini-batch,
$T$ may be quite large for speech data and the value may explode.
Since at the end of the training loop we may divide the total likelihood by the number of data elements in the mini-batch anyway, why not do the division inside each module in advance? For example, ${T}\cdot\text{sum}(\log|\text{diag}(\boldsymbol{s})|) / {T}$ is less likely to explode.
When training the model, I set the factor to `factor = np.prod([dim for dim in waveglow_input_data.shape])`
```
# create a data of shape (batch=1, length=3, dimension=4)
length = 3
input_data = torch.tensor([np.arange(length)+1, np.arange(length)*-1-1, np.arange(length)+10, np.arange(length)*-1-10], dtype=torch.float32).T
input_data = input_data.unsqueeze(0)
feat_dim = input_data.shape[-1]
m_inv1x1 = Invertible1x1ConvWaveGlow(feat_dim, flag_detjac=True)
with torch.no_grad():
transformed_data, logDetJac = m_inv1x1(input_data, 1)
recovered_data = m_inv1x1.reverse(transformed_data)
print("Input data x:")
table_API.print_table(input_data[0].numpy(), None, None, print_latex_table=False, print_format="3.3f")
print("Transformed data y=xA:")
table_API.print_table(transformed_data[0].numpy(), None, None, print_latex_table=False, print_format="3.3f")
print("Reverse transformed data x=yA^-1:")
table_API.print_table(recovered_data[0].numpy(), None, None, print_latex_table=False, print_format="3.3f")
print("Log Determinant Jacobian:\n", logDetJac.item())
print("\n\n")
```
#### 1.4.2 Bipartite affine transformation in WaveGlow
$\begin{align}
[\boldsymbol{y1}, \boldsymbol{y2}] &= \text{split}(\boldsymbol{y}) \\
[\log\boldsymbol{a}, \boldsymbol{b}] &= \text{NN}(\boldsymbol{y1}) \\
\boldsymbol{x2} &= (\boldsymbol{y2} + \boldsymbol{b})\odot\exp(\log\boldsymbol{a}) \\
\boldsymbol{x1} &= \boldsymbol{y1} \\
\boldsymbol{x} &= [\boldsymbol{x1}, \boldsymbol{x2}]
\end{align}$
Notes on affine transformation in WaveGlow:
* It is bipartite: input $\boldsymbol{y}$ of shape (B, T, P) is decomposed into $\boldsymbol{y1}$ (B, T, P/2) and $\boldsymbol{y2}$ (B, T, P/2)
* Affine transformation parameters are computed by $\text{NN}()$ with 8 WaveNet blocks
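Since the transformation is elementwise affine, the reverse direction and the log-determinant of the Jacobian follow directly from the equations above:
$\begin{align}
\boldsymbol{y2} &= \boldsymbol{x2}\odot\exp(-\log\boldsymbol{a}) - \boldsymbol{b} \\
\log|\det\boldsymbol{J}| &= \text{sum}(\log\boldsymbol{a})
\end{align}$
Note that $\boldsymbol{y1} = \boldsymbol{x1}$ passes through unchanged, so $\text{NN}(\boldsymbol{y1})$ can be recomputed exactly in the reverse direction.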
##### **Splitting the tensor**
We use `torch.chunk` https://pytorch.org/docs/stable/generated/torch.chunk.html to split a tensor (B, T, P) into a tensor (B, T, P/2) and another tensor (B, T, P/2)
```
# (B=1, T=2, P=4)
data = torch.randn([1, 2, 4])
# split along the last dimension
data1, data2 = data.chunk(2, -1)
print(data)
print(data1)
print(data2)
```
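Conversely, `torch.cat` along the same dimension exactly undoes `torch.chunk`, which is what the coupling layer relies on when concatenating $[\boldsymbol{y1}, \boldsymbol{x2}]$. A minimal sketch:

```python
import torch

data = torch.randn([1, 2, 4])
data1, data2 = data.chunk(2, -1)               # two tensors of shape (1, 2, 2)
recombined = torch.cat([data1, data2], dim=-1) # back to (1, 2, 4)
assert torch.equal(recombined, data)           # cat exactly inverts chunk
```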
##### **WaveNet block for WaveGlow**
The WaveNet block is explained in s3_demonstration_wavenet. For convenience, a special module is designed to wrap the fully-connected layers and the 8 WaveNet blocks.
Notice that the last FC layer in the WaveNet block is initialized with weight 0 and bias 0. This helps model training.
```python
tmp.weight.data.zero_()
tmp.bias.data.zero_()
```
In this notebook, we comment out these two lines for demonstration:
```
class WaveNetModuleForNonAR(torch_nn.Module):
"""WaveNetModuleWaveGlow
Cascade of multiple WaveNet blocks:
x -> ExpandDim -> conv1 -> gated -> res -> conv1 -> gated -> res ...
^ |
| v
cond skip
output = sum(skip_channels)
"""
def __init__(self, input_dim, cond_dim, out_dim, n_blocks,
gate_dim, res_ch, skip_ch, kernel_size=3):
super(WaveNetModuleForNonAR, self).__init__()
self.m_block_num = n_blocks
self.m_res_ch_dim = res_ch
self.m_skip_ch_dim = skip_ch
self.m_gate_dim = gate_dim
self.m_kernel_size = kernel_size
self.m_n_blocks = n_blocks
if self.m_gate_dim % 2 != 0:
self.m_gate_dim = self.m_gate_dim // 2 * 2
# input dimension expanding
tmp = torch_nn.Conv1d(input_dim, res_ch, 1)
self.l_expand = torch_nn.utils.weight_norm(tmp, name='weight')
# end dimension compressing
tmp = torch_nn.Conv1d(skip_ch, out_dim, 1)
# Here we comment out these two lines
#tmp.weight.data.zero_()
#tmp.bias.data.zero_()
self.l_compress = tmp
# dilated convolution and residual-skip-channel transformation
self.l_conv1 = []
self.l_resskip = []
for idx in range(n_blocks):
dilation = 2 ** idx
padding = int((kernel_size * dilation - dilation)/2)
conv1 = torch_nn.Conv1d(
res_ch, gate_dim, self.m_kernel_size,
dilation = dilation, padding=padding)
conv1 = torch_nn.utils.weight_norm(conv1, name='weight')
self.l_conv1.append(conv1)
if idx < n_blocks - 1:
outdim = self.m_res_ch_dim + self.m_skip_ch_dim
else:
outdim = self.m_skip_ch_dim
resskip = torch_nn.Conv1d(self.m_gate_dim//2, outdim, 1)
resskip = torch_nn.utils.weight_norm(resskip, name='weight')
self.l_resskip.append(resskip)
self.l_conv1 = torch_nn.ModuleList(self.l_conv1)
self.l_resskip = torch_nn.ModuleList(self.l_resskip)
# a single conditional feature transformation layer
cond_layer = torch_nn.Conv1d(cond_dim, gate_dim * n_blocks, 1)
cond_layer = torch_nn.utils.weight_norm(cond_layer, name='weight')
self.l_cond = cond_layer
return
def forward(self, x, cond):
"""
"""
# input feature expansion
# change the format to (batch, dimension, length)
x_expanded = self.l_expand(x.permute(0, 2, 1))
# condition feature transformation
cond_proc = self.l_cond(cond.permute(0, 2, 1))
# skip-channel accumulation
skip_ch_out = 0
conv_input = x_expanded
for idx, (l_conv1, l_resskip) in \
enumerate(zip(self.l_conv1, self.l_resskip)):
tmp_dim = idx * self.m_gate_dim
# condition feature of this layer
cond_tmp = cond_proc[:, tmp_dim : tmp_dim + self.m_gate_dim, :]
# conv transformed
conv_tmp = l_conv1(conv_input)
# gated activation
gated_tmp = cond_tmp + conv_tmp
t_part = torch.tanh(gated_tmp[:, :self.m_gate_dim//2, :])
s_part = torch.sigmoid(gated_tmp[:, self.m_gate_dim//2:, :])
gated_tmp = t_part * s_part
# transformation into skip / residual channels
resskip_tmp = l_resskip(gated_tmp)
# reschannel
if idx == self.m_n_blocks - 1:
skip_ch_out = skip_ch_out + resskip_tmp
else:
conv_input = conv_input + resskip_tmp[:, 0:self.m_res_ch_dim, :]
skip_ch_out = skip_ch_out + resskip_tmp[:, self.m_res_ch_dim:,:]
output = self.l_compress(skip_ch_out)
# permute back to (batch, length, dimension)
return output.permute(0, 2, 1)
# input data y of shape (B=2, T=100, P=16)
input_dim = 32
y = torch.randn([2, 100, input_dim])
# up-sampled condition features of shape (B=2, T=100, P=8)
cond_dim = 8
cond_feat = torch.randn([2, 100, cond_dim])
# we should get two tensors
# log a and b should be (B=2, T=100, P=16)
output_dim = input_dim // 2 * 2
# 8 wavenet layers
n_blocks = 8
# free to choose the dimension for gated activation, residual channel and skip channel
m_wavenetb = WaveNetModuleForNonAR(input_dim // 2, cond_dim, output_dim, n_blocks,
gate_dim=16, res_ch=16, skip_ch=16)
with torch.no_grad():
y1, y2 = y.chunk(2, -1)
loga, b = m_wavenetb(y1, cond_feat).chunk(2, -1)
print("Input feature y1 batch {:d}, length {:d}, dim {:d} ".format(*y1.shape))
print("Affine parameter log a batch {:d}, length {:d}, dim {:d} ".format(*loga.shape))
print("Affine parameter b batch {:d}, length {:d}, dim {:d} ".format(*b.shape))
```
##### **Affine transformation**
Given the parameters from the WaveNet block, it is straightforward to apply the transformation.
```
# forward (do the WaveNet block again)
with torch.no_grad():
y1, y2 = y.chunk(2, -1)
loga, b = m_wavenetb(y1, cond_feat).chunk(2, -1)
x2 = (y2 + b) * torch.exp(loga)
logdetjac = torch.sum(loga)
x = torch.cat([y1, x2], dim=-1)
# reverse
x1, x2 = x.chunk(2, -1)
loga, b = m_wavenetb(x1, cond_feat).chunk(2, -1)
y2 = x2 / torch.exp(loga) - b
y_reverse = torch.cat([x1, y2], dim=-1)
# The difference should be small
print(torch.std(y - y_reverse))
```
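To confirm that `torch.sum(loga)` really is the log-determinant of the coupling's Jacobian, here is a self-contained sketch in which a plain `torch.nn.Linear` stands in for the WaveNet block $\text{NN}()$. The dimensions are kept tiny so the full Jacobian can be formed with autograd; the Jacobian is block lower triangular, so its determinant is $\prod_i \exp(\log a_i)$:

```python
import torch

torch.manual_seed(0)
dim = 2                              # per-half dimension, kept tiny on purpose
net = torch.nn.Linear(dim, dim * 2)  # hypothetical stand-in for the WaveNet block

def coupling(y):
    y1, y2 = y.chunk(2, -1)
    loga, b = net(y1).chunk(2, -1)
    return torch.cat([y1, (y2 + b) * torch.exp(loga)], dim=-1), loga

y = torch.randn(2 * dim)
x, loga = coupling(y)

# form the full Jacobian with autograd and compare against the analytic value
J = torch.autograd.functional.jacobian(lambda v: coupling(v)[0], y)
assert torch.allclose(torch.logdet(J), loga.sum(), atol=1e-5)
print("logdet(J) =", loga.sum().item())
```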
#### 1.4.3 Wrap up for one flow step
Based on the above explanation, we can wrap one flow step into a single module.
Before that, we define the affine transformation as a module.
```
class AffineCouplingWaveGlow(torch_nn.Module):
"""AffineCouplingWaveGlow
AffineCoupling block in WaveGlow
Example:
m_tmp = AffineCouplingWaveGlow(10, 10, 8, 512, 3, True, True)
data1 = torch.randn([2, 100, 10])
cond = torch.randn([2, 100, 10])
output, log_det = m_tmp(data1, cond)
data1_re = m_tmp.reverse(output, cond)
torch.std(data1 - data1_re)
"""
def __init__(self, in_dim, cond_dim,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine=True, flag_detjac=False):
"""AffineCouplingWaveGlow(in_dim, cond_dim,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine=True, flag_detjac=False)
Args:
-----
in_dim: int, dim of input audio data (batch, length, in_dim)
cond_dim, int, dim of condition feature (batch, length, cond_dim)
wn_num_conv1d: int, number of dilated conv WaveNet blocks
wn_dim_channel: int, dime of the WaveNet residual & skip channels
wn_kernel_size: int, kernel size of the dilated convolution layers
flag_affine: bool, whether use affine or additive transformation?
default True
flag_detjac: bool, whether return the determinant of Jacobian,
default False
y -> split() -> y1, y2 -> concate([y1, (y2+bias) * scale])
When flag_affine == True, y1 -> H() -> scale, bias
When flag_affine == False, y1 -> H() -> bias, scale=1
Here, H() is WaveNet blocks (dilated conv + gated activation)
"""
super(AffineCouplingWaveGlow, self).__init__()
self.flag_affine = flag_affine
self.flag_detjac = flag_detjac
if in_dim % 2 > 0:
print("AffineCouplingWaveGlow(feat_dim), feat_dim is an odd number?!")
sys.exit(1)
if self.flag_affine:
# scale and bias
self.m_nn_outdim = in_dim // 2 * 2
else:
# only bias
self.m_nn_outdim = in_dim // 2
# WaveNet blocks (dilated conv, gated activation functions)
self.m_wn = WaveNetModuleForNonAR(
in_dim // 2, cond_dim, self.m_nn_outdim, wn_num_conv1d,
wn_dim_channel * 2, wn_dim_channel, wn_dim_channel,
wn_kernel_size
)
return
def _detjac(self, log_scale, factor=1):
# (batch, dim1, dim2, ..., feat_dim) -> (batch)
# sum over dim1, ... feat_dim
return nii_glow.sum_over_keep_batch(log_scale / factor)
def _nn_trans(self, y1, cond):
"""_nn_trans(self, y1, cond)
input
-----
y1: tensor, input feature, (batch, lengh, input_dim//2)
cond: tensor, condition feature, (batch, length, cond_dim)
output
------
scale: tensor, (batch, lengh, input_dim // 2)
bias: tensor, (batch, lengh, input_dim // 2)
log_scale: tensor, (batch, lengh, input_dim // 2)
Affine transformaiton can be done by scale * feature + bias
log_scale is used for det Jacobian computation
"""
y1_tmp = self.m_wn(y1, cond)
if self.flag_affine:
log_scale, bias = y1_tmp.chunk(2, -1)
scale = torch.exp(log_scale)
else:
bias = y1_tmp
scale = torch.ones_like(y1)
log_scale = torch.zeros_like(y1)
return scale, bias, log_scale
def forward(self, y, cond, factor=1):
"""AffineCouplingWaveGlow.forward(y, cond)
input
-----
y: tensor, input feature, (batch, lengh, input_dim)
cond: tensor, condition feature , (batch, lengh, cond_dim)
output
------
x: tensor, input feature, (batch, lengh, input_dim)
detjac: tensor, det of jacobian, (batch,)
y1, y2 = split(y)
scale, bias = WN(y1)
x2 = y2 * scale + bias or (y2 + bias) * scale
return [y1, x2]
"""
# split
y1, y2 = y.chunk(2, -1)
scale, bias, log_scale = self._nn_trans(y1, cond)
# transform
x1 = y1
x2 = (y2 + bias) * scale
# concatenate
x = torch.cat([x1, x2], dim=-1)
if self.flag_detjac:
return x, self._detjac(log_scale, factor)
else:
return x
def reverse(self, x, cond):
"""AffineCouplingWaveGlow.reverse(y, cond)
input
-----
x: tensor, input feature, (batch, lengh, input_dim)
cond: tensor, condition feature , (batch, lengh, cond_dim)
output
------
y: tensor, input feature, (batch, lengh, input_dim)
x1, x2 = split(x)
scale, bias = WN(x1)
y2 = x2 / scale - bias
return [x1, y2]
"""
# split
x1, x2 = x.chunk(2, -1)
# reverse transform
y1 = x1
scale, bias, log_scale = self._nn_trans(y1, cond)
y2 = x2 / scale - bias
return torch.cat([y1, y2], dim=-1)
```
Then one Flow step
```
class FlowStepWaveGlow(torch_nn.Module):
"""FlowStepWaveGlow
One flow step for waveglow
y -> invertible_1x1() -> AffineCoupling -> x
Example
m_tmp = FlowStepWaveGlow(10, 10, 8, 512, 3, flag_affine=True)
output, log_det = m_tmp(data1, cond)
data1_re = m_tmp.reverse(output, cond)
torch.std(data1 - data1_re)
"""
def __init__(self, in_dim, cond_dim,
wn_num_conv1d, wn_dim_channel, wn_kernel_size, flag_affine,
flag_affine_block_legacy=False):
"""FlowStepWaveGlow(in_dim, cond_dim,
wn_num_conv1d, wn_dim_channel, wn_kernel_size, flag_affine,
flag_affine_block_legacy=False)
Args
----
in_dim: int, input feature dim, (batch, length, in_dim)
cond_dim:, int, conditional feature dim, (batch, length, cond_dim)
wn_num_conv1d: int, number of 1Dconv WaveNet block in this flow step
wn_dim_channel: int, dim of the WaveNet residual and skip channels
wn_kernel_size: int, kernel size of the dilated convolution layers
flag_affine: bool, whether use affine or additive transformation?
default True
flag_affine_block_legacy, bool, whether use AffineCouplingWaveGlow or
AffineCouplingWaveGlow_legacy.
For wn_dim_channel and wn_kernel_size, see AffineCouplingWaveGlow
For flag_affine == False, scale will be 1.0
"""
super(FlowStepWaveGlow, self).__init__()
# Invertible transformation layer
#self.m_invtrans = nii_glow.InvertibleTrans(in_dim, flag_detjac=True)
self.m_invtrans = Invertible1x1ConvWaveGlow(in_dim, flag_detjac=True)
# Coupling layer
if flag_affine_block_legacy:
self.m_coupling = AffineCouplingWaveGlow_legacy(
in_dim, cond_dim, wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine, flag_detjac=True)
else:
self.m_coupling = AffineCouplingWaveGlow(
in_dim, cond_dim, wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine, flag_detjac=True)
return
def forward(self, y, cond, factor=1):
"""FlowStepWaveGlow.forward(y, cond, factor=1)
input
-----
y: tensor, input feature, (batch, lengh, input_dim)
cond: tensor, condition feature , (batch, lengh, cond_dim)
factor: int, this is used to divide the likelihood, default 1
if we directly sum all detjac, they will become very large
however, we cannot average them directly on y because y
may have a different shape from the actual data y
output
------
x: tensor, input feature, (batch, lengh, input_dim)
detjac: tensor, det of jacobian, (batch,)
"""
# 1x1 transform
x_tmp, log_det_1 = self.m_invtrans(y, factor)
# coupling
x_tmp, log_det_2 = self.m_coupling(x_tmp, cond, factor)
return x_tmp, log_det_1 + log_det_2
def reverse(self, x, cond):
"""FlowStepWaveGlow.reverse(y, cond)
input
-----
x: tensor, input feature, (batch, lengh, input_dim)
cond: tensor, condition feature , (batch, lengh, cond_dim)
output
------
y: tensor, input feature, (batch, lengh, input_dim)
"""
y_tmp = self.m_coupling.reverse(x, cond)
y_tmp = self.m_invtrans.reverse(y_tmp)
return y_tmp
# Try the example again
# input data y of shape (B=2, T=100, P=16)
input_dim = 32
y = torch.randn([2, 100, input_dim])
# up-sampled condition features of shape (B=2, T=100, P=8)
cond_dim = 8
cond_feat = torch.randn([2, 100, cond_dim])
# 8 wavenet layers
n_blocks = 8
# dimension of wavenet channels (same value for res, skip and gated channels)
n_wn_dim = 64
# kernel size of conv in wavenet
n_wn_kernel_size =3
#
m_flowstep = FlowStepWaveGlow(input_dim, cond_dim, n_blocks, n_wn_dim, n_wn_kernel_size, flag_affine=True)
with torch.no_grad():
# do the affine transformation
x, log_det = m_flowstep.forward(y, cond_feat)
# do the reverse transformation
y_reversed = m_flowstep.reverse(x, cond_feat)
print("Input y batch {:d}, length {:d}, dim {:d} ".format(*y.shape))
print("x = Affine(y) batch {:d}, length {:d}, dim {:d} ".format(*x.shape))
print("y = Affine^-1(x) batch {:d}, length {:d}, dim {:d} ".format(*y_reversed.shape))
print("Log-det-Jacobian: ", log_det)
print("Difference between y and Affine^(-1)(x) is: ", end="")
# the difference should be small
print(torch.std(y_reversed - y).item())
print("\n\n")
```
By running the example multiple times, you will see how dramatically the Log-det-Jacobian changes.
This is the reason to initialize the weight and bias of the last FC layer after the WaveNet blocks with zeros: the affine transformation will do nothing at the beginning of model training.
```python
tmp.weight.data.zero_()
tmp.bias.data.zero_()
```
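A minimal sketch of why zero initialization gives an identity coupling at the start of training: a zero-weight, zero-bias layer outputs all zeros, so $\log\boldsymbol{a}=0$ and $\boldsymbol{b}=0$ (the layer here is a hypothetical stand-in for the last FC layer of the WaveNet module):

```python
import torch

# hypothetical stand-in for the last FC layer of the WaveNet module
fc = torch.nn.Conv1d(8, 8, 1)
fc.weight.data.zero_()
fc.bias.data.zero_()

feat = torch.randn(1, 8, 5)       # (batch, channels, length)
out = fc(feat)                    # all zeros, regardless of the input
loga, b = out.chunk(2, 1)         # log a = 0 and b = 0
y2 = torch.randn(1, 4, 5)
x2 = (y2 + b) * torch.exp(loga)   # (y2 + 0) * exp(0) = y2
assert torch.allclose(x2, y2)     # identity transform at initialization
```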
### 1.5 WaveGlow Block
To recap, one WaveGlow block is like this:
```sh
. |-----------> log_detJac
3 | |---> early output z
| | (B, T/8, d)
=====================================================|==========
| WaveGlow block | | |
| -------------> + -----------> + ------------ + | |
| | ^ ^ ^ | |
| | | | | | |
| ----------- ----------- ----------- ----------- | | 4
1 --> |Flowstep1| -> |Flowstep2| -> |Flowstep3| -> |Flowstep4| ---|-> input to next block x
(B, T/8, P)| ----------- ----------- ----------- ----------- | (B, T/8, P-d)
| ^ ^ ^ ^ |
========|==============|==============|==============|==========
| | | |
---------------|------------------------------
2 (B, T/8, 8D)
```
where input is
1. output of previous block or squeezed waveform (if this is the 1st block), shape (B, T/8, P)
2. up-sampled and squeezed condition features, shape (B, T/8, 8D)
output is:
3. early output z, (B, T/8, d) and log_det|Jac| (scalar)
4. input to the next WaveGlow block (B, T/8, P-d)
```
class WaveGlowBlock(torch_nn.Module):
"""WaveGlowBlock
A WaveGlowBlock includes multiple steps of flow.
The Nvidia WaveGlow does not define a WaveGlowBlock but directly
defines 12 flow steps. However, after every 4 flow steps, two
dimensions of z are extracted (multi-scale approach).
It is thus not convenient to decide when to extract z.
Here, we define a WaveGlowBlock as the cascade of multiple flow
steps, and this WaveGlowBlock can extract two dimensions from
the output of the final flow step.
Example:
data1 = torch.randn([2, 10, 10])
cond = torch.randn([2, 10, 16])
m_block = WaveGlowBlock(10, 16, 5, 8, 512, 3)
x, z, log_det = m_block(data1, cond)
data_re = m_block.reverse(x, z, cond)
print(torch.std(data_re - data1))
"""
def __init__(self, in_dim, cond_dim, n_flow_steps,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine=True,
flag_split = False,
flag_final_block=False,
split_dim = 2,
flag_affine_block_legacy=False):
"""WaveGlowBlock(in_dim, cond_dim, n_flow_steps,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine=True, flag_split = False, split_dim = 2,
flag_affine_block_legacy=False)
Args
----
in_dim: int, input feature dim, (batch, length, in_dim)
cond_dim:, int, conditional feature dim, (batch, length, cond_dim)
n_flow_steps: int, number of flow steps in one block
wn_num_conv1d: int, number of dilated conv WaveNet blocks
wn_dim_channel: int, dim of the WaveNet residual and skip channels
wn_kernel_size: int, kernel size of the dilated convolution layers
flag_affine: bool, whether use affine or additive transformation?
default True
flag_split: bool, whether split output z for multi-scale structure
default True
flag_final_block: bool, whether this block is the final block
default False
split_dim: int, if flag_split==True, z[:, :, :split_dim] will be
extracted, z[:, :, split_dim:] can be used for the next
WaveGlowBlock
flag_affine_block_legacy, bool, whether use the legacy implementation
of wavenet-based affine transformaiton layer
default False.
For wn_dim_channel and wn_kernel_size, see AffineCouplingWaveGlow
For flag_affine, see AffineCouplingWaveGlow
"""
super(WaveGlowBlock, self).__init__()
tmp_flows = []
for i in range(n_flow_steps):
tmp_flows.append(
FlowStepWaveGlow(
in_dim, cond_dim,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine, flag_affine_block_legacy))
self.m_flows = torch_nn.ModuleList(tmp_flows)
self.flag_split = flag_split
self.flag_final_block = flag_final_block
self.split_dim = split_dim
if self.flag_split and self.flag_final_block:
print("WaveGlowBlock: flag_split and flag_final_block are True")
print("This is unexpected. Please check model definition")
sys.exit(1)
if self.flag_split and self.split_dim <= 0:
print("WaveGlowBlock: split_dim should be > 0")
sys.exit(1)
return
def forward(self, y, cond, factor=1):
"""x, z, log_detjac = WaveGlowBlock(y)
y -> H() -> [z, x], log_det_jacobian
H() consists of multiple flow steps (1x1conv + AffineCoupling)
input
-----
y: tensor, (batch, length, dim)
cond, tensor, (batch, length, cond_dim)
factor, None or int, this is used to divide the likelihood, default 1
output
------
log_detjac: tensor or scalar
if self.flag_split:
x: tensor, (batch, length, in_dim - split_dim),
z: tensor, (batch, length, split_dim),
else:
if self.flag_final_block:
x: None, no input to the next block
z: tensor, (batch, length, dim), for N(z; 0, I)
else:
x: tensor, (batch, length, dim),
z: None, no latent for N(z; 0, I) from this block
concate([x,z]) should have the same size as y
"""
# flows
log_detjac = 0
x_tmp = y
for l_flow in self.m_flows:
x_tmp, log_detjac_tmp = l_flow(x_tmp, cond, factor)
log_detjac = log_detjac + log_detjac_tmp
if self.flag_split:
z = x_tmp[:, :, :self.split_dim]
x = x_tmp[:, :, self.split_dim:]
else:
if self.flag_final_block:
z = x_tmp
x = None
else:
z = None
x = x_tmp
return x, z, log_detjac
def reverse(self, x, z, cond):
"""y = WaveGlowBlock.reverse(x, z, cond)
[z, x] -> H^{-1}() -> y
input
-----
if self.flag_split:
x: tensor, (batch, length, in_dim - split_dim),
z: tensor, (batch, length, split_dim),
else:
if self.flag_final_block:
x: None
z: tensor, (batch, length, in_dim)
else:
x: tensor, (batch, length, in_dim)
z: None
output
------
y: tensor, (batch, length, in_dim)
"""
if self.flag_split:
if x is None or z is None:
print("WaveGlowBlock.reverse: x and z should not be None")
sys.exit(1)
y_tmp = torch.cat([z, x], dim=-1)
else:
if self.flag_final_block:
if z is None:
print("WaveGlowBlock.reverse: z should not be None")
sys.exit(1)
y_tmp = z
else:
if x is None:
print("WaveGlowBlock.reverse: x should not be None")
sys.exit(1)
y_tmp = x
for l_flow in self.m_flows[::-1]:
# affine
y_tmp = l_flow.reverse(y_tmp, cond)
return y_tmp
```
We simply wrap the definition around the flow-step module.
To make this module configurable, I considered three cases for splitting the early output:
* No early output: `self.flag_split=False`. This is not used in this notebook
* Early output, and this is not the last WaveGlow block: `self.flag_split=True` and `self.flag_final_block=False`
* This is the last WaveGlow block: `self.flag_split=False` and `self.flag_final_block=True`; the entire output becomes the latent z
| | 1 input | 2 condition | 3 early output z | 4 output latent x | flag_split | flag_final_block
|-----------------|---|---|---|---|---|---|
| WaveGlow Block1 | (B, T/8, 8) | (B, T/8, 8D) | (B, T/8, 2) | (B, T/8, 6) | True | False
| WaveGlow Block2 | (B, T/8, 6) | (B, T/8, 8D) | (B, T/8, 2) | (B, T/8, 4) | True | False
| WaveGlow Block3 | (B, T/8, 4) | (B, T/8, 8D) | (B, T/8, 4) | - | False | True
```
# Try one example
# squeezed waveform has 8 dimensions
input_dim = 8
y = torch.randn([2, 100, input_dim])
# up-sampled condition features of shape (B=2, T=100, P=8)
cond_dim = 8
cond_feat = torch.randn([2, 100, cond_dim])
# 4 flow steps
n_flowsteps = 4
n_wn_num = 8
n_wn_dim = 64
n_wn_kernel_size = 3
n_early_output_dim = 2
# m_block1
m_block1 = WaveGlowBlock(input_dim, cond_dim, n_flowsteps, n_wn_num, n_wn_dim, n_wn_kernel_size, flag_affine=True,
flag_split = True, flag_final_block = False, split_dim=n_early_output_dim)
# m_block2
input_new_dim = input_dim - n_early_output_dim
m_block2 = WaveGlowBlock(input_new_dim, cond_dim, n_flowsteps, n_wn_num, n_wn_dim, n_wn_kernel_size, flag_affine=True,
flag_split = True, flag_final_block = False, split_dim=n_early_output_dim)
# m_block3
input_new_dim = input_new_dim - n_early_output_dim
m_block3 = WaveGlowBlock(input_new_dim, cond_dim, n_flowsteps, n_wn_num, n_wn_dim, n_wn_kernel_size, flag_affine=True,
flag_split = False, flag_final_block = True, split_dim=n_early_output_dim)
with torch.no_grad():
x1, z1, log_det1 = m_block1.forward(y, cond_feat)
x2, z2, log_det2 = m_block2.forward(x1, cond_feat)
x3, z3, log_det3 = m_block3.forward(x2, cond_feat)
# do the reverse transformation
x2_reverse = m_block3.reverse(x3, z3, cond_feat)
x1_reverse = m_block2.reverse(x2_reverse, z2, cond_feat)
y_reverse = m_block1.reverse(x1_reverse, z1, cond_feat)
print("Input y batch {:d}, length {:d}, dim {:d} ".format(*y.shape))
print("\nx1 from block1 batch {:d}, length {:d}, dim {:d} ".format(*x1.shape))
print("z1 from block1 batch {:d}, length {:d}, dim {:d} ".format(*z1.shape))
print("\nx2 from block2 batch {:d}, length {:d}, dim {:d} ".format(*x2.shape))
print("z2 from block3 batch {:d}, length {:d}, dim {:d} ".format(*z2.shape))
if x3 is None:
print("\nx3 from block3 is None")
print("z3 from block3 batch {:d}, length {:d}, dim {:d} ".format(*z3.shape))
print("\nDifference between y and reversed y is: ", end="")
# the difference should be small
print(torch.std(y_reverse - y).item())
print("\n\n")
```
### 1.6 WaveGlow in one Module
We can now wrap everything in a single Module.
Example code is included in the docstring below; feel free to try it.
```
class WaveGlow(torch_nn.Module):
"""WaveGlow
Example
cond_dim = 4
upsample = 80
num_blocks = 4
num_flows_inblock = 5
wn_num_conv1d = 8
wn_dim_channel = 512
wn_kernel_size = 3
# waveforms of length 1600
wave1 = torch.randn([2, 1600, 1])
# condition feature
cond = torch.randn([2, 1600//upsample, cond_dim])
# model
m_model = nii_waveglow.WaveGlow(
cond_dim, upsample,
num_blocks, num_flows_inblock, wn_num_conv1d,
wn_dim_channel, wn_kernel_size)
# forward computation, neg_log = -(logp + log_detjac)
# neg_log.backward() can be used for backward
z, neg_log, logp, log_detjac = m_model(wave1, cond)
# recover the signal
wave2 = m_model.reverse(z, cond)
# check difference between original wave and recovered wave
print(torch.std(wave1 - wave2))
"""
def __init__(self, cond_dim, upsample_rate,
num_blocks, num_flows_inblock,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine = True,
early_hid_dim=2,
flag_affine_block_legacy=False):
"""WaveGlow(cond_dim, upsample_rate,
num_blocks, num_flows_inblock,
wn_num_conv1d, wn_dim_channel, wn_kernel_size,
flag_affine = True,
early_hid_dim=2,
flag_affine_block_legacy=False)
Args
----
cond_dim:, int, conditional feature dim, (batch, length, cond_dim)
upsample_rate: int, up-sampling rate for condition features
num_blocks: int, number of WaveGlowBlocks
num_flows_inblock: int, number of flow steps in one WaveGlowBlock
wn_num_conv1d: int, number of 1Dconv WaveNet block in this flow step
wn_dim_channel: int, dim of the WaveNet residual and skip channels
wn_kernel_size: int, kernel size of the dilated convolution layers
flag_affine: bool, whether use affine or additive transformation?
default True
early_hid_dim: int, dimension for z_1, z_2 ... , default 2
          flag_affine_block_legacy, bool, whether to use the legacy implementation
                  of the wavenet-based affine transformation layer.
                  default False. The difference is in the WaveNet part.
Please configure AffineCouplingWaveGlow and
AffineCouplingWaveGlow_legacy
This model defines:
cond -> upsample/squeeze -> | ------> | --------> |
v v v
y -> squeeze -> WaveGlowBlock -> WGBlock ... WGBlock -> z
|-> z_1 |-> z_2
z_1, z_2, ... are the extracted z from a multi-scale flow structure
        the concatenation [z_1, z_2, z] is expected to be white Gaussian noise
If early_hid_dim == 0, z_1 and z_2 will not be extracted
"""
super(WaveGlow, self).__init__()
# input is assumed to be waveform
self.m_input_dim = 1
self.m_early_hid_dim = early_hid_dim
# squeeze layer
self.m_squeeze = SqueezeForWaveGlow()
# up-sampling layer
#self.m_upsample = nii_nn.UpSampleLayer(cond_dim, upsample_rate, True)
self.m_upsample = upsampleByTransConv(cond_dim, upsample_rate)
# wavenet-based flow blocks
# squeezed input dimension
squeezed_in_dim = self.m_input_dim * self.m_squeeze.get_squeeze_factor()
# squeezed condition feature dimension
squeezed_cond_dim = cond_dim * self.m_squeeze.get_squeeze_factor()
# save the dimension for get_z_noises
self.m_feat_dim = []
# define blocks
tmp_squeezed_in_dim = squeezed_in_dim
tmp_flow_blocks = []
for i in range(num_blocks):
# if this is not the last block and early_hid_dim >0
flag_split = (i < (num_blocks-1)) and early_hid_dim > 0
flag_final_block = i == (num_blocks-1)
# save the dimension for get_z_noises
if flag_final_block:
self.m_feat_dim.append(tmp_squeezed_in_dim)
else:
self.m_feat_dim.append(early_hid_dim if flag_split else 0)
tmp_flow_blocks.append(
WaveGlowBlock(
tmp_squeezed_in_dim, squeezed_cond_dim, num_flows_inblock,
wn_num_conv1d, wn_dim_channel, wn_kernel_size, flag_affine,
flag_split = flag_split, flag_final_block=flag_final_block,
split_dim = early_hid_dim,
flag_affine_block_legacy = flag_affine_block_legacy))
# multi-scale approach will extract a few dimensions for next flow
# thus, input dimension to the next block will be this
tmp_squeezed_in_dim = tmp_squeezed_in_dim - early_hid_dim
self.m_flowblocks = torch_nn.ModuleList(tmp_flow_blocks)
# done
return
def _normal_lh(self, noise):
# likelihood of normal distribution on the given noise
return -0.5 * np.log(2 * np.pi) - 0.5 * noise ** 2
def forward(self, y, cond):
"""z, neg_logp_y, logp_z, logdet = WaveGlow.forward(y, cond)
cond -> upsample/squeeze -> | ------> | --------> |
v v v
y -> squeeze -> WaveGlowBlock -> WGBlock ... WGBlock -> z
|-> z_1 |-> z_2
input
-----
y: tensor, (batch, waveform_length, 1)
        cond: tensor, (batch, cond_length, cond_dim)
output
------
        z: list of tensors, [z_1, z_2, ... , z] in the figure above
        neg_logp_y: scalar, - log p(y)
        logp_z: scalar, log N(z), summed over one data sequence but averaged
                over the batch
        logdet: scalar, log |det dH(.)/dy|, summed over one data sequence
                but averaged over the batch
If self.early_hid_dim == 0, z_1, z_2 ... will be None
"""
        # Rather than summing the likelihood and dividing it by the number of
        # data points in the final step, we divide each flow step's likelihood
        # by this factor and sum the scaled likelihoods.
        # The two methods are equivalent, but the latter may prevent numerical
        # overflow of the likelihood value for long sentences
factor = np.prod([dim for dim in y.shape])
# waveform squeeze (batch, squeezed_length, squeezed_dim)
y_squeezed = self.m_squeeze(y)
squeezed_dim = y_squeezed.shape[-1]
# condition feature upsampling and squeeze
# (batch, squeezed_length, squeezed_dim_cond)
cond_up_squeezed = self.m_squeeze(self.m_upsample(cond))
# flows
z_bags = []
log_detjac = 0
log_pz = 0
x_tmp = y_squeezed
for m_block in self.m_flowblocks:
x_tmp, z_tmp, log_detjac_tmp = m_block(
x_tmp, cond_up_squeezed, factor)
# accumulate log det jacobian
log_detjac += log_detjac_tmp
# compute N(z; 0, I)
# save z_tmp (even if it is None)
z_bags.append(z_tmp)
# accumulate log_N(z; 0, I) only if it is valid
if z_tmp is not None:
log_pz += nii_glow.sum_over_keep_batch2(
self._normal_lh(z_tmp), factor)
# average over batch and data points
neg_logp_y = -(log_pz + log_detjac).sum()
return z_bags, neg_logp_y, \
log_pz.sum(), log_detjac.sum()
def reverse(self, z_bags, cond):
"""y = WaveGlow.reverse(z_bags, cond)
cond -> upsample/squeeze -> | ------> | --------> |
v v v
y <- unsqueeze <- WaveGlowBlock -> WGBlock ... WGBlock <- z
|<- z_1 |<- z_2
input
-----
z: list of tensors, [z_1, z_2, ... ,z ] in figure above
        cond: tensor, (batch, cond_length, cond_dim)
output
------
y: tensor, (batch, waveform_length, 1)
If self.early_hid_dim == 0, z_1, z_2 ... should be None
"""
# condition feature upsampling and squeeze
# (batch, squeezed_length, squeezed_dim_cond)
cond_up_sqe = self.m_squeeze(self.m_upsample(cond))
# initial
y_tmp = None
for z, m_block in zip(z_bags[::-1], self.m_flowblocks[::-1]):
y_tmp = m_block.reverse(y_tmp, z, cond_up_sqe)
y = self.m_squeeze.reverse(y_tmp)
return y
def get_z_noises(self, length, noise_std=0.7, batchsize=1):
"""z_bags = WaveGlow.get_z_noises(length, noise_std=0.7, batchsize=1)
Return a list of random noises for random sampling
input
-----
length: int, length of target waveform (without squeeze)
noise_std: float, std of Gaussian noise, default 0.7
batchsize: int, batch size of this random data, default 1
output
------
z_bags: list of tensors
Shape of tensor in z_bags is decided by WaveGlow configuration.
WaveGlow.reverse(z_bags, cond) can be used to generate waveform
"""
squeeze_length = self.m_squeeze.get_expected_squeeze_length(length)
device = next(self.parameters()).device
z_bags = []
# generate the z for each WaveGlowBlock
for feat_dim in self.m_feat_dim:
if feat_dim is not None and feat_dim > 0:
z_tmp = torch.randn(
[batchsize, squeeze_length, feat_dim],
dtype=nii_io_conf.d_dtype,
device=device)
z_bags.append(z_tmp * noise_std)
else:
z_bags.append(None)
return z_bags
```
In the above API, the `get_z_noises` method samples a bag of random noises $(z_1, z_2, z_3)$ given a required waveform length.
This is handy during generation: we don't need to keep track of the dimensions of the early outputs z; the API handles that for us.
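To make those shapes concrete: for the configuration used in this notebook (3 blocks, squeeze factor 8, early output dim 2, squeezed input dim 8), the z shapes that `get_z_noises` returns can be reproduced with a few lines of plain Python (illustrative only):

```python
def z_shapes(length, squeeze=8, squeezed_dim=8, early_dim=2, num_blocks=3):
    # a waveform of `length` samples is squeezed to length // squeeze frames;
    # each non-final block contributes early_dim dims, the final block the rest
    squeezed_len = length // squeeze
    final_dim = squeezed_dim - early_dim * (num_blocks - 1)
    return [(squeezed_len, early_dim)] * (num_blocks - 1) + [(squeezed_len, final_dim)]

print(z_shapes(1600))  # -> [(200, 2), (200, 2), (200, 4)]
```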
## 2. Using pre-trained model for waveform generation
Here we load the pre-trained WaveGlow model and generate a sample.
### 2.1 Meta-Model wrapper
For my pytorch script, I further wrap `WaveGlow` with a meta-model Module. This is convenient for methods unrelated to any specific model, for example, input and output feature normalization.
This is merely a practical choice. Nothing new for WaveGlow.
```
# just for convenience
class PrjConfig:
def __init__(self, wav_samp_rate, up_sample_rate):
self.wav_samp_rate = wav_samp_rate
self.input_reso = [up_sample_rate]
#
class Model(torch_nn.Module):
""" Model definition
"""
def __init__(self, in_dim, out_dim, args, prj_conf, mean_std=None):
super(Model, self).__init__()
#################
## must-have
#################
# mean std of input and output
in_m, in_s, out_m, out_s = self.prepare_mean_std(in_dim,out_dim,\
args, mean_std)
self.input_mean = torch_nn.Parameter(in_m, requires_grad=False)
self.input_std = torch_nn.Parameter(in_s, requires_grad=False)
self.output_mean = torch_nn.Parameter(out_m, requires_grad=False)
self.output_std = torch_nn.Parameter(out_s, requires_grad=False)
self.input_dim = in_dim
self.output_dim = out_dim
# a flag for debugging (by default False)
self.model_debug = False
#################
## model config
#################
# waveform sampling rate
self.sample_rate = prj_conf.wav_samp_rate
# up-sample rate
self.up_sample = prj_conf.input_reso[0]
# configuration for WaveGlow
self.num_waveglow_blocks = 3
self.num_flow_steps_perblock = 4
self.num_wn_blocks_perflow = 8
self.num_wn_channel_size = 256
self.num_wn_conv_kernel = 3
self.flag_affine = True
self.early_z_feature_dim = 2
self.flag_affine_legacy_implementation = False
self.m_waveglow = WaveGlow(
in_dim, self.up_sample,
self.num_waveglow_blocks,
self.num_flow_steps_perblock,
self.num_wn_blocks_perflow,
self.num_wn_channel_size,
self.num_wn_conv_kernel,
self.flag_affine,
self.early_z_feature_dim,
self.flag_affine_legacy_implementation)
# done
return
def prepare_mean_std(self, in_dim, out_dim, args, data_mean_std=None):
"""
"""
if data_mean_std is not None:
in_m = torch.from_numpy(data_mean_std[0])
in_s = torch.from_numpy(data_mean_std[1])
out_m = torch.from_numpy(data_mean_std[2])
out_s = torch.from_numpy(data_mean_std[3])
if in_m.shape[0] != in_dim or in_s.shape[0] != in_dim:
print("Input dim: {:d}".format(in_dim))
print("Mean dim: {:d}".format(in_m.shape[0]))
print("Std dim: {:d}".format(in_s.shape[0]))
print("Input dimension incompatible")
sys.exit(1)
if out_m.shape[0] != out_dim or out_s.shape[0] != out_dim:
print("Output dim: {:d}".format(out_dim))
print("Mean dim: {:d}".format(out_m.shape[0]))
print("Std dim: {:d}".format(out_s.shape[0]))
print("Output dimension incompatible")
sys.exit(1)
else:
in_m = torch.zeros([in_dim])
in_s = torch.ones([in_dim])
out_m = torch.zeros([out_dim])
out_s = torch.ones([out_dim])
return in_m, in_s, out_m, out_s
def normalize_input(self, x):
""" normalizing the input data
"""
return (x - self.input_mean) / self.input_std
def normalize_target(self, y):
""" normalizing the target data
"""
return (y - self.output_mean) / self.output_std
def denormalize_output(self, y):
""" denormalizing the generated output from network
"""
return y * self.output_std + self.output_mean
def forward(self, input_feat, wav):
"""loss = forward(self, input_feat, wav)
input
-----
input_feat: tensor, input features (batchsize, length1, input_dim)
          wav: tensor, target waveform (batchsize, length2, 1)
               it should be a raw waveform, float-valued, between (-1, 1)
output
------
loss: tensor / scalar
Note: returned loss can be directly used as the loss value
no need to write Loss()
"""
        # normalize the condition features
#input_feat = self.normalize_input(input_feat)
# compute
z_bags, neg_logp, logp_z, log_detjac = self.m_waveglow(wav, input_feat)
return neg_logp
def inference(self, input_feat):
"""wav = inference(mels)
input
-----
input_feat: tensor, input features (batchsize, length1, input_dim)
output
------
wav: tensor, target waveform (batchsize, length2, 1)
Note: length2 will be = length1 * self.up_sample
"""
#normalize input
#input_feat = self.normalize_input(input_feat)
length = input_feat.shape[1] * self.up_sample
noise = self.m_waveglow.get_z_noises(length, noise_std=0.6,
batchsize=input_feat.shape[0])
output = self.m_waveglow.reverse(noise, input_feat)
return output
# sampling rate of waveform (Hz)
sampling_rate = 16000
# up-sampling rate for the pre-trained model is 80
upsample_rate = 80
prj_config = PrjConfig(sampling_rate, upsample_rate)
# input feature dim (80 dimension Mel-spec)
mel_dim = 80
input_dim = mel_dim
# output dimension = 1 for waveform
output_dim = 1
# declare the model
m_waveglow = Model(input_dim, output_dim, None, prj_config)
# load pre-trained model
device=torch.device("cpu")
m_waveglow.to(device, dtype=torch.float32)
pretrained_file = "data_models/pre_trained_waveglow/__pre_trained/trained_network.pt"
if os.path.isfile(pretrained_file):
checkpoint = torch.load(pretrained_file, map_location="cpu")
m_waveglow.load_state_dict(checkpoint)
else:
print("Cannot find pre-trained model {:s}".format(pretrained_file))
print("Please run 00_download_model.sh and download the pre-trained model")
```
### 2.2 Load waveform, extract feature, and generate waveform
Let's use CPU.
#### Extract input feature
```
# If you want to do copy-synthesis
# The tools to extract are in data_models/scripts
import sys
import os
import data_models.scripts.sub_get_mel as nii_mel_tk
sampling_rate = 16000
fftl_n = 1024
frame_length = 400
# This must be compatible with the WaveGlow up-sampling rate configuration
frame_shift = 80
frame_length_in_ms = int(frame_length*1000/sampling_rate)
frame_shift_in_ms = int(frame_shift*1000/sampling_rate)
# use waveform from this folder
input_waveform_path = "data_models/acoustic_features/hn_nsf/slt_arctic_b0474.wav"
input_mel = nii_mel_tk.get_melsp(input_waveform_path, fftl=fftl_n, fl=frame_length, fs=frame_shift)
# If you have problem running the code above, please just load the pre-extracted features
#input_mel = tool_lib.read_raw_mat("data_models/acoustic_features/hn_nsf/slt_arctic_b0474.mfbsp", mel_dim)
# trim the mel for quick demonstration
#input_mel = input_mel[:290, :]
print("Input Mel shape:" + str(input_mel.shape))
# compose the input tensor
input_tensor = torch.tensor(input_mel, dtype=torch.float32).unsqueeze(0)
print("Input data tensor shape:" + str(input_tensor.shape))
```
#### Generate waveform
This may be slow because WaveGlow requires a huge amount of computation.
Please be patient! If it is too slow, reduce the length of the input mel-spectrogram further.
```
with torch.no_grad():
output_waveform = m_waveglow.inference(input_tensor)
import IPython.display
IPython.display.Audio(output_waveform[0, :, 0].numpy(), rate=sampling_rate, normalize=False)
# plot the spectrogram of the generated waveform
plot_API.plot_API(output_waveform[0, :, 0].numpy(), plot_lib.plot_spec, 'single')
```
You may also try other waveforms in `data_models/acoustic_features/hn_nsf`.
Note that the waveforms used to train this WaveGlow model have normalized amplitude. The normalization tool is sv56 (https://github.com/openitu/STL). If you try part 3 of this tutorial and run the script `../project/01-nsf/*/00_demo.sh`, you will download the normalized waveforms of CMU-arctic. There is also a script to run sv56 in `../project/01-nsf/DATA/cmu-arctic-data-set/scripts/wav`, but please compile sv56 and sox first.
## Final note
The project to train a new WaveGlow model on the CMU-arctic database is available in `../project/05-nn-vocoders/waveglow`. The pre-trained model was trained on CMU-arctic data, which may not be sufficient for WaveGlow; you may try the script on other databases. Furthermore, no post-processing is included in this notebook.
There is another variant, `../project/05-nn-vocoders/waveglow-2`, which uses a legacy version of the WaveNet block for WaveGlow. You may try that model as well.
That's all
---
```
from google.colab import auth
auth.authenticate_user()
!gcloud config set project clever-aleph-203411
# Download the file from a given Google Cloud Storage bucket.
!gsutil cp gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/datasetv5.h5 dataset.h5
!mkdir models
import keras
from keras.models import Sequential, model_from_json, load_model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.utils import np_utils
from keras.callbacks import EarlyStopping, ModelCheckpoint, Callback, ReduceLROnPlateau
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import numpy as np
import pickle
import h5py
# Option 1: use the pre-split train/test sets stored in the HDF5 file
f = h5py.File('dataset.h5', 'r')
x_train = f['x_train'][:]
y_train = f['y_train'][:]
x_test = f['x_test'][:-1300]
y_test = f['y_test'][:-1300]
f.close()
# Option 2: load the full dataset and create the split ourselves
with h5py.File('dataset.h5', 'r') as f:
    x = f['x'][:]
    y = f['y'][:]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)
del x
del y
batch_size = 32
epochs = 300
class SaveModel(Callback):
def __init__(self, period=10):
self.filepath = "/content/models/dbp-{epoch:02d}-{val_loss:.2f}.h5"
self.gcsfile = "gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/models/v12/dbp-{epoch:02d}-{val_loss:.2f}.h5"
self.loss = None
self.old_epoch = 0
self.period = period
def on_train_begin(self, logs={}):
return
def on_train_end(self, logs={}):
return
def on_epoch_begin(self, epoch, logs={}):
return
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if self.loss is None:
            self.loss = logs.get('val_loss')
            return
        # save only when val_loss has improved and at least `period` epochs
        # have passed since the last save
        if self.loss > logs.get('val_loss') and (epoch - self.old_epoch) > self.period:
            self.old_epoch = epoch
            filepath = self.filepath.format(epoch=epoch + 1, **logs)
            gcsfile = self.gcsfile.format(epoch=epoch + 1, **logs)
            self.model.save_weights(filepath, overwrite=True)
            !gsutil cp {filepath} {gcsfile}
        self.loss = logs.get('val_loss')
        return
# First model variant (note: overwritten by the second definition below)
model = Sequential()
model.add(Conv2D(4, (3, 4), input_shape=(48, 640, 1), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Conv2D(8, (3, 4), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(16, (3, 8), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Conv2D(32, (3, 8), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 16), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Conv2D(128, (3, 16), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(2, activation='linear'))
# Second, smaller model variant; this definition replaces the one above
model = Sequential()
model.add(Conv2D(4, (3, 2), input_shape=(48, 640, 1), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(8, (3, 4), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(16, (3, 8), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 16), activation='relu', padding='same'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones'))
model.add(Dense(2, activation='linear'))
model.summary()
# initiate optimizer
opt = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
#opt = keras.optimizers.RMSprop(lr=0.001)
model.compile(loss='mse', optimizer=opt, metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=10, restore_best_weights=True)
#sm = SaveModel()
rp = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=4, verbose=1, mode='min', min_delta=0.0001, cooldown=0, min_lr=0)
model_json = model.to_json()
with open("/content/models/model.json", "w") as json_file:
json_file.write(model_json)
!gsutil cp '/content/models/model.json' 'gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/models/v12-bn/sbp_model-zp.json'
print(model_json)
```
The convolutional layers use filter counts in ascending order.
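As a rough hand check of the tensor size reaching `Flatten` (an assumption-based computation, not Keras output): with `'same'` padding, only the 2x2 max-pool layers shrink the spatial dimensions.

```python
def flat_size(height, width, last_channels, num_pools):
    # each 2x2 max-pool halves both spatial dimensions (integer division)
    for _ in range(num_pools):
        height, width = height // 2, width // 2
    return height * width * last_channels

# second model: four pools, last conv has 32 filters
print(flat_size(48, 640, 32, 4))   # -> 3840
# first model: three pools, last conv has 128 filters
print(flat_size(48, 640, 128, 3))  # -> 61440
```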
```
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.2, shuffle=True, verbose=1, callbacks=[es, rp])
# note: pickling history.history (a plain dict) is more robust than pickling
# the whole History object, which keeps a reference to the model
with open('models/history.pickle', 'wb') as f:
    pickle.dump(history, f)
!gsutil cp '/content/models/history.pickle' 'gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/models/v04/history01.pickle'
model.save('/content/models/best.h5')
!gsutil cp '/content/models/best.h5' 'gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/models/v04/model01.h5'
!gsutil cp -r gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/models/v02 /content
!gsutil cp -r gs://colab-bucket-f1618a8a-3e8c-11e9-ac44-0242ac1c0002/f1.npy /content
model = load_model('/content/v02/model02.h5')
h = model.predict(x_test)
model.evaluate(x_test, y_test)
err = h - y_test.reshape((1946, 2))
err.shape
import matplotlib.pyplot as plt
plt.hist(err[:,1], bins=np.arange(min(err[:,1]), max(err[:,1]) + 1, 1))
plt.show()
import matplotlib.pyplot as plt
plt.hist(err[:,0], bins=np.arange(min(err[:,0]), max(err[:,0]) + 1, 1))
plt.show()
print(model.to_json())
import matplotlib.pyplot as plt
plt.hist(y_test[:,1], bins=np.arange(min(y_test[:,1]), max(y_test[:,1]) + 1, 1))
plt.show()
model.summary()
import random
k = []
for i in range(10):
k.append(random.randint(1,1200))
print(k)
xx = []
for i in k:
xx.append(x_test[i])
import numpy as np
xx = np.asarray(xx)
yy = model.predict(xx)
y = []
for i in k:
y.append(y_test[i])
a = yy-y
y
a = [1,2,6,-9,8,-2,-2,3,1,-1]
b = [1,3,-7,-4,2,5,-1,4,-2,2]
np.mean(np.absolute(b))
import matplotlib.pyplot as plt
plt.hist(b, bins=np.arange(-10, 10 + 1, 1))
plt.show()
import pickle
a = pickle.load(open('/content/v02/history02.pickle', 'rb'))
a.history
plt.plot(a.history['val_mean_absolute_error'])
plt.show()
import numpy as np
f = np.load('/content/f1.npy')
# reshape the loaded feature array into a single-sample batch
data = f.reshape((1, 48, 640, 1))
model.predict(data)
```
---
## 3.4 Controlling CartPole with Q-Learning
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
# Declare a function that renders frames as an animation
# Reference URL: http://nbviewer.jupyter.org/github/patrickmineault
# /xcorr-notebooks/blob/master/Render%20OpenAI%20gym%20as%20GIF.ipynb
from JSAnimation.IPython_display import display_animation
from matplotlib import animation
from IPython.display import display
def display_frames_as_gif(frames):
"""
Displays a list of frames as a gif, with controls
"""
plt.figure(figsize=(frames[0].shape[1]/72.0, frames[0].shape[0]/72.0),
dpi=72)
patch = plt.imshow(frames[0])
plt.axis('off')
def animate(i):
patch.set_data(frames[i])
anim = animation.FuncAnimation(plt.gcf(), animate, frames=len(frames),
interval=50)
    anim.save('movie_cartpole.mp4')  # file name for saving the animation
display(display_animation(anim, default_mode='loop'))
# constants
ENV = 'CartPole-v0'  # name of the task to use
NUM_DIZITIZED = 6  # number of discrete bins for each state variable
GAMMA = 0.99  # time discount factor
ETA = 0.5  # learning rate
MAX_STEPS = 200  # number of steps per episode
NUM_EPISODES = 1000  # maximum number of episodes
class Agent:
    '''Agent class for CartPole; it represents the cart with the pole'''
def __init__(self, num_states, num_actions):
        self.brain = Brain(num_states, num_actions)  # create the brain the agent uses to decide actions
def update_Q_function(self, observation, action, reward, observation_next):
        '''Update the Q-function'''
self.brain.update_Q_table(
observation, action, reward, observation_next)
def get_action(self, observation, step):
        '''Decide the action'''
action = self.brain.decide_action(observation, step)
return action
class Brain:
    '''Class serving as the agent's brain; it performs Q-learning'''
def __init__(self, num_states, num_actions):
        self.num_actions = num_actions  # number of CartPole actions (push left or right) = 2
        # Create the Q-table. Rows: states digitized into bins^(4 variables); columns: actions
self.q_table = np.random.uniform(low=0, high=1, size=(
NUM_DIZITIZED**num_states, num_actions))
def bins(self, clip_min, clip_max, num):
        '''Compute the thresholds for digitizing an observed continuous state into discrete values'''
return np.linspace(clip_min, clip_max, num + 1)[1:-1]
def digitize_state(self, observation):
        '''Convert the observed state into a discrete value'''
cart_pos, cart_v, pole_angle, pole_v = observation
digitized = [
np.digitize(cart_pos, bins=self.bins(-2.4, 2.4, NUM_DIZITIZED)),
np.digitize(cart_v, bins=self.bins(-3.0, 3.0, NUM_DIZITIZED)),
np.digitize(pole_angle, bins=self.bins(-0.5, 0.5, NUM_DIZITIZED)),
np.digitize(pole_v, bins=self.bins(-2.0, 2.0, NUM_DIZITIZED))
]
return sum([x * (NUM_DIZITIZED**i) for i, x in enumerate(digitized)])
    def update_Q_table(self, observation, action, reward, observation_next):
        '''Update the Q-table via Q-learning'''
        state = self.digitize_state(observation)  # discretize the current state
        state_next = self.digitize_state(observation_next)  # discretize the next state
Max_Q_next = max(self.q_table[state_next][:])
self.q_table[state, action] = self.q_table[state, action] + \
ETA * (reward + GAMMA * Max_Q_next - self.q_table[state, action])
    def decide_action(self, observation, episode):
        '''Epsilon-greedy: gradually adopt only the optimal action'''
state = self.digitize_state(observation)
epsilon = 0.5 * (1 / (episode + 1))
if epsilon <= np.random.uniform(0, 1):
action = np.argmax(self.q_table[state][:])
else:
            action = np.random.choice(self.num_actions)  # randomly return action 0 or 1
return action
class Environment:
    '''Class for the environment in which CartPole runs'''
    def __init__(self):
        self.env = gym.make(ENV)  # set the task to run
        num_states = self.env.observation_space.shape[0]  # number of state variables = 4
        num_actions = self.env.action_space.n  # number of CartPole actions (push left or right) = 2
        self.agent = Agent(num_states, num_actions)  # create the agent that acts in this environment
    def run(self):
        '''Run the episodes'''
        complete_episodes = 0  # number of consecutive episodes standing for 195+ steps
        is_episode_final = False  # flag marking the final (rendered) episode
        frames = []  # stores images for the animation
        for episode in range(NUM_EPISODES):  # loop over episodes
            observation = self.env.reset()  # reset the environment
            for step in range(MAX_STEPS):  # loop within one episode
                if is_episode_final is True:  # in the final episode, store each frame
                    frames.append(self.env.render(mode='rgb_array'))
                # decide the action
                action = self.agent.get_action(observation, episode)
                # execute action a_t and obtain s_{t+1} and r_{t+1}
                observation_next, _, done, _ = self.env.step(
                    action)  # reward and info are unused, so discard them with _
                # assign the reward
                if done:  # done becomes True after 200 steps or when the pole tilts beyond a threshold
                    if step < 195:
                        reward = -1  # penalty of -1 for falling over mid-episode
                        complete_episodes = 0  # reset the count of consecutive successes
                    else:
                        reward = 1  # reward of 1 for staying upright until the end
                        complete_episodes += 1  # extend the success streak
                else:
                    reward = 0  # no reward at intermediate steps
                # update the Q-function using the next state observation_next
                self.agent.update_Q_function(
                    observation, action, reward, observation_next)
                # update the observation
                observation = observation_next
                # end-of-episode handling
                if done:
                    print('{0} Episode: Finished after {1} time steps'.format(
                        episode, step + 1))
                    break
            if is_episode_final is True:  # after the final episode, save and show the animation
                display_frames_as_gif(frames)
                break
            if complete_episodes >= 10:  # after 10 consecutive successes
                print('Succeeded 10 episodes in a row')
                is_episode_final = True  # render the next episode as the final one
# main
cartpole_env = Environment()
cartpole_env.run()
```
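The exploration schedule above, `epsilon = 0.5 * (1 / (episode + 1))`, starts at 0.5 and decays toward zero, so early episodes explore heavily while later ones almost always exploit. A minimal standalone sketch of that epsilon-greedy rule (the Q-row here is a made-up two-action example, not CartPole's):

```python
import numpy as np

def epsilon_greedy(q_row, episode, num_actions=2, rng=np.random):
    """Pick an action from one row of a Q-table with a decaying exploration rate."""
    epsilon = 0.5 * (1 / (episode + 1))  # 0.5, 0.25, 0.167, ... -> 0
    if epsilon <= rng.uniform(0, 1):
        return int(np.argmax(q_row))       # exploit: highest-Q action
    return int(rng.choice(num_actions))    # explore: uniformly random action

q_row = np.array([0.1, 0.9])  # hypothetical Q-values for one state
# By episode 1000, epsilon is about 0.0005, so the greedy action dominates.
actions = [epsilon_greedy(q_row, episode=1000) for _ in range(100)]
```

Note that the schedule anneals with the episode index, not the step index, so exploration stays constant within an episode.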
# Jupyter Notebook to Merge COVID Related Data From Multiple Data Sources
#### _Work done by Nepal Poverty Team, The World Bank_
## Data Sources:
1. [Google Community Mobility Reports](https://www.google.com/covid19/mobility/)
2. [The Oxford COVID-19 Government Response Tracker](https://www.bsg.ox.ac.uk/research/research-projects/coronavirus-government-response-tracker)
3. [Our World in Data](https://ourworldindata.org/coronavirus)
4. [World Bank's list of economies](https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups)
We used Python 3 to produce this Jupyter notebook, which shows the data cleaning and merging steps.
## Setup
Running this notebook requires Jupyter; either Jupyter Notebook or JupyterLab can be installed on the system. In addition, two Python packages -- pycountry and pandas -- are required.
### Jupyter Software Installation
https://jupyter.org/install
### pycountry Package Installation
https://pypi.org/project/pycountry/
### pandas Package Installation
https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html
After all the dependencies are installed, the notebook can be opened in Jupyter and run.
## Imports
```
import time
import datetime
import requests
import pycountry
import pandas as pd
```
## Save the column names of latest CSV file in a list
```
latest_CSV_file = 'GCMR_OWID_OxCGRT_WB_1593014927.5399244.csv'
columns = pd.read_csv(latest_CSV_file, nrows=1).columns.tolist()
drop_columns = set(['OXCGRT_Date', 'OXCGRT_CountryCode', 'OWID_iso_code', 'OWID_date', 'GCMR_country_region_code', 'GCMR_date'])
```
## Data extraction from URLs
```
# getting data from the web URLs
google_url = "https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv"
google_data = pd.read_csv(google_url)
print("Google mobility data fetched.")
owid_url = 'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv'
owid_data = pd.read_csv(owid_url)
print("OWID data fetched.")
oxford_url = 'https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/master/data/OxCGRT_latest.csv'
oxford_data = pd.read_csv(oxford_url)
print("OxCGRT data fetched.")
```
## Data cleaning
The country region code for Namibia is missing in the Google mobility data (pandas parses Namibia's ISO code `NA` as a missing value on load), so we add it back. Similarly, we add `INTL` as the country code for the `International` rows, which hold numbers not attributed to any country.
```
# Restore 'NA' as country_region_code for Namibia (read_csv parsed it as missing)
google_data.loc[google_data[google_data['country_region'] == 'Namibia'].index, 'country_region_code'] = "NA"
assert google_data['country_region_code'].isnull().sum() == 0
# Add INTL as international ISO code for owid_data
owid_data.loc[owid_data[owid_data['location'] == 'International'].index, 'iso_code'] = 'INTL'
assert owid_data['iso_code'].isnull().sum() == 0
# also assert oxford_data does not have any null country codes
assert oxford_data['CountryCode'].isnull().sum() == 0
```
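The Namibia quirk is worth dwelling on: by default, `pandas.read_csv` treats the string `NA` (Namibia's ISO alpha-2 code) as a missing value, which is why the code vanishes on load. A small sketch demonstrating this, and the `keep_default_na` option that avoids it:

```python
import io
import pandas as pd

csv = "country,code\nNamibia,NA\nNepal,NP\n"

# Default parsing: 'NA' becomes NaN
df = pd.read_csv(io.StringIO(csv))
print(df['code'].isnull().sum())  # 1 -- Namibia's code was lost

# Tell pandas not to treat 'NA' (and similar tokens) as missing
df_safe = pd.read_csv(io.StringIO(csv), keep_default_na=False)
print(df_safe.loc[0, 'code'])  # 'NA'
```

Using `keep_default_na=False` up front would remove the need for the manual fix, at the cost of having to handle genuinely missing values yourself.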
Let's prepend the column names with `GCMR_`, `OWID_`, and `OXCGRT_` for the Google mobility data, OWID data, and Oxford policy tracker data respectively. This helps us distinguish the source of each column.
```
# GCMR for Google Community Mobility Report
google_data.columns = ['GCMR_' + i for i in google_data.columns]
# OWID for Our World in Data
owid_data.columns = ['OWID_' + i for i in owid_data.columns]
# OXCGRT for Oxford COVID-19 Government Response Tracker
oxford_data.columns = ['OXCGRT_' + i for i in oxford_data.columns]
```
### Column names validation
```
# raise an error if any column has been renamed or a new column has been introduced in any of the datasets
# such changes should be fixed manually by inspecting the differing or new column names
assert set(google_data.columns).union(set(owid_data.columns)).union(set(oxford_data.columns)) - set(columns).union(drop_columns) == set()
google_data.shape, owid_data.shape, oxford_data.shape
```
Let's convert the dates in the Oxford policy tracker data (integers like `20200415`) to the compatible `YYYY-MM-DD` format.
```
oxford_data['OXCGRT_Date'] = oxford_data['OXCGRT_Date'].apply(lambda x: str(x)).apply(lambda x: x[:4] + '-' + x[4:6] + '-' + x[6:])
```
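The slicing above turns OxCGRT's `YYYYMMDD` integers into ISO date strings. An equivalent sketch using pandas' own date parsing, which additionally validates that each value really is a date (the sample values are illustrative):

```python
import pandas as pd

dates = pd.Series([20200415, 20200101])

# String slicing, as in the notebook
sliced = dates.apply(str).apply(lambda x: x[:4] + '-' + x[4:6] + '-' + x[6:])

# Equivalent via pandas' datetime parsing (raises on malformed values)
parsed = pd.to_datetime(dates, format='%Y%m%d').dt.strftime('%Y-%m-%d')

print(sliced.tolist())  # ['2020-04-15', '2020-01-01']
```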
## Outer merging of Oxford data, OWID data, and Google mobility data on country and date
```
new_df = pd.merge(oxford_data, owid_data, how='outer', left_on=['OXCGRT_CountryCode', 'OXCGRT_Date'], right_on = ['OWID_iso_code', 'OWID_date'])
new_df.shape
```
Let's create a function <i>get_a_or_b</i> which gets either <i>a</i> or <i>b</i>, depending on which value is non-null.
```
def get_a_or_b(row, a, b):
row = row.fillna('')
if row[a]:
return row[a]
elif row[b]:
return row[b]
```
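Note that `get_a_or_b` prefers `a` when both values are present and implicitly returns `None` when both are missing -- effectively a row-wise "coalesce". A tiny demonstration on a toy frame (column names are illustrative; pandas' `Series.combine_first` offers a column-wise equivalent):

```python
import pandas as pd

def get_a_or_b(row, a, b):
    row = row.fillna('')
    if row[a]:
        return row[a]
    elif row[b]:
        return row[b]

df = pd.DataFrame({'x': ['NPL', None, None], 'y': [None, 'IND', None]})
merged = df.apply(get_a_or_b, args=('x', 'y'), axis=1)
print(merged.tolist())  # ['NPL', 'IND', None]
```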
Apply <i>get_a_or_b</i> to ISO codes and dates and save them in the two columns, `iso_code` and `date`. It helps to get the non-null columns, `iso_code` and `date`.
```
new_df['iso_code'] = new_df.apply(get_a_or_b, args=('OXCGRT_CountryCode', 'OWID_iso_code'), axis=1)
new_df['date'] = new_df.apply(get_a_or_b, args=('OXCGRT_Date', 'OWID_date'), axis=1)
```
Now, let's delete the columns we applied the function to.
```
new_df.drop(['OXCGRT_Date', 'OXCGRT_CountryCode', 'OWID_iso_code', 'OWID_date'], axis=1, inplace=True)
```
We see two different sets of country codes being used across the datasets -- two-character and three-character country codes (example: NP and NPL for Nepal). Let's bring uniformity by converting the two-character codes to three-character codes using the `pycountry` Python package.
```
google_data['GCMR_country_region_code'] = google_data['GCMR_country_region_code'].apply(lambda x: pycountry.countries.get(alpha_2=x).alpha_3)
```
Let's now outer merge the above merged data with google data on country code and date.
```
final_df = pd.merge(google_data, new_df, how='outer', left_on=['GCMR_country_region_code', 'GCMR_date'], right_on = ['iso_code', 'date'])
```
Apply <i>get_a_or_b</i> to ISO codes and dates and assign them in the two columns, `iso_code` and `date`. It helps to get the non-null columns, `iso_code` and `date` -- same operation done as above.
```
final_df['iso_code'] = final_df.apply(get_a_or_b, args=('GCMR_country_region_code', 'iso_code'), axis=1)
final_df['date'] = final_df.apply(get_a_or_b, args=('GCMR_date', 'date'), axis=1)
```
Now, let's delete the columns we applied the function to.
```
final_df.drop(['GCMR_country_region_code', 'GCMR_date'], axis=1, inplace=True)
```
Similarly, let's have one single column `Country` representing the country each data row belongs to. As done above, we apply the <i>get_a_or_b</i> function twice.
```
final_df['Country'] = final_df.apply(get_a_or_b, args=('GCMR_country_region', 'OXCGRT_CountryName'), axis=1)
final_df['Country'] = final_df.apply(get_a_or_b, args=('Country', 'OWID_location'), axis=1)
```
## Merge with WB's `List of Economies` data
```
economies = pd.read_excel('CLASS (1).xls', sheet_name='List of economies')
```
### Preprocess the data
```
economies.columns = economies.iloc[3, :]
economies = economies.iloc[5:223, :]
economies = economies[['Economy', 'Code', 'Region', 'Income group', 'Lending category', 'Other']]
economies['Lending category'] = economies['Lending category'].replace(to_replace='\.\.', value='', regex=True)
```
Prepend the columns with `WB_` to identify the columns coming from WB's `List of Economies` data.
```
economies.columns = ['WB_' + i for i in economies.columns]
```
### Merge `final_df` and WB's economies data on country codes.
```
merged_df = pd.merge(final_df, economies, how='outer', left_on=['iso_code'], right_on = ['WB_Code'])
```
Apply the same function as above and assign the country name information in `Country` column.
```
merged_df['Country'] = merged_df.apply(get_a_or_b, args=('Country', 'WB_Economy'), axis=1)
```
## Add timestamp
```
merged_df['Timestamp (UTC)'] = str(datetime.datetime.utcnow())
```
## Export to CSV file
```
merged_df.to_csv('~/OneDrive/WB/COVID/GCMR_OWID_OxCGRT_WB_{}.csv'.format(int(time.time())), index=False)
```
# Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️"
You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
Let's get started! Run the following cell to load the packages you are going to use.
```
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
```
## 1 - Baseline model: Emojifier-V1
### 1.1 - Dataset EMOJISET
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence
<img src="images/data_set.png" style="width:700px;height:300px;">
<caption><center> **Figure 1**: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
```
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
```
Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change `index` to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
```
index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
```
### 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width:900px;height:300px;">
<caption><center> **Figure 2**: Baseline model (Emojifier-V1).</center></caption>
</center>
The input of the model is a string corresponding to a sentence (e.g. "I love you"). In the code, the output will be a probability vector of shape (1,5), which you then pass into an argmax layer to extract the index of the most likely emoji output.
To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" of shape $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using the next code snippet. Here, `Y_oh` stands for "Y-one-hot" in the variable names `Y_oh_train` and `Y_oh_test`:
```
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
```
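One-hot encoding with $C$ classes can be done in a single line by indexing into an identity matrix -- the same `np.eye` trick this notebook uses later for a shape check. A sketch of what `convert_to_one_hot` plausibly does (its actual implementation lives in `emo_utils`):

```python
import numpy as np

def convert_to_one_hot_sketch(Y, C):
    # Row i of eye(C) is the one-hot vector for class i
    return np.eye(C)[Y.reshape(-1)]

Y = np.array([0, 3, 1])
print(convert_to_one_hot_sketch(Y, C=5))
# [[1. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0.]
#  [0. 1. 0. 0. 0.]]
```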
Let's see what `convert_to_one_hot()` did. Feel free to change `index` to print out different values.
```
index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
```
All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
### 1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the `word_to_vec_map`, which contains all the vector representations.
```
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `word_to_index`: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- `index_to_word`: dictionary mapping from indices to their corresponding words in the vocabulary
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
Run the following cell to check if it works.
```
word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
```
**Exercise**: Implement `sentence_to_avg()`. You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. `X.lower()` and `X.split()` might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values.
```
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg / len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
```
**Expected Output**:
<table>
<tr>
<td>
**avg= **
</td>
<td>
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
</td>
</tr>
</table>
#### Model
You now have all the pieces to finish implementing the `model()` function. After using `sentence_to_avg()` you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters.
**Exercise**: Implement the `model()` function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:
$$ z^{(i)} = W . avg^{(i)} + b$$
$$ a^{(i)} = softmax(z^{(i)})$$
$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$
It is possible to come up with a more efficient vectorized implementation, but since we are using a for-loop to convert the sentences one at a time into the $avg^{(i)}$ representation anyway, let's not bother this time.
We have provided you with a function `softmax()`.
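The forward pass and cost above can be sketched in plain numpy. This uses an illustrative numerically-stable softmax; the course's own `softmax()` helper may differ slightly, and `avg` here is a random stand-in for a sentence's averaged GloVe vector:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

n_y, n_h = 5, 50
rng = np.random.RandomState(0)
W = rng.randn(n_y, n_h) / np.sqrt(n_h)   # Xavier-style initialization
b = np.zeros((n_y,))
avg = rng.randn(n_h)                     # stand-in averaged word vector
Y_oh = np.eye(n_y)[2]                    # true label: class 2

z = np.dot(W, avg) + b                   # z = W . avg + b
a = softmax(z)                           # a = softmax(z)
loss = -np.sum(Y_oh * np.log(a))         # L = -sum_k Yoh_k * log(a_k)
```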
```
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 4, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.matmul(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(Y_oh[i] * np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
```
Run the next cell to train your model and learn the softmax parameters (W,b).
```
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
```
**Expected Output** (on a subset of iterations):
<table>
<tr>
<td>
**Epoch: 0**
</td>
<td>
cost = 1.95204988128
</td>
<td>
Accuracy: 0.348484848485
</td>
</tr>
<tr>
<td>
**Epoch: 100**
</td>
<td>
cost = 0.0797181872601
</td>
<td>
Accuracy: 0.931818181818
</td>
</tr>
<tr>
<td>
**Epoch: 200**
</td>
<td>
cost = 0.0445636924368
</td>
<td>
Accuracy: 0.954545454545
</td>
</tr>
<tr>
<td>
**Epoch: 300**
</td>
<td>
cost = 0.0343226737879
</td>
<td>
Accuracy: 0.969696969697
</td>
</tr>
</table>
Great! Your model has pretty high accuracy on the training set. Let's now see how it does on the test set.
### 1.4 - Examining test set performance
```
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
```
**Expected Output**:
<table>
<tr>
<td>
**Train set accuracy**
</td>
<td>
97.7
</td>
</tr>
<tr>
<td>
**Test set accuracy**
</td>
<td>
85.7
</td>
</tr>
</table>
Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.
In the training set, the algorithm saw the sentence "*I love you*" with the label ❤️. You can check however that the word "adore" does not appear in the training set. Nonetheless, let's see what happens if you write "*I adore you*."
```
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
```
Amazing! Because *adore* has a similar embedding as *love*, the algorithm has generalized correctly even to a word it has never seen before. Words such as *heart*, *dear*, *beloved* or *adore* have embedding vectors similar to *love*, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
```
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
```
<font color='blue'>
**What you should remember from this part**:
- Even with only 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors give you.
- Emojify-V1 will perform poorly on sentences such as *"This movie is not good and not enjoyable"* because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.
## 2 - Emojifier-V2: Using LSTMs in Keras
Let's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.
Run the following cell to load the Keras packages.
```
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
```
### 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement:
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> **Figure 3**: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>
### 2.2 Keras and mini-batching
In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.
The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentence longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
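The padding scheme just described can be sketched in a few lines (a hand-rolled version for illustration; Keras ships `keras.preprocessing.sequence.pad_sequences` for the same job, and the word indices below are hypothetical):

```python
def pad_index_sequences(sequences, max_len):
    """Right-pad each list of word indices with 0s up to max_len, truncating longer ones."""
    padded = []
    for seq in sequences:
        seq = seq[:max_len]                          # truncate overly long sentences
        padded.append(seq + [0] * (max_len - len(seq)))
    return padded

# 'i love you' as hypothetical word indices, padded to length 5
print(pad_index_sequences([[41, 173, 392], [7, 8, 9, 10, 11, 12]], max_len=5))
# [[41, 173, 392, 0, 0], [7, 8, 9, 10, 11]]
```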
### 2.3 - The Embedding layer
In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an [Embedding()](https://keras.io/layers/embeddings/) layer in Keras, initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer.
The `Embedding()` layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.
<img src="images/embedding1.png" style="width:700px;height:250px;">
<caption><center> **Figure 4**: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of `max_len=5`. The final dimension of the representation is `(2,max_len,50)` because the word embeddings we are using are 50 dimensional. </center></caption>
The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.
**Exercise**: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
```
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
        # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
sentence_words = X[i].lower().split()
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j + 1
### END CODE HERE ###
return X_indices
```
Run the following cell to check what `sentences_to_indices()` does, and check your results.
```
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
```
**Expected Output**:
<table>
<tr>
<td>
**X1 =**
</td>
<td>
['funny lol' 'lets play baseball' 'food is ready for you']
</td>
</tr>
<tr>
<td>
**X1_indices =**
</td>
<td>
[[ 155345. 225122. 0. 0. 0.] <br>
[ 220930. 286375. 151266. 0. 0.] <br>
[ 151204. 192973. 302254. 151349. 394475.]]
</td>
</tr>
</table>
Let's build the `Embedding()` layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of `sentences_to_indices()` to it as an input, and the `Embedding()` layer will return the word embeddings for a sentence.
**Exercise**: Implement `pretrained_embedding_layer()`. You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from `word_to_vec_map`.
3. Define Keras embedding layer. Use [Embedding()](https://keras.io/layers/embeddings/). Be sure to make this layer non-trainable, by setting `trainable = False` when calling `Embedding()`. If you were to set `trainable = True`, then it will allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix
```
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix = np.zeros((vocab_len, emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
emb_matrix[index, :] = word_to_vec_map[word]
    # Define the Keras embedding layer with the correct input/output sizes and make it non-trainable by setting trainable=False when calling Embedding(...).
embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
```
**Expected Output**:
<table>
<tr>
<td>
**weights[0][1][3] =**
</td>
<td>
-0.3403
</td>
</tr>
</table>
### 2.4 - Building the Emojifier-V2
Let's now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> **Figure 3**: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>
**Exercise:** Implement `Emojify_V2()`, which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (`m`, `max_len`, ) defined by `input_shape`. It should output a softmax probability vector of shape (`m`, `C = 5`). You may need `Input(shape = ..., dtype = '...')`, [LSTM()](https://keras.io/layers/recurrent/#lstm), [Dropout()](https://keras.io/layers/core/#dropout), [Dense()](https://keras.io/layers/core/#dense), and [Activation()](https://keras.io/activations/).
```
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the Emojify-v2 model's graph.

    Arguments:
    input_shape -- shape of the input, usually (max_len,)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    model -- a model instance in Keras
    """
    ### START CODE HERE ###
    # Define sentence_indices as the input of the graph; it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(input_shape, dtype='int32')
    # Create the embedding layer pretrained with GloVe vectors (≈1 line)
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
    # Propagate sentence_indices through your embedding layer; you get back the embeddings
    embeddings = embedding_layer(sentence_indices)
    # Propagate the embeddings through an LSTM layer with a 128-dimensional hidden state.
    # Be careful: the returned output should be a batch of sequences.
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with a 128-dimensional hidden state.
    # Be careful: the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer to get back a batch of 5-dimensional vectors.
    X = Dense(5)(X)
    # Add a softmax activation
    X = Activation('softmax')(X)
    # Create the Model instance which converts sentence_indices into X.
    model = Model(inputs=sentence_indices, outputs=X)
    ### END CODE HERE ###
    return model
```
Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose `max_len = 10`. You should see your architecture: it uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable and the remaining 223,877 are trainable. Because our vocabulary has 400,001 words (with valid indices from 0 to 400,000), there are 400,001\*50 = 20,000,050 non-trainable parameters.
```
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
```
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, the `adam` optimizer and `['accuracy']` metrics:
```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
It's time to train your model. Your Emojifier-V2 `model` takes as input an array of shape (`m`, `max_len`) and outputs probability vectors of shape (`m`, `number of classes`). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
```
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
```
Fit the Keras model on `X_train_indices` and `Y_train_oh`. We will use `epochs = 50` and `batch_size = 32`.
```
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
```
Your model should perform close to **100% accuracy** on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
```
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
```
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
```
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
    num = np.argmax(pred[i])
    if num != Y_test[i]:
        print('Expected emoji:' + label_to_emoji(Y_test[i]) + ' prediction: ' + X_test[i] + label_to_emoji(num).strip())
```
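Beyond printing individual mislabelled sentences, it can help to summarize errors per class. This is not part of the assignment — a minimal NumPy sketch, with made-up toy labels standing in for `Y_test` and `np.argmax(pred, axis=1)`:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count how often each true label (rows) is predicted as each class (columns)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# toy labels for illustration only
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 1, 1, 2, 1])
print(confusion_matrix(y_true, y_pred, 3))
```

Rows are true classes and columns are predictions, so off-diagonal entries show which emoji classes get confused with each other.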
Now you can try it on your own example. Write your own sentence below.
```
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
```
Previously, the Emojify-V1 model did not correctly label "not feeling happy," but our implementation of Emojify-V2 got it right. (Keras' outputs are slightly random each time, so you may not have obtained the same result.) The current model still isn't very robust at understanding negation (like "not happy") because the training set is small and so doesn't have a lot of examples of negation. But if the training set were larger, the LSTM model would be much better than the Emojify-V1 model at understanding such complex sentences.
### Congratulations!
You have completed this notebook! ❤️❤️❤️
<font color='blue'>
**What you should remember**:
- If you have an NLP task where the training set is small, using word embeddings can help your algorithm significantly. Word embeddings allow your model to work on words in the test set that may not even have appeared in your training set.
- Training sequence models in Keras (and in most other deep learning frameworks) requires a few important details:
- To use mini-batches, the sequences need to be padded so that all the examples in a mini-batch have the same length.
- An `Embedding()` layer can be initialized with pretrained values. These values can be either fixed or trained further on your dataset. If however your labeled dataset is small, it's usually not worth trying to train a large pre-trained set of embeddings.
- `LSTM()` has a flag called `return_sequences` to decide if you would like to return every hidden states or only the last one.
- You can use `Dropout()` right after `LSTM()` to regularize your network.
Congratulations on finishing this assignment and building an Emojifier. We hope you're happy with what you've accomplished in this notebook!
# 😀😀😀😀😀😀
## Acknowledgments
Thanks to Alison Darcy and the Woebot team for their advice on the creation of this assignment. Woebot is a chatbot friend that is ready to speak with you 24/7. As part of Woebot's technology, it uses word embeddings to understand the emotions of what you say. You can play with it by going to http://woebot.io
<img src="images/woebot.png" style="width:600px;height:300px;">
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
##Histogram
A histogram is an accurate representation of the distribution of numerical data.
Example:
Data $x = [1,2,3,4,2,3,4,3,4,4]$
Number of bins $= 4$
```
x = np.array([1,2,3,4,2,3,4,3,4,4])
bins = np.linspace(1,4,4)
freq = np.array([1,2,3,4])  # count of each value in x
no_samples = 10
plt.bar(bins,freq)
#PDF
plt.bar(bins,freq/(no_samples))
```
##<font color='red'>1.Create hist function to plot PDF.</font>
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def hist(x, no_bins):
    xmin = min(x)
    xmax = max(x)
    binsi = np.linspace(xmin, xmax, no_bins)
    freq = np.zeros(no_bins)
    for y in x:
        for i in range(0, no_bins-1):
            # include the left edge of the first bin so that xmin itself is counted
            if (binsi[i] < y <= binsi[i+1]) or (i == 0 and y == binsi[0]):
                freq[i] = freq[i] + 1
                break
    plt.figure()
    sz = np.size(x)
    print(np.sum(freq))
    factor = sz * (binsi[1] - binsi[0])  # normalize so the bars integrate to 1
    plt.bar(binsi, freq/factor)
    return binsi, freq/factor
import numpy as np
x= np.random.normal(0,1,1000)
bins=20
[bins,freq]=hist(x,bins)
```
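As a sanity check on the handwritten `hist` function, NumPy's built-in `np.histogram` with `density=True` produces bar heights whose total area is exactly 1 — the defining property of an empirical PDF. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0, 1, 1000)

# density=True normalizes the counts so that sum(height * bin_width) == 1
heights, edges = np.histogram(samples, bins=20, density=True)
area = np.sum(heights * np.diff(edges))
print(area)  # 1.0 (up to floating-point rounding)
```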
#### Gaussian random variable
The probability density of the normal distribution is
$$f(x|\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
where,
$\mu$ is the mean or expectation of the distribution (and also its median and mode),
$\sigma$ is the standard deviation, and
$\sigma ^{2}$ is the variance.
The range of the variable is approximately $\mu-3\sigma$ to $\mu+3\sigma$.
##<font color='red'>2. Plot Gaussian PDF using above formula with $\mu=0$ and $\sigma=1$.</font>
```
mu = 0
sigma = 1
x1 = np.linspace(-3,3,100)
var = np.exp(-1*(x1**2)/2)/np.sqrt(2*np.pi)
plt.plot(x1,var)
plt.show()
```
###Draw random samples from a normal (Gaussian) distribution.
numpy.random.normal($\mu$, $\sigma$, number of samples)
https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.normal.html
##<font color='red'>3. Generate a Gaussian random variable (M=1000 samples) with $\mu=0$ and $\sigma=1$. Plot the PDF using the hist function and compare with the PDF generated using the above formula. Repeat for $\mu=4$ and $\sigma=4$.</font>
```
mu = 0
sigma = 1
M = 1000
no_bins = 30
x = np.random.normal(mu,sigma,M)
[bins,freq]=hist(x,no_bins)
x1 = np.linspace(-3,3,100)
var = np.exp(-1*(x1**2)/2)/np.sqrt(2*np.pi)
plt.plot(x1,var,'r-*')
#plt.plot(x1,mlab.normpdf(x1, mu, sigma),'r-*')
plt.show()
```
##<font color='red'>4. Plot the CDF for the above random variables.</font>
```
#CDF
def CDF(bins, freq):
    cdf = np.zeros(bins.shape)
    # cumulative sum of the PDF values, scaled by the bin width
    sz = np.size(freq)
    cdf[0] = freq[0]
    for vr in range(1, sz):
        cdf[vr] = freq[vr] + cdf[vr-1]
    cdf = cdf * (bins[1] - bins[0])
    plt.figure()
    plt.plot(bins, cdf)
    plt.show()
mu = 0
sigma = 1
M = 1000
no_bins = 30
x = np.random.normal(mu,sigma,M)
[bins,freq]=hist(x,no_bins)
CDF(bins,freq)
```
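For the standard normal case, the empirical CDF above can be checked against the closed form $\Phi(x)=\frac{1}{2}\left(1+\mathrm{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right)$, which needs only the standard library:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # closed-form Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(norm_cdf(0.0))                   # 0.5, by symmetry
print(norm_cdf(1.0) - norm_cdf(-1.0))  # ~0.6827: probability mass within one sigma
```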
##<font color='red'>5. For a random variable $X$ following a normal distribution, show the PDF of the transformed random variable $X^2$. Find the mean and compare it with the true mean. Use different values of M.</font>
```
mu = 0
sigma = 1
M = 1000
no_bins = 30
x = np.random.normal(mu,sigma,M) #write code here
x2 = np.zeros(shape=(M))
x2=x**2
#write code here
[bins,freq]=hist(x,no_bins)
[bins,freq]=hist(x2,no_bins)
print("Mean of x is %f"%(x.mean()))
print("Mean of x^2 is %f"%(x2.mean()))
```
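For reference, the true mean of $X^2$ follows from $E[X^2] = \mathrm{Var}(X) + E[X]^2 = \sigma^2 + \mu^2 = 1$ here. A standard-library sketch showing the sample mean approaching this value as M grows:

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0
true_mean = sigma**2 + mu**2  # E[X^2] = Var(X) + E[X]^2

for M in (100, 10_000):
    samples = [random.gauss(mu, sigma)**2 for _ in range(M)]
    print(M, statistics.mean(samples))  # converges to true_mean as M grows
```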
## Uniform random variable
The probability density of the uniform distribution is
$$f(x|a,b)=\begin{cases}
\frac{1}{b-a},& \text{if } a \leq x\leq b\\
0, & \text{otherwise}
\end{cases}$$
##Draw samples from a uniform distribution.
numpy.random.uniform(a,b,number of samples)
https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.uniform.html
##<font color='red'>6. Generate a uniform random variable with $a=1$ and $b=2$. Plot the PDF and CDF.</font>
```
a=1
b=2
M = 1000
no_bins = 30
x_uniform = np.random.uniform(a,b,M)#write code here
[bins,freq]=hist(x_uniform,no_bins)
#CDF
CDF(bins,freq)
print("Mean of x_uniform is %f"%(x_uniform.mean()))
print("Variance of x_uniform is %f"%(x_uniform.var()))
```
##<font color='red'>7. Consider the transformed random variable Y=pX+q, where X is a uniform random variable between 1 and 2. Take p=4 and q=5 and plot the PDF and CDF of Y. Also find the mean and variance of Y.</font>
```
a=1
b=2
M = 1000
no_bins = 30
x_uniform = np.random.uniform(a,b,M)
p = 4
q = 5
y = x_uniform*p +q #write code here
[bins,freq]=hist(y,no_bins)
CDF(bins,freq)
print("Mean of y is %f"%(y.mean()))
print("Variance of y is %f"%(y.var()))
```
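The simulated results can be checked against the closed-form moments of a linear transform: $E[Y] = pE[X] + q$ and $\mathrm{Var}(Y) = p^2\,\mathrm{Var}(X)$, with $E[X] = (a+b)/2$ and $\mathrm{Var}(X) = (b-a)^2/12$ for a uniform variable. A standard-library sketch:

```python
import random
import statistics

random.seed(1)
a, b, p, q, M = 1.0, 2.0, 4.0, 5.0, 100_000

ys = [p * random.uniform(a, b) + q for _ in range(M)]

mean_true = p * (a + b) / 2 + q    # 4 * 1.5 + 5 = 11
var_true = p**2 * (b - a)**2 / 12  # 16 / 12 ≈ 1.333

print(statistics.mean(ys), mean_true)
print(statistics.pvariance(ys), var_true)
```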
### Tuple
* A comma-separated sequence of numbers or any other values, with or without parentheses.
```
T1 = 4, 5, 6 # Without parenthesis
print(T1)
T2=(55,66,77) # With paranthesis
print(T2)
T3='a',3,'b','c' # Mixed Data Type string and integer
T3
```
As we can see, the result is always displayed with parentheses.
```
astr='a',3,'b','c'
astr
w=('7','a')
#checking type
type(w) # return the type of data
r1 = ((4, 5, 6), (7, 8))
r1
l1=[4, 0, 2] # this is a list: comma-separated items inside square brackets
l1
az=tuple(l1)
az
az[0]
az[0]=99 ## Immutability: this raises a TypeError — tuple items cannot be reassigned
tup = tuple('Elephant')
tup
tup[0]
tup = tuple(['fast', [1, 2], True])
tup
tup[2]
tup[2] = False  # raises TypeError: tuple items cannot be reassigned
tup[2].append(2)
tup
tup[1].append(3)
tup
tup[0].append(3)  # raises AttributeError: tup[0] is a string, which has no append
tup
x1=(4, None, 'foo')
x2=(6, 0)
x3=('bar',)
XX=x1+x2+x3
x1
XX
(4, None, 'foo') + (6, 0) + ('bar',) ## Tuple concatenation
```
#### Unpacking tuples
```
tup = (4, 5, 6)
a, b, c = tup
b
seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for i in seq:
print(i[0])
#print(i)
seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for a,b,c in seq:
print('a={0}, b={1}, c={2}'.format(a, b, c))
values = 1, 2, 3, 4, 5
a, b, *remain = values
a, b
remain # remaining members after a and b, returned as a list assigned to the variable remain
```
#### Tuple methods
```
a = (1, 2, 2, 2, 3, 4, 2)
a.count(2)
sum(a)
len(a)
```
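Tuples also support `index`, which returns the position of the first match (and raises a ValueError if the item is absent). A quick sketch alongside `count`:

```python
a = (1, 2, 2, 2, 3, 4, 2)

print(a.count(2))      # how many 2s the tuple holds
print(a.index(3))      # position of the first 3
print(max(a), min(a))  # largest and smallest elements
```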
### List
```
a1 = [2, 3, 7, 'None']
a1
tup = ('full', 'half', 'quarter')
b = list(tup)
b
b[0] = "can't handle" ## List is mutable
b
range(10)
g = range(10)
g
list(g)
```
### Adding and removing elements
```
b.append("Teacher's")
b
b.insert(1, 'red label')
b
b.pop(2)
b
b.append('like')
b
b.remove('like')
b
'red label' in b
'like' not in b
```
### Concatenating and combining lists
```
[4, None, 'foo'] + [7, 8, (2, 3)]
x = [4, None, 'ful']
x.extend([7, 8, (2, 3)]) ## Same as adding
x
```
#### Sorting
```
a = [7, 2, 5, 1, 3]
a.sort()
a
b = ['saw', 'small', 'He', 'foxes', 'six']
b.sort()# sort in alphabetical order
b
b = ['saw', 'small', 'He', 'foxes', 'six']
b.sort(key=len) ## Sort according to the length of each element
b
```
#### Slicing
```
set1 = [7, 2, 3, 7, 5, 6, 0, 1]
set1[1:5]
set1[3:5] = [6,3]
set1
#set1[3:4] = [6,3]
#set1 ### What is happening check yourself?
set1[:5]
set1[3:]
set1[-4:]
set1[-6:-2]
set1
set1[::2] ## Be careful !! What is happening here?
set1[::-1]## such beautiful code! Wow!!
```
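The two teaser lines above use the full `seq[start:stop:step]` form. A short sketch spelling out the step semantics:

```python
nums = [7, 2, 3, 7, 5, 6, 0, 1]

# a step of 2 takes every second element, starting at index 0
every_other = nums[::2]
print(every_other)

# a step of -1 walks the list backwards, i.e. reverses it
backwards = nums[::-1]
print(backwards)
```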
### zip
```
set1 = ['foo', 'bar', 'baz']
set2 = ['one', 'two', 'three']
zipped = zip(set1, set2)
list(zipped)
zipped  # the zip iterator is exhausted after list(zipped) above
```
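The inverse operation, unzipping, is done by unpacking the pairs back into `zip` — `zip(*pairs)` regroups the first elements together, then the second elements:

```python
names = ['foo', 'bar', 'baz']
numbers = ['one', 'two', 'three']

pairs = list(zip(names, numbers))
print(pairs)

# unzip: regroup first elements together, then second elements
first, second = zip(*pairs)
print(first)
print(second)
```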
#### reversed
```
list(range(10))
list(reversed(range(10)))
```
### dict
```
#empty_dict = {}
d1 = {'a' : 'some value', 'b' : [1, 2, 3, 4],'c':999}
d1
d1['b']
d1['a']
d1[7] = 'an integer'
d1
d1
d1['b']
'b' in d1
d1[5] = 'some value'
d1
d1['dummy'] = 'another value'
d1
del d1[7]
d1
ret = d1.pop('dummy')
ret ## It gives the member which is removed
#d1
d1
ret
d1.keys()
d1.values()
list(d1.keys())
list(d1.values())
```
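Two more dict methods worth knowing: `.get`, which returns a default instead of raising `KeyError` for a missing key, and `.items`, which yields key–value pairs:

```python
d = {'a': 'some value', 'b': [1, 2, 3, 4], 'c': 999}

print(d.get('c'))            # 999
print(d.get('missing', -1))  # -1: the default, instead of a KeyError

# iterate over (key, value) pairs
for key, value in d.items():
    print(key, value)
```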
#### Anonymous (Lambda) Functions
The function definition

    def sqr(x):
        return x ** 2

is equivalent to the lambda expression `sqr = lambda x: x ** 2`:
```
f=lambda x:x**2
f(9)
```
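Lambdas are most useful as throwaway `key` functions, e.g. for sorting — the same idea as `b.sort(key=len)` earlier:

```python
words = ['saw', 'small', 'He', 'foxes', 'six']

# sort by length using an anonymous function as the key
by_length = sorted(words, key=lambda w: len(w))
print(by_length)

# sort alphabetically ignoring case
case_insensitive = sorted(words, key=lambda w: w.lower())
print(case_insensitive)
```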
#### Thanks! This much for this module.
#### Don't forget to follow me for more such stuff.
* https://www.linkedin.com/in/ramendra-kumar-57334478/
* https://github.com/Rami-RK
#### Feel free to share.
# Lab 3: Operators
An overview of operator properties
```
import matplotlib.pyplot as plt
from numpy import sqrt,cos,sin,arange,pi
from qutip import *
%matplotlib inline
H = Qobj([[1],[0]])
V = Qobj([[0],[1]])
P45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]])
M45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])
R = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])
L = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])
```
## Example 1: the outer product and the projection operator
We already have the $|H\rangle$ state represented as a vector in the HV basis, so the $\hat{P}_H$ operator is the outer product $|H\rangle\langle H|$ (a ket then a bra):
```
H
Ph = H*H.dag()
Ph
```
Same with the $\hat{P}_V$ operator:
```
Pv = V*V.dag()
Pv
```
## Example 2: Verify Eq. 4.38 for the HV basis states. Repeat for the ±45, and LR basis
```
identity(2)
Ph + Pv == identity(2)
P45*P45.dag()
M45*M45.dag()
P45*P45.dag() + M45*M45.dag()
L*L.dag()
R*R.dag()
L*L.dag() + R*R.dag()
```
## Example 3: Represent the $\hat{R}_p(\theta)$ operator in the HV basis and verify your representation by operating on $|H\rangle$ and $|V\rangle$ states. Use the following template function definition.
```
def Rp(theta):
    return Qobj([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]).tidyup()
Rp(pi/2)
V==Rp(pi/2)*H
# Solution Goes Here
```
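As a cross-check outside QuTiP (a plain-NumPy sketch, not part of the lab), the rotation matrices satisfy $\hat{R}_p(\theta_1)\hat{R}_p(\theta_2) = \hat{R}_p(\theta_1+\theta_2)$ and are unitary:

```python
import numpy as np

def rot(theta):
    # the same matrix as Rp(theta) above, as a plain NumPy array
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = np.pi / 6, np.pi / 3
print(np.allclose(rot(t1) @ rot(t2), rot(t1 + t2)))  # composition adds the angles
print(np.allclose(rot(t1).T @ rot(t1), np.eye(2)))   # R^T R = I (unitarity)
```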
## 1) Using the $\hat{R}_p(\theta)$ operator, verify the operator properties described in Sections 4.1 and 4.2. Specifically, verify Eqns. 4.6, 4.7, 4.16, 4.18, 4.22, and 4.27
```
# Solution Goes Here
```
## Example: the similarity transform
The following defines a function that creates a similarity-transform matrix. It takes the two old basis vectors and the two new basis vectors as arguments. To apply the transform, simply multiply the matrix onto the state vector or operator matrix. Following the examples below, explore this transform.
```
def sim_transform(o_basis1, o_basis2, n_basis1, n_basis2):
    a = n_basis1.dag()*o_basis1
    b = n_basis1.dag()*o_basis2
    c = n_basis2.dag()*o_basis1
    d = n_basis2.dag()*o_basis2
    return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]])
```
We can define a similarity transform that converts from $HV\rightarrow \pm 45$
```
Shv45 = sim_transform(H,V,P45,M45) # as found in Example 4.A.1, Eq. 4.A.10.
Shv45
Shv45 * H # compare to Eq. 4.A.12
```
## 4) Use the similarity transform to represent $|V\rangle$ in the ±45 basis
## 5) Represent $\hat{P}_H$ in the ±45 basis.
Check your answer against Eqns. 4.A.17 and 4.72
## 6) Represent $\hat{P}_V$ in the ±45 basis.
# Post-proc simulation results
Load HDF5 files and prepare them for post-processing.
Simulation outputs (HDF5 files) provide modal displacement values. A conversion to physical ("real") displacements
is required.
```
# Reload automatically all python modules before each cell exec
%load_ext autoreload
%autoreload 2
# standard python packages
import sys
import time
import os
from model_tools import load_model, load_convert_and_save
# visu
import matplotlib.pyplot as plt
import scipy.io
import h5py
from simulation_campaigns import transfer_data, get_job_duration
%matplotlib inline
import numpy as np
```
## Get list of remote files
and transfer if required
* Creates a remote_data dictionary which contains the list of files to post-process
* Creates a transfer.sh file; executing it transfers (via scp) the HDF5 files from the remote server
```
import pickle
import subprocess
#pkl_file = open('campaign_new_0612.pkl', 'rb')
pkl_file = open('campaign_2018.pkl', 'rb')
remote_data = pickle.load(pkl_file)
transfer_data(remote_data)
# execute file transfer.sh to get h5 files
```
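For illustration, here is a minimal sketch of what generating such a `transfer.sh` could look like. The remote host name and the flat list-of-paths layout are assumptions for the example, not the actual structure of `remote_data`:

```python
from pathlib import Path

def write_transfer_script(remote_paths, host="user@remote-server", out="transfer_demo.sh"):
    # one scp line per remote file; 'host' is a hypothetical placeholder
    lines = ["#!/bin/sh"]
    for path in remote_paths:
        lines.append(f"scp {host}:{path} .")
    Path(out).write_text("\n".join(lines) + "\n")
    return lines

lines = write_transfer_script(["results/a.h5", "results/b.h5"])
print(lines)
```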
## Create 'converted' files
```
# List available campaigns
results_path = remote_data['results_paths'][1]
for name in remote_data:
    if name.find('one_contact') >= 0:
        campaign = remote_data[name]
        print(name)
        for freq in campaign:
            restit = campaign[freq][3]
            file = os.path.join(results_path, campaign[freq][4])
            convfile = os.path.join(results_path, campaign[freq][5])
            cmd = 'rm -rf ' + os.path.dirname(file)
            print(cmd)
# Set path to matlab inputs
#matlab_bass = './bass_guitar/pb2'
#matlab_fretless = './fretless_bass_guitar/bsf'
results_path = remote_data['results_paths'][1]
for name in remote_data:
    campaign = remote_data[name]
    #if name.find('bass') >= 0:
    if name.find('results_path') < 0:
        for freq in campaign:
            restit = campaign[freq][3]
            file = os.path.join(results_path, campaign[freq][4])
            convfile = os.path.join(results_path, campaign[freq][5])
            if not os.path.exists(convfile):
                print(file, convfile)
                #print(name)
                #load_convert_and_save(file)
    #elif name.find('fretless') >= 0:
    #    for freq in campaign:
    #        restit = campaign[freq][3]
    #        file = os.path.join(results_path, campaign[freq][4])
    #        convfile = os.path.join(results_path, campaign[freq][5])
    #        if not os.path.exists(convfile):
    #            load_convert_and_save(file)
fname = './results_bass_2018/F_2048000_id_4955029/bass_e0.0_862_2048000.h5'
load_convert_and_save(fname)
load_model(fname)
matlab_bass = './bass_guitar/pb2'
matlab_fretless = './fretless_bass_guitar/bsf'
file = './Results_new_bass_0612/F_32768000_id_3882603/converted_g_862_32768000.h5'
h5file = h5py.File(file, 'r+')
print(h5file.attrs['restit'])
h5file.close()
# Set path to matlab inputs
for name in remote_data:
    campaign = remote_data[name]
    if name.find('results_paths') < 0:
        for freq in campaign:
            restit = campaign[freq][3]
            file = os.path.join(results_path, campaign[freq][4])
            convfile = os.path.join(results_path, campaign[freq][5])
            if os.path.exists(convfile):
                print(convfile)
                h5file = h5py.File(convfile, 'r+')
                h5file.attrs['restit'] = restit
                print(h5file.attrs['restit'])
                h5file.close()
# Check frets output parameter
for name in campaign_bass:
    filelist = campaign_bass[name]['files']
    for i in range(len(filelist)):
        if os.path.exists(filelist[i]):
            print(filelist[i])
            h5file = h5py.File(filelist[i], 'r+')
            h5file.attrs['frets output'] = 'light'
            print(h5file.attrs['frets output'])
            h5file.close()
filename = './results_bass_1812/F_10000.0_id_4073973/single_e0.0_999_10000.h5'
filename.replace('999', '862')
import subprocess
def rename(filename):
    dirname = os.path.dirname(filename)
    currentname = 'single' + os.path.basename(filename).split('one_contact')[1]
    current = os.path.join(dirname, currentname)
    cmd = 'cp ' + current + ' ' + filename
    if not os.path.exists(filename):
        subprocess.call(cmd, shell=True)
    return cmd
files = remote_data['one_contact0.0']
files
names = [name for name in remote_data.keys() if name.find('one_contact') >= 0]
for name in names:
    campaign = remote_data[name]
    #if name.find('bass') >= 0:
    for freq in campaign:
        file = os.path.join(results_path, campaign[freq][4])
        convfile = os.path.join(results_path, campaign[freq][5])
        #if not os.path.exists(convfile):
        #    load_convert_and_save(file)
        print(rename(file))
```
```
from IPython.display import Math, HTML
display(HTML("<script src='https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/"
"latest.js?config=default'></script>"))
from __future__ import print_function
import torch
```
# Torch Tensors
## Matrices and Tensors
A tensor is often thought of as a generalized matrix: it could be a 1-D array (a vector is such a tensor), a 2-D matrix, a 3-D block of numbers (something like a cube), even a 0-D scalar (a single number), or a higher-dimensional structure that is harder to visualize. The number of dimensions of a tensor is called its rank.
### Initialize a Tensor
```
#@title Creating Tensors { vertical-output: true }
# Construct tensors
# empty tensor
x = torch.empty(5, 3)
print(f'{x}\n')
# tensor with random values
x = torch.rand(5, 3)
print(f'{x}\n')
# tensor with zeros
x = torch.zeros(5, 3, dtype=torch.long)
print(f'{x}\n')
# specific tensor
x = torch.tensor([[5.5, 3, 6.2],[2.2, 9.3, 32.2]])
print(f'{x}\n')
# tensor with zeros and same size as x
y = torch.zeros_like(x, dtype=torch.float)
print(f'{y}\n')
# tensor with ones, same size as x and int32 data type
y = torch.ones_like(x, dtype=torch.int32)
print(f'{y}\n')
# tensor with random values, same size as x and float data type
y = torch.randn_like(x, dtype=torch.float)
print(f'{y}\n')
# tensor with values from the interval [0, 10) and int8 data type
y = torch.arange(0, 10, dtype=torch.int8)
print(f'{y}\n')
# tensor with values from the interval [0, 20) with step 2 and int8 data type
y = torch.arange(0, 20, 2, dtype=torch.int8)
print(f'{y}\n')
# get value
x = torch.tensor([[1]])
print(f'{x.item()}\n')
y = torch.zeros_like(torch.empty(200), dtype=torch.float)
print(f'{y}\n')
```
### Change Tensor's Shape
```
#@title Reshaping Tensors { vertical-output: true }
x = torch.randn(4, 4)
y = x.view(16)
print(x.size(), y.size())
z = y.view(-1, 8)
print(f'{z.size()}\n')
r = torch.arange(0, 20, dtype=torch.int16)
print(f'Initial tensor: {r}\n')
# 2 rows, 10 columns
k = r.reshape(2, 10)
print(f'{k}\n')
# 5 columns
k = r.view(-1, 5)
print(f'{k}\n')
# 2 rows
k = r.view(2, -1)
print(f'{k}\n')
# 1
k = r.view(-1)
print(f'{k}\n')
```
### Operations with Tensors
```
#@title Operations with Tensors { vertical-output: true }
x = torch.ones(2, 2, dtype=torch.int32)
y = torch.tensor([[2, 2], [2, 2]], dtype=torch.int32)
print(x, y)
z = x + y
print(f'Sum: {z}\n')
z = x * y
print(f'Product: {z}\n')
z = x - y
print(f'Difference: {z}\n')
r = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(f'r: {r}\n')
tr = torch.transpose(r, 1, 0)
print(f'Transposed r: {tr}')
r = torch.randn_like(y, dtype=float)
print(f'r: {r}\n')
z = torch.mean(r)
print(f'Mean of {r}: {z}\n')
print(f'Size: {r.size()}')
print(f'Number of elements: {torch.numel(r)}')
y.add_(torch.ones_like(y, dtype=torch.int32)*10)
y
```
## NumPy Bridge
Using `.numpy()`, we can convert a Torch Tensor into a NumPy array.
The tensor and the array share the same underlying memory, so changes to the tensor also affect the array.
```
# Torch tensor to numpy array
z = torch.ones(2, 2)
print(z)
k = z.numpy() # pass-by-reference! point to same object
print(k)
k[0][0]+=1
print(z)
print(k)
```
Similarly, use `torch.from_numpy` to convert a NumPy array into a Torch Tensor. The two also share the same underlying memory!
```
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
```
# Autograd: Automatic Differentiation
The autograd package provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.
A tensor can be created with `requires_grad=True` so that torch.autograd records operations on them for automatic differentiation. When you finish your computation you can call `.backward()` and have all the gradients computed automatically. The gradient for this tensor will be accumulated into `.grad` attribute.
## Example
x = 1
y = 2
z = 0
a = x + y
b = max(y, z)
f = a*b

```
x = torch.tensor([1.0], requires_grad=True)
y = torch.tensor([2.0], requires_grad=True)
z = torch.tensor([0.0], requires_grad=True)
a = x + y
b = torch.max(y, z)
f = a * b
# by default gradient is retained only for leaf nodes (i.e. x, y, z).
# we need to specify for the rest
a.retain_grad()
b.retain_grad()
f.retain_grad()
print(f'Tensors Value:\n\t x: {x}, y: {y}, z: {z}, a: {a}, b: {b}, f: {f}\n')
print(f'Gradients:\n\t x: {x.grad}, y: {y.grad}, z: {z.grad}, a: {a.grad}, b: {b.grad}, f: {f.grad}')
```
## Backward Pass
Calling `.backward()` computes all the gradients automatically (i.e. `x.grad` = df/dx).

```
f.backward()
print(f'Gradients:\n\t x: {x.grad}, y: {y.grad}, z: {z.grad}, a: {a.grad}, b: {b.grad}, f: {f.grad}')
```
## But what happens inside ?
Let's say *f_chain* is a class that implements our example. The forward path returns *f = ab*. The backward path calculates the gradients and returns the gradients of the output with respect to the inputs (i.e. *dfdx, dfdy, dfdz*).
In order to calculate the backward path, for each node we need its local gradients (i.e. in node *a* we need *dfda* ).
The downstream gradient in each node is computed as:
### *downstream grad = upstream grad * local grad*
```
class f_chain(object):
    # constructor
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    # calculate the forward path
    def forward(self):
        self.a = self.x + self.y
        self.b = torch.max(self.y, self.z)
        self.f = self.a * self.b
        return self.f

    # calculate the backward path
    def backward(self):
        # local gradients
        local_dadx = 1
        local_dady = 1
        local_dbdy = 1 if self.y > self.z else 0
        local_dbdz = 1 if self.z > self.y else 0
        local_dfda = self.b
        local_dfdb = self.a
        # downstream = upstream * local
        self.dfdf = torch.tensor([1], dtype=float)
        self.dfda = self.dfdf * local_dfda
        self.dfdb = self.dfdf * local_dfdb
        self.dfdx = self.dfda * local_dadx
        dfdy_1 = self.dfda * local_dady
        self.dfdz = self.dfdb * local_dbdz
        dfdy_2 = self.dfdb * local_dbdy
        self.dfdy = dfdy_1 + dfdy_2
        return self.dfdx, self.dfdy, self.dfdz

    def get_tensors(self): return self.x, self.y, self.z, self.a, self.b, self.f
    def get_gradients(self): return self.dfdx, self.dfdy, self.dfdz, self.dfda, self.dfdb, self.dfdf
# initialize leaf tensors, and f_chain
x = torch.tensor([1.0])
y = torch.tensor([2.0])
z = torch.tensor([0.0])
fc = f_chain(x, y, z)
# forward path
fc.forward()
(x, y, z, a, b, f) = fc.get_tensors()
print(f'Tensors Value:\n\t x: {x}, y: {y}, z: {z}, a: {a}, b: {b}, f: {f}\n')
# backward path
fc.backward()
(x_grad, y_grad, z_grad, a_grad, b_grad, f_grad) = fc.get_gradients()
print(f'Gradients:\n\t x: {x_grad}, y: {y_grad}, z: {z_grad}, a: {a_grad}, b: {b_grad}, f: {f_grad}')
```
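The hand-derived gradients can also be sanity-checked numerically with central finite differences — pure Python, no autograd. At $(x,y,z)=(1,2,0)$ we expect $\partial f/\partial x = 2$, $\partial f/\partial y = 5$ and $\partial f/\partial z = 0$:

```python
def f(x, y, z):
    # the same scalar function as in the example: f = (x + y) * max(y, z)
    return (x + y) * max(y, z)

def num_grad(fn, args, i, h=1e-6):
    # central finite difference of fn with respect to argument i
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (fn(*hi) - fn(*lo)) / (2 * h)

point = (1.0, 2.0, 0.0)
grads = [num_grad(f, point, i) for i in range(3)]
print(grads)  # approximately [2.0, 5.0, 0.0]
```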
# Jacobian Matrix
```
# ignore
Math(r'\huge \textbf{Example:} \\s = u^Th \\ h = f(z) \\ z = Wx + b \\ x: input\ Tensor')
```

```
def s(x):
    W = torch.tensor([[2., 3.], [4., 1.]], requires_grad=True)
    b = torch.ones_like(x, requires_grad=True)
    u = torch.tensor([[6., 8.], [2., 3.]], requires_grad=True)
    uT = torch.transpose(u, 1, 0)
    z = W*x + b
    h = 2 * z
    out = uT*h
    return out
```
## Useful derivatives
```
Math(r"\huge \textbf{Useful Derivatives:} \\~\\ \frac{\partial (u^Th)}{\partial u} = h^T \\~\\ \frac{\partial (f(z))}{\partial z} = diag(f'(z)) \\~\\ \frac{\partial (Wx+b)}{\partial b} = I ")
Math(r"\huge \textbf{For Instance we can calculate:} \\~\\ \frac{\partial s}{\partial b} = \frac{\partial s}{\partial h} \frac{\partial h}{\partial z} \frac{\partial z}{\partial b} = u^T diag(f'(z)) I ")
x = torch.tensor([[1., 2.], [3., 4.]], dtype=float, requires_grad=True)
jac = torch.autograd.functional.jacobian(s, x)
print(f'Jacobian Matrix:\n\t {jac}')
```
```
#default_exp model
```
# Base Model
> This class contains the base which is used to train data upon.
```
# hide
%load_ext autoreload
%autoreload 2
%matplotlib inline
# export
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List, Optional, Tuple
import pandas as pd
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torchlife.models.ph import PieceWiseHazard
from torchlife.models.cox import ProportionalHazard
from torchlife.models.aft import AFTModel
from torchlife.data import create_dl, create_test_dl, get_breakpoints
from torchlife.losses import aft_loss, hazard_loss, Loss, HazardLoss, AFTLoss
# hide
import matplotlib.pyplot as plt
%matplotlib inline
```
## General Model
```
# export
class GeneralModel(pl.LightningModule):
    def __init__(
        self,
        base: nn.Module,
        loss_fn: Loss,
        lr: float = 1e-3,
    ) -> None:
        """
        base: module mapping inputs to (density_term, cumulative_term).
        loss_fn: loss function.
        lr: learning rate.
        """
        super().__init__()
        self.base = base
        self.loss_fn = loss_fn
        self.lr = lr

    def forward(self, x):
        return self.base(*x)

    def common_step(
        self, batch: Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
    ) -> torch.Tensor:
        x, e = batch
        density_term, cumulative_term = self(x)
        loss = self.loss_fn(e.squeeze(), density_term.squeeze(), cumulative_term.squeeze())
        if torch.isnan(loss):
            breakpoint()
        return loss

    def training_step(self, batch, *args):
        loss = self.common_step(batch)
        self.log("training_loss", loss, on_step=True, on_epoch=True)
        return loss

    def validation_step(self, batch, *args):
        loss = self.common_step(batch)
        self.log("val_loss", loss, on_step=True, on_epoch=True)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.lr)
        return {
            "optimizer": optimizer,
            "lr_scheduler": ReduceLROnPlateau(optimizer, patience=2),
            "monitor": "val_loss"
        }
# export
def train_model(model, train_dl, valid_dl, epochs):
    checkpoint_callback = ModelCheckpoint(
        monitor="val_loss",
        dirpath="./models/",
        filename="model-{epoch:02d}-{val_loss:.2f}",
        save_last=True,
    )
    early_stopping = EarlyStopping("val_loss")
    trainer = pl.Trainer(
        max_epochs=epochs,
        gpus=torch.cuda.device_count(),
        callbacks=[early_stopping, checkpoint_callback]
    )
    trainer.fit(model, train_dl, valid_dl)
```
## Data for Demo
```
# hide
import pandas as pd
import numpy as np
url = "https://raw.githubusercontent.com/CamDavidsonPilon/lifelines/master/lifelines/datasets/rossi.csv"
df = pd.read_csv(url)
df.rename(columns={'week':'t', 'arrest':'e'}, inplace=True)
print(df.shape)
df.head()
```
## Hazard Model
```
# export
_text2model_ = {
'ph': PieceWiseHazard,
'cox': ProportionalHazard
}
class ModelHazard:
"""
Modelling instantaneous hazard (λ).
parameters:
- model(str): ['ph'|'cox'] which maps to Piecewise Hazard, Cox Proportional Hazard.
- percentiles: list of time percentiles at which time should be broken
- h: list of hidden units (disregarding input units)
- bs: batch size
- epochs: epochs
- lr: learning rate
- beta: l2 penalty on weights
"""
def __init__(self, model:str, percentiles=[20, 40, 60, 80], h:tuple=(),
bs:int=128, epochs:int=20, lr:float=1.0, beta:float=0):
self.base_model = _text2model_[model]
self.percentiles = percentiles
self.loss_fn = HazardLoss()
self.h = h
self.bs, self.epochs, self.lr, self.beta = bs, epochs, lr, beta
def fit(self, df):
breakpoints = get_breakpoints(df, self.percentiles)
train_dl, valid_dl, t_scaler, x_scaler = create_dl(df, breakpoints)
dim = df.shape[1] - 2
assert dim > 0, "dimension of x input needs to be > 0; choose 'ph' instead"
model_args = {
'breakpoints': breakpoints,
't_scaler': t_scaler,
'x_scaler': x_scaler,
'h': self.h,
'dim': dim
}
self.model = GeneralModel(
self.base_model(**model_args),
self.loss_fn,
self.lr
)
self.breakpoints = breakpoints
self.t_scaler = t_scaler
self.x_scaler = x_scaler
train_model(self.model, train_dl, valid_dl, self.epochs)
def predict(self, df):
test_dl = create_test_dl(df, self.breakpoints, self.t_scaler, self.x_scaler)
with torch.no_grad():
self.model.eval()
λ, S = [], []
for x in test_dl:
preds = self.model(x)
λ.append(torch.exp(preds[0]))
S.append(torch.exp(-preds[1]))
return torch.cat(λ), torch.cat(S)
def plot_survival_function(self, *args):
self.model.base.plot_survival_function(*args)
```
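The `predict` method above converts the model's outputs into the hazard λ and the survival probability S via the identity S(t) = exp(−Λ(t)), where Λ is the cumulative hazard. A minimal NumPy sketch of that relationship (synthetic numbers, not the fitted model):

```python
import numpy as np

# Synthetic log cumulative hazard values, standing in for model output
log_cum_hazard = np.array([-2.0, -1.0, 0.0])

# Survival probability from cumulative hazard: S(t) = exp(-Λ(t))
cum_hazard = np.exp(log_cum_hazard)
survival = np.exp(-cum_hazard)

# Survival falls monotonically as the cumulative hazard grows
print(survival)
```

This mirrors the `torch.exp(-preds[1])` step in `ModelHazard.predict`, just in NumPy on toy values.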
## Cox Model Demo
```
model = ModelHazard('cox')
model.fit(df)
# %reload_ext tensorboard
# %tensorboard --logdir ./lightning_logs/
λ, S = model.predict(df)
df.shape, λ.shape, S.shape
```
## Modelling Distribution with [AFT](./AFT_models) models
```
# export
from torchlife.models.error_dist import *
class ModelAFT:
"""
Modelling error distribution given inputs x.
parameters:
- dist(str): Univariate distribution of error
- h: list of hidden units (disregarding input units)
- bs: batch size
- epochs: epochs
- lr: learning rate
- beta: l2 penalty on weights
"""
def __init__(self, dist:str, h:tuple=(),
bs:int=128, epochs:int=20, lr:float=0.1, beta:float=0):
self.dist = dist
self.loss_fn = AFTLoss()
self.h = h
self.bs, self.epochs, self.lr, self.beta = bs, epochs, lr, beta
def fit(self, df):
train_dl, valid_dl, self.t_scaler, self.x_scaler = create_dl(df)
dim = df.shape[1] - 2
aft_model = AFTModel(self.dist, dim, self.h)
self.model = GeneralModel(
aft_model,
self.loss_fn,
self.lr
)
train_model(self.model, train_dl, valid_dl, self.epochs)
def predict(self, df):
"""
Predicts the survival probability
"""
test_dl = create_test_dl(df)
with torch.no_grad():
self.model.eval()
Λ = []
for x in test_dl:
_, logΛ = self.model(x)
Λ.append(torch.exp(logΛ))
return torch.cat(Λ).cpu().numpy()
def predict_time(self, df):
"""
Predicts the mode (not the mean) of the expected time for each instance.
"""
if "t" not in df.columns:
df["t"] = 0
test_dl = create_test_dl(df)
with torch.no_grad():
self.model.eval()
μ = []
for _, x in test_dl:
logμ, _ = self.model.base.get_mode_time(x)
μ.append(torch.exp(logμ))
return self.t_scaler.inverse_transform(torch.cat(μ).cpu().numpy())
def plot_survival(self, t, x):
self.model.plot_survival_function(t, self.t_scaler, x, self.x_scaler)
model = ModelAFT('Gumbel')
model.fit(df)
surv_prob = model.predict(df)
mode_time = model.predict_time(df)
df["surv_prob"] = surv_prob
df["mode_time"] = mode_time
df
import matplotlib.pyplot as plt
plt.hist(df[df["e"] == 1]["surv_prob"].values, bins=30, alpha=0.5, density=True, label="death")
plt.hist(df[df["e"] == 0]["surv_prob"].values, bins=30, alpha=0.5, density=True, label="censored")
plt.legend()
plt.show()
# %reload_ext tensorboard
# %tensorboard --logdir ./lightning_logs/
# hide
from nbdev.export import *
notebook2script()
```
```
import uuid
import json
from time import gmtime, strftime
import boto3
import sagemaker
from sagemaker.session import Session
from sagemaker.feature_store.feature_group import FeatureGroup
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
boto_session = boto3.Session(region_name=region)
account_id = boto3.client('sts').get_caller_identity().get('Account')
suffix=uuid.uuid1().hex # to be used in resource names
pwd
cd src
!sed -i "s|##REGION##|{region}|g" Dockerfile
!cat Dockerfile
```
Build a container image from the Dockerfile
```
!pip install -q sagemaker-studio-image-build
!sm-docker build . --repository medical-image-processing-smstudio:1.0
```
Define the input and output data locations. Insert your bucket names into `input_data_bucket` and `output_data_bucket`.
```
input_data_bucket='<your-s3-bucket-name>'
input_data_prefix='nsclc_radiogenomics'
input_data_uri='s3://%s/%s' % (input_data_bucket, input_data_prefix)
print(input_data_uri)
output_data_bucket='<your-s3-bucket-name>'
output_data_prefix='nsclc_radiogenomics'
output_data_uri='s3://%s/%s' % (output_data_bucket, output_data_prefix)
print(output_data_uri)
```
Be sure to use the image and tag name defined in the `!sm-docker build` command. We will replace the placeholders in the Step Functions state machine definition JSON file with your bucket and image URIs.
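If `sed` is unavailable, the same placeholder substitution can be sketched in pure Python. The helper below is illustrative only (not part of the original workflow); the placeholder names mirror the ones used above, and the URIs are made-up examples:

```python
def fill_placeholders(template: str, values: dict) -> str:
    """Replace ##KEY## markers in a template string with concrete values."""
    for key, value in values.items():
        template = template.replace("##%s##" % key, value)
    return template

# Made-up snippet standing in for the state-machine definition
snippet = '{"ImageUri": "##ECR_IMAGE_URI##", "OutputPath": "##OUTPUT_DATA_S3URI##"}'
filled = fill_placeholders(snippet, {
    "ECR_IMAGE_URI": "123456789012.dkr.ecr.us-east-1.amazonaws.com/medical-image-processing-smstudio:1.0",
    "OUTPUT_DATA_S3URI": "s3://my-bucket/nsclc_radiogenomics",
})
print(filled)
```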
```
ecr_image_uri='%s.dkr.ecr.%s.amazonaws.com/medical-image-processing-smstudio:1.0' % (account_id, region)
!sed -i "s|##INPUT_DATA_S3URI##|{input_data_uri}|g" nsclc-radiogenomics-imaging-workflow.json
!sed -i "s|##OUTPUT_DATA_S3URI##|{output_data_uri}|g" nsclc-radiogenomics-imaging-workflow.json
!sed -i "s|##ECR_IMAGE_URI##|{ecr_image_uri}|g" nsclc-radiogenomics-imaging-workflow.json
!sed -i "s|##IAM_ROLE_ARN##|{role}|g" nsclc-radiogenomics-imaging-workflow.json
with open('nsclc-radiogenomics-imaging-workflow.json') as f:
state_machine_json = json.load(f)
```
We need to create an IAM execution role for the Step Functions workflow.
```
iam = boto3.client('iam')
my_managed_policy = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"events:PutTargets",
"events:DescribeRule",
"events:PutRule"
],
"Resource": [
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
]
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": role,
"Condition": {
"StringEquals": {
"iam:PassedToService": "sagemaker.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"sagemaker:CreateEndpoint",
"sagemaker:CreateEndpointConfig",
"sagemaker:CreateHyperParameterTuningJob",
"sagemaker:CreateModel",
"sagemaker:CreateProcessingJob",
"sagemaker:CreateTrainingJob",
"sagemaker:CreateTransformJob",
"sagemaker:DeleteEndpoint",
"sagemaker:DeleteEndpointConfig",
"sagemaker:DescribeHyperParameterTuningJob",
"sagemaker:DescribeProcessingJob",
"sagemaker:DescribeTrainingJob",
"sagemaker:DescribeTransformJob",
"sagemaker:ListProcessingJobs",
"sagemaker:ListTags",
"sagemaker:StopHyperParameterTuningJob",
"sagemaker:StopProcessingJob",
"sagemaker:StopTrainingJob",
"sagemaker:StopTransformJob",
"sagemaker:UpdateEndpoint",
],
"Resource": "*"
}
]
}
trust_policy = {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": ["states.amazonaws.com", "sagemaker.amazonaws.com"]},
"Action": "sts:AssumeRole"
}
]
}
policy_name = 'MyStepFunctionsWorkflowExecutionPolicy-%s' % suffix
role_name = 'MyStepFunctionsWorkflowExecutionRole-%s' % suffix
policy_response = iam.create_policy(
PolicyName=policy_name,
PolicyDocument=json.dumps(my_managed_policy)
)
role_response = iam.create_role(
RoleName=role_name,
AssumeRolePolicyDocument=json.dumps(trust_policy),
Description='Role to execute StepFunctions workflow which submits SageMaker jobs',
MaxSessionDuration=3600,
)
# Attach a policy to role
iam.attach_role_policy(
PolicyArn=policy_response['Policy']['Arn'],
RoleName=role_name
)
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/CloudWatchEventsFullAccess',
RoleName=role_name
)
```
Create a Step Functions workflow, i.e., a state machine.
```
sfn = boto3.client('stepfunctions')
sfn_execution_role = role_response['Role']['Arn']
state_machine_name = 'nsclc-radiogenomics-imaging-workflow-%s' % suffix
sfn_response = sfn.create_state_machine(name = state_machine_name,
definition = json.dumps(state_machine_json),
roleArn = sfn_execution_role,
type = 'STANDARD')
```
We will be running this workflow for all the `R01` subjects.
```
subject_list = ['R01-%03d'%i for i in range(1,163)]
```
Execute!
```
stateMachineArn=sfn_response['stateMachineArn']
feature_store_name = 'imaging-feature-group-%s' % suffix
processing_job_name = 'dcm-nifti-conversion-%s' % suffix
offline_store_s3uri = '%s/multimodal-imaging-featurestore' % output_data_uri
payload = {
"PreprocessingJobName": processing_job_name,
"FeatureStoreName": feature_store_name,
"OfflineStoreS3Uri": offline_store_s3uri,
"Subject": subject_list
}
execution_response = sfn.start_execution(stateMachineArn=stateMachineArn,
name=suffix,
input=json.dumps(payload))
print(execution_response)
```
# Finding Lane Lines on the Road - Jack Qian
```
from pyntcloud import PyntCloud
import numpy as np
import os
import time
import scipy.linalg
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection used below
```
Define the main plane-fitting code
```
path_in = "/home/jackqian/avod/make_planes/"
path_kitti = "/home/jackqian/KITTI/training/velodyne/"
path_kitti_testing = "/home/jackqian/KITTI/testing/velodyne/"
path_save = "/media/jackqian/新加卷/Ubuntu/avod/make_planes/"
path_training_bin3 = "/media/jackqian/新加卷/Ubuntu/avod/make_planes/kittilidar_training_qyqmake/"
file1 = "000002.bin"
file2 = "1.bin"
```
Fit a ground plane to the lidar point cloud
```
def cau_planes():
"""
Use RANSAC in PyntCloud to find the ground plane.
Note the lidar points have been transformed to the camera coordinate frame.
:return: ground plane parameters (A, B, C, D) for Ax+By+Cz+D=0.
"""
last_time = time.time()
cloud = PyntCloud.from_file(path_save + "kittilidar_training_qyqmake/" + file1)
#print(cloud)
#cloud.plot()
cloud.points = cloud.points[cloud.points["y"] > 1]
# cloud.points = cloud.points[cloud.points["x"] > -2]
# cloud.points = cloud.points[cloud.points["x"] < 2]
# cloud.points = cloud.points[cloud.points["z"] > -20]
# cloud.points = cloud.points[cloud.points["z"] < 20]
is_floor = cloud.add_scalar_field("plane_fit", n_inliers_to_stop=len(cloud.points) / 10, max_dist=0.001, max_iterations=100)
cloud.plot(use_as_color=is_floor, cmap = "cool")
cloud.points = cloud.points[cloud.points[is_floor] > 0]
cloud.plot()
data = np.array(cloud.points)
mn = np.min(data, axis=0)
mx = np.max(data, axis=0)
X, Y = np.meshgrid(np.linspace(mn[0], mx[0], 20), np.linspace(mn[1], mx[1], 20))
XX = X.flatten()
YY = Y.flatten()
#### best-fit linear plane
#### Z = C[0] * X + C[1] * Y + C[2]
A = np.c_[data[:, 0], data[:, 1], np.ones(data.shape[0])]
C, _, _, _ = scipy.linalg.lstsq(A, data[:, 2]) # coefficients
Z = C[0] * X + C[1] * Y + C[2]
normal = np.array([C[0], C[1], 1, C[2]])
normal = - normal / normal[1]
print(normal)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, alpha=0.2)
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c='r', s=1)
# ax.scatter(data_raw[:, 0], data_raw[:, 1], data_raw[:, 2], c='g', s=5)
plt.xlabel('X')
plt.ylabel('Y')
ax.set_zlabel('Z')
ax.axis('equal')
ax.axis('tight')
plt.show()
cau_planes()
```
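The least-squares plane fit inside `cau_planes` can be illustrated on synthetic data: given points scattered near the plane z = 2x − y + 3, `scipy.linalg.lstsq` recovers the coefficients. A self-contained sketch, independent of the KITTI files above:

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=200)
y = rng.uniform(-5, 5, size=200)
z = 2 * x - y + 3 + rng.normal(scale=0.01, size=200)  # points near z = 2x - y + 3

# Solve for C in z ≈ C[0]*x + C[1]*y + C[2], as in cau_planes()
A = np.c_[x, y, np.ones_like(x)]
C, _, _, _ = scipy.linalg.lstsq(A, z)
print(C)  # approximately [2, -1, 3]
```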
# How many cases of COVID-19 does each U.S. state really have?
> Reported U.S. case counts are based on the number of administered tests. Since not everyone is tested, this number is biased. We use Bayesian techniques to estimate the true number of cases.
- author: Joseph Richards
- image: images/covid-state-case-estimation.png
- hide: false
- comments: true
- categories: [MCMC, US, states, cases]
- permalink: /covid-19-us-case-estimation/
- toc: false
> Note: This dashboard contains the results of a predictive model. The author has tried to make it as accurate as possible. But the COVID-19 situation is changing quickly, and these models inevitably include some level of speculation.
```
#hide
# Setup and imports
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import requests
from IPython.display import display, Markdown
#hide
# Data utilities:
def get_statewise_testing_data():
'''
Pull all statewise data required for model fitting and
prediction
Returns:
* df_out: DataFrame for model fitting where inclusion
requires testing data from 7 days ago
* df_pred: DataFrame for count prediction where inclusion
only requires testing data from today
'''
# Pull testing counts by state:
out = requests.get('https://covidtracking.com/api/states')
df_out = pd.DataFrame(out.json())
df_out.set_index('state', drop=True, inplace=True)
# Pull time-series of testing counts:
ts = requests.get('https://covidtracking.com/api/states/daily')
df_ts = pd.DataFrame(ts.json())
# Get data from last week
date_last_week = df_ts['date'].unique()[7]
df_ts_last_week = _get_test_counts(df_ts, df_out.index, date_last_week)
df_out['num_tests_7_days_ago'] = \
(df_ts_last_week['positive'] + df_ts_last_week['negative'])
df_out['num_pos_7_days_ago'] = df_ts_last_week['positive']
# Get data from today:
df_out['num_tests_today'] = (df_out['positive'] + df_out['negative'])
# State population:
df_pop = pd.read_excel(('https://github.com/jwrichar/COVID19-mortality/blob/'
'master/data/us_population_by_state_2019.xlsx?raw=true'),
skiprows=2, skipfooter=5)
r = requests.get(('https://raw.githubusercontent.com/jwrichar/COVID19-mortality/'
'master/data/us-state-name-abbr.json'))
state_name_abbr_lookup = r.json()
df_pop.index = df_pop['Geographic Area'].apply(
lambda x: str(x).replace('.', '')).map(state_name_abbr_lookup)
df_pop = df_pop.loc[df_pop.index.dropna()]
df_out['total_population'] = df_pop['Total Resident\nPopulation']
# Tests per million people, based on today's test coverage
df_out['tests_per_million'] = 1e6 * \
(df_out['num_tests_today']) / df_out['total_population']
df_out['tests_per_million_7_days_ago'] = 1e6 * \
(df_out['num_tests_7_days_ago']) / df_out['total_population']
# People per test:
df_out['people_per_test'] = 1e6 / df_out['tests_per_million']
df_out['people_per_test_7_days_ago'] = \
1e6 / df_out['tests_per_million_7_days_ago']
# Drop states with messed up / missing data:
# Drop states with missing total pop:
to_drop_idx = df_out.index[df_out['total_population'].isnull()]
print('Dropping %i/%i states due to lack of population data: %s' %
(len(to_drop_idx), len(df_out), ', '.join(to_drop_idx)))
df_out.drop(to_drop_idx, axis=0, inplace=True)
df_pred = df_out.copy(deep=True) # Prediction DataFrame
# Criteria for model fitting:
# Drop states with missing test count 7 days ago:
to_drop_idx = df_out.index[df_out['num_tests_7_days_ago'].isnull()]
print('Dropping %i/%i states due to lack of tests: %s' %
(len(to_drop_idx), len(df_out), ', '.join(to_drop_idx)))
df_out.drop(to_drop_idx, axis=0, inplace=True)
# Drop states with no cases 7 days ago:
to_drop_idx = df_out.index[df_out['num_pos_7_days_ago'] == 0]
print('Dropping %i/%i states due to lack of positive tests: %s' %
(len(to_drop_idx), len(df_out), ', '.join(to_drop_idx)))
df_out.drop(to_drop_idx, axis=0, inplace=True)
# Criteria for model prediction:
# Drop states with missing test count today:
to_drop_idx = df_pred.index[df_pred['num_tests_today'].isnull()]
print('Dropping %i/%i states in prediction data due to lack of tests: %s' %
(len(to_drop_idx), len(df_pred), ', '.join(to_drop_idx)))
df_pred.drop(to_drop_idx, axis=0, inplace=True)
# Cast counts to int
df_pred['negative'] = df_pred['negative'].astype(int)
df_pred['positive'] = df_pred['positive'].astype(int)
return df_out, df_pred
def _get_test_counts(df_ts, state_list, date):
ts_list = []
for state in state_list:
state_ts = df_ts.loc[df_ts['state'] == state].copy()
# Back-fill any gaps left by missing data
state_ts.fillna(method='bfill', inplace=True)
record = state_ts.loc[df_ts['date'] == date]
ts_list.append(record)
df_ts = pd.concat(ts_list, ignore_index=True)
return df_ts.set_index('state', drop=True)
#hide
# Model utilities
def case_count_model_us_states(df):
# Normalize inputs in a way that is sensible:
# People per test: normalize to South Korea
# assuming S.K. testing is "saturated"
ppt_sk = np.log10(51500000. / 250000)
df['people_per_test_normalized'] = (
np.log10(df['people_per_test_7_days_ago']) - ppt_sk)
n = len(df)
# For each country, let:
# c_obs = number of observed cases
c_obs = df['num_pos_7_days_ago'].values
# c_star = number of true cases
# d_obs = number of observed deaths
d_obs = df[['death', 'num_pos_7_days_ago']].min(axis=1).values
# people per test
people_per_test = df['people_per_test_normalized'].values
covid_case_count_model = pm.Model()
with covid_case_count_model:
# Priors:
mu_0 = pm.Beta('mu_0', alpha=1, beta=100, testval=0.01)
# sig_0 = pm.Uniform('sig_0', lower=0.0, upper=mu_0 * (1 - mu_0))
alpha = pm.Bound(pm.Normal, lower=0.0)(
'alpha', mu=8, sigma=3, shape=1)
beta = pm.Bound(pm.Normal, upper=0.0)(
'beta', mu=-1, sigma=1, shape=1)
# beta = pm.Normal('beta', mu=0, sigma=1, shape=3)
sigma = pm.HalfNormal('sigma', sigma=0.5, testval=0.1)
# sigma_1 = pm.HalfNormal('sigma_1', sigma=2, testval=0.1)
# Model probability of case under-reporting as logistic regression:
mu_model_logit = alpha + beta * people_per_test
tau_logit = pm.Normal('tau_logit',
mu=mu_model_logit,
sigma=sigma,
shape=n)
tau = np.exp(tau_logit) / (np.exp(tau_logit) + 1)
c_star = c_obs / tau
# Binomial likelihood:
d = pm.Binomial('d',
n=c_star,
p=mu_0,
observed=d_obs)
return covid_case_count_model
#hide
df, df_pred = get_statewise_testing_data()
# Initialize the model:
mod = case_count_model_us_states(df)
# Run MCMC sampler
with mod:
trace = pm.sample(500, tune=500, chains=1)
#hide_input
n = len(trace['beta'])
# South Korea:
ppt_sk = np.log10(51500000. / 250000)
# Compute predicted case counts per state right now
logit_now = pd.DataFrame([
pd.Series(np.random.normal((trace['alpha'][i] + trace['beta'][i] * (np.log10(df_pred['people_per_test']) - ppt_sk)),
trace['sigma'][i]), index=df_pred.index)
for i in range(len(trace['beta']))])
prob_missing_now = np.exp(logit_now) / (np.exp(logit_now) + 1)
predicted_counts_now = np.round(df_pred['positive'] / prob_missing_now.mean(axis=0)).astype(int)
predicted_counts_now_lower = np.round(df_pred['positive'] / prob_missing_now.quantile(0.975, axis=0)).astype(int)
predicted_counts_now_upper = np.round(df_pred['positive'] / prob_missing_now.quantile(0.025, axis=0)).astype(int)
case_increase_percent = list(map(lambda x, y: (((x - y) / float(y))),
predicted_counts_now, df_pred['positive']))
df_summary = pd.DataFrame(
data = {
'Cases Reported': df_pred['positive'],
'Cases Estimated': predicted_counts_now,
'Percent Increase': case_increase_percent,
'Tests per Million People': df_pred['tests_per_million'].round(1),
'Cases Estimated (range)': list(map(lambda x, y: '(%i, %i)' % (round(x), round(y)),
predicted_counts_now_lower, predicted_counts_now_upper)),
'Cases per Million': ((df_pred['positive'] / df_pred['total_population']) * 1e6),
'Positive Test Rate': (df_pred['positive'] / (df_pred['positive'] + df_pred['negative']))
},
index=df_pred.index)
from datetime import datetime
display(Markdown("## Summary for the United States on %s:" % str(datetime.today())[:10]))
display(Markdown(f"**Reported Case Count:** {df_summary['Cases Reported'].sum():,}"))
display(Markdown(f"**Predicted Case Count:** {df_summary['Cases Estimated'].sum():,}"))
case_increase_percent = 100. * (df_summary['Cases Estimated'].sum() - df_summary['Cases Reported'].sum()) / df_summary['Cases Estimated'].sum()
display(Markdown("**Percentage Underreporting in Case Count:** %.1f%%" % case_increase_percent))
#hide
df_summary.loc[:, 'Ratio'] = df_summary['Cases Estimated'] / df_summary['Cases Reported']
df_summary.columns = ['Reported Cases', 'Est Cases', '% Increase',
'Tests per Million', 'Est Range',
'Cases per Million', 'Positive Test Rate',
'Ratio']
df_display = df_summary[['Reported Cases', 'Est Cases', 'Est Range', 'Ratio',
'Tests per Million', 'Cases per Million',
'Positive Test Rate']].copy()
```
## COVID-19 Case Estimates, by State
### Definition Of Fields:
- **Reported Cases**: The number of cases reported by each state, which is a function of how many tests are positive.
- **Est Cases**: The predicted number of cases, accounting for the fact that not everyone is tested.
- **Est Range**: The 95% confidence interval of the predicted number of cases.
- **Ratio**: `Estimated Cases` divided by `Reported Cases`.
- **Tests per Million**: The number of tests administered per one million people. Generally, the fewer tests administered per capita, the larger the difference between the reported and estimated case counts.
- **Cases per Million**: The number of **reported** cases per one million people.
- **Positive Test Rate**: The **reported** percentage of positive tests.
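As a toy illustration of how the `Ratio` and `Positive Test Rate` fields relate to the raw counts (made-up numbers, not data from any state):

```python
reported_cases = 1_000
estimated_cases = 4_000
negative_tests = 9_000

# Ratio: estimated cases divided by reported cases
ratio = estimated_cases / reported_cases

# Positive test rate: positives as a share of all administered tests
positive_rate = reported_cases / (reported_cases + negative_tests)

print(ratio, positive_rate)
```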
```
#hide_input
df_display.sort_values(
by='Est Cases', ascending=False).style.background_gradient(
cmap='Oranges').format(
{'Ratio': "{:.1f}"}).format(
{'Tests per Million': "{:.1f}"}).format(
{'Cases per Million': "{:.1f}"}).format(
{'Positive Test Rate': "{:.0%}"})
#hide_input
df_plot = df_summary.copy(deep=True)
# Compute predicted cases per million
df_plot['predicted_counts_now_pm'] = 1e6 * (
df_pred['positive'] / prob_missing_now.mean(axis=0)) / df_pred['total_population']
df_plot['predicted_counts_now_lower_pm'] = 1e6 * (
df_pred['positive'] / prob_missing_now.quantile(0.975, axis=0))/ df_pred['total_population']
df_plot['predicted_counts_now_upper_pm'] = 1e6 * (
df_pred['positive'] / prob_missing_now.quantile(0.025, axis=0))/ df_pred['total_population']
df_plot.sort_values('predicted_counts_now_pm', ascending=False, inplace=True)
xerr = [
df_plot['predicted_counts_now_pm'] - df_plot['predicted_counts_now_lower_pm'],
df_plot['predicted_counts_now_upper_pm'] - df_plot['predicted_counts_now_pm']]
fig, axs = plt.subplots(1, 1, figsize=(15, 15))
ax = plt.errorbar(df_plot['predicted_counts_now_pm'], range(len(df_plot)-1, -1, -1),
xerr=xerr, fmt='o', elinewidth=1, label='Estimate')
ax = plt.yticks(range(len(df_plot)), df_plot.index[::-1])
ax = plt.errorbar(df_plot['Cases per Million'], range(len(df_plot)-1, -1, -1),
xerr=None, fmt='.', color='k', label='Reported')
ax = plt.xlabel('COVID-19 Case Counts Per Million People', size=20)
ax = plt.legend(fontsize='xx-large', loc=4)
ax = plt.grid(linestyle='--', color='grey', axis='x')
```
## Appendix: Model Diagnostics
### Derived relationship between Test Capacity and Case Under-reporting
Plotted is the estimated relationship between test capacity (in terms of people per test -- larger = less testing) and the likelihood a COVID-19 case is reported (lower = more under-reporting of cases).
The lines represent the posterior samples from our MCMC run (note the x-axis is plotted on a log scale). The rug plot shows the current test capacity for each state (black '|') and the capacity one week ago (cyan '+'). For comparison, South Korea's testing capacity is currently at the very left of the graph (200 people per test).
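Each curve comes from a logistic link: for a posterior draw of α and β, the detection probability at normalized people-per-test x is p = exp(α + βx) / (1 + exp(α + βx)). A quick sketch with made-up α and β values (not actual posterior draws):

```python
import numpy as np

def detection_prob(x, alpha, beta):
    """Logistic probability that a true case is detected, given normalized people-per-test."""
    logit = alpha + beta * x
    return np.exp(logit) / (np.exp(logit) + 1)

# Made-up draw: detection probability falls as people-per-test rises (beta < 0)
x = np.linspace(0.0, 4.0, 5)
p = detection_prob(x, alpha=8.0, beta=-4.0)
print(p)
```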
```
#hide_input
# Plot pop/test vs. Prob of case detection for all posterior samples:
x = np.linspace(0.0, 4.0, 101)
logit_pcase = pd.DataFrame([
trace['alpha'][i] + trace['beta'][i] * x
for i in range(n)])
pcase = np.exp(logit_pcase) / (np.exp(logit_pcase) + 1)
fig, ax = plt.subplots(1, 1, figsize=(14, 9))
for i in range(n):
ax = plt.plot(10**(ppt_sk + x), pcase.iloc[i], color='grey', lw=.1, alpha=.5)
plt.xscale('log')
plt.xlabel('State-wise population per test', size=14)
plt.ylabel('Probability a true case is detected', size=14)
# rug plots:
ax=plt.plot(df_pred['people_per_test'], np.zeros(len(df_pred)),
marker='|', color='k', ls='', ms=20,
label='U.S. State-wise Test Capacity Now')
ax=plt.plot(df['people_per_test_7_days_ago'], np.zeros(len(df)),
marker='+', color='c', ls='', ms=10,
label='U.S. State-wise Test Capacity 7 Days Ago')
ax = plt.legend(fontsize='x-large')
```
## About this Analysis
This analysis was done by [Joseph Richards](https://twitter.com/joeyrichar).
This project[^1] uses the testing rates per state from [https://covidtracking.com/](https://covidtracking.com/), which reports case counts and mortality by state. This is used to **estimate the number of unreported (untested) COVID-19 cases in each U.S. state.**
The analysis makes a few assumptions:
1. The probability that a case is reported by a state is a function of the number of tests run per person in that state. Hence the degree of under-reported cases is a function of tests run per capita.
2. The underlying mortality rate is the same across every state.
3. Patients take time to succumb to COVID-19, so the mortality counts *today* reflect the case counts *7 days ago*. E.g., mortality rate = (cumulative deaths today) / (cumulative cases 7 days ago).
The model attempts to find the most likely relationship between state-wise test volume (per capita) and under-reporting, such that the true underlying mortality rates between the individual states are as similar as possible. The model simultaneously finds the most likely posterior distribution of mortality rates, the most likely *true* case count per state, and the test volume vs. case underreporting relationship.
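Assumption 3 in code form: when case counts are growing, the naive mortality rate (deaths over today's cases) understates the lagged rate the model uses. Toy numbers, for illustration only:

```python
deaths_today = 50
cases_today = 10_000
cases_7_days_ago = 2_000

naive_rate = deaths_today / cases_today        # 0.5% -- biased low during growth
lagged_rate = deaths_today / cases_7_days_ago  # 2.5% -- the lag assumed by the model
print(naive_rate, lagged_rate)
```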
[^1]: Full details about the model are available at: https://github.com/jwrichar/COVID19-mortality
# Notebook for Quickly Recreating Publication Plots
----
CPD DM Search
```
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from matplotlib.cm import ScalarMappable
from matplotlib.colors import Normalize
from matplotlib import rc
rc('text', usetex=True)
rc('font', family="serif")
rc('axes', labelsize=12)
rc('font', size=12)
rc('legend', fontsize=10)
rc('xtick', labelsize=12)
rc('ytick', labelsize=12)
```
## Plotting Figure 1
```
fig, ax = plt.subplots(figsize=(1.5 * (3 + 3/8), 3 + 3/8))
rawpulse = np.loadtxt('figure1/rawpulse.txt', delimiter=',')
optimumfilter = np.loadtxt('figure1/optimumfilter.txt', delimiter=',')
fitresult = np.loadtxt('figure1/fitresult.txt', delimiter=',')
fpgafilter = np.loadtxt('figure1/fpgafilter.txt', delimiter=',')
ax.plot(
rawpulse[:, 0] * 1e3,
rawpulse[:, 1],
label="Raw Pulse",
color='xkcd:grey',
)
ax.plot(
optimumfilter[:, 0] * 1e3,
optimumfilter[:, 1],
label="Optimum Filter",
color='blue',
linestyle='-',
)
ax.plot(
fitresult[:, 0] * 1e3,
fitresult[:, 1],
label="Fit Result",
color='k',
linestyle='dashed',
)
ax.plot(
fpgafilter[:, 0] * 1e3,
fpgafilter[:, 1],
label='FPGA Filter',
color='red',
marker='.',
)
ax.axhline(11.77, color='k', linestyle='dotted', label='Threshold')
ax.set_xlim(
25.6,
26.3,
)
ax.set_ylim(-60, 175)
ax.tick_params(which="both", direction="in", right=True, top=True)
ax.set_ylabel("Amplitude [ADC Bins]", fontsize=12)
ax.set_xlabel("Time [ms]", fontsize=12)
fig.tight_layout()
```
## Plotting Figure 2
```
fig, ax = plt.subplots(figsize=(1.5 * (3 + 3/8), 3 + 3/8))
ax.step(*np.loadtxt('figure2/recon_energy_drde.txt', delimiter=',', unpack=True), where='mid', linestyle='-', color='k')
left, bottom, width, height = [0.4, 0.52, 0.54, 0.38]
ax2 = fig.add_axes([left, bottom, width, height])
ax2.step(*np.loadtxt('figure2/e_etf_drde.txt', delimiter=',', unpack=True), where='mid', linestyle='-', color='k')
colors = plt.get_cmap("cool_r")(np.array([0, 0.5, 1]))
ax2.axvline(
1.5,
linestyle ='--',
color=colors[0],
)
ax2.axvline(
5.9,
linestyle ='--',
color=colors[1],
)
ax2.axvline(
6.5,
linestyle ='--',
color=colors[2],
)
ax2.axvline(
(6.5 - 1.739),
linestyle ='dotted',
color='grey',
)
ax2.axvline(
(5.9 - 1.739),
linestyle = 'dotted',
color='grey',
)
ax2.grid(b=False, which='both')
ax2.set_title('')
ax2.set_ylim(1e-2, 1e4)
ax2.set_xlim(0, 7)
ax2.set_yscale('log')
ax2.tick_params(which="both", direction="in", top=True, right=True, labelsize=12)
ax2.set_ylabel('')
ax2.set_xlabel(r"Calibrated $E_\mathrm{ETF}$ [keV]", fontsize=12)
ax2.set_yticks(ticks=np.geomspace(1e-2, 1e4, num=4))
ax2.set_xticks(ticks=np.linspace(0, 7, num=8))
ax.tick_params(which="both", direction="in", right=True, top=True)
ax.grid(b=False, which='both')
ax.set_ylabel(r"$\partial R / \partial E'$ [events/($\mathrm{g}\ \mathrm{eV}\ \mathrm{day}$)]", fontsize=12)
ax.set_ylabel(r"Differential Rate [events/($\mathrm{g}\ \mathrm{eV}\ \mathrm{day}$)]", fontsize=12)
ax.set_xlabel("Reconstructed Energy, $E'$ [eV]", fontsize=12)
ax.set_title('')
ax.set_yscale('log')
ax.set_ylim(1e-2, 1e4)
ax.set_xlim(0, 236.25)
fig.tight_layout()
```
## Plotting Figure 3
```
fig, ax = plt.subplots(figsize=(3 + 3/8) * np.array([1.5, 1]))
linewidth = 2
cpd_lim = np.loadtxt('figure3/cpd_limit.txt', delimiter=',')
ax.loglog(
cpd_lim[:, 0],
cpd_lim[:, 1],
color='xkcd:light red',
linestyle='-',
linewidth=linewidth,
label="SuperCDMS-CPD"
)
ax.tick_params(which="both", direction="in", top=True, right=True, labelsize=12)
ax.grid(which="both", linestyle="dotted", color='k')
ax.set_xlim(0.02, 1.5)
ax.set_ylim(1e-39, 1e-26)
plt.yticks(ticks=np.geomspace(1e-39, 1e-26, num=39 - 26 + 1))
lgd = ax.legend(loc="lower left", fontsize=9, framealpha=1, edgecolor='k', ncol=1)
ax.set_ylabel(r"DM-Nucleon Cross Section [cm$^2$]", fontsize=12)
ax.set_xlabel(r"DM Mass [GeV/$c^2$]", fontsize=12)
fig.tight_layout()
```
## dRdE Plot
```
fig, ax = plt.subplots(figsize=(3 + 3/8) * np.array([1.5, 1]))
ax.step(*np.loadtxt('figure4/recon_energies_low_drde.txt', delimiter=',', unpack=True), where='mid', linestyle='-', color='k')
drde_files = sorted(glob('figure4/oi*'))
colors = plt.get_cmap('viridis')(np.linspace(0.1, 0.9, num=len(drde_files)))
for f, c in zip(drde_files, colors):
en, drde = np.loadtxt(f, delimiter=',', unpack=True)
ax.plot(en, drde, linestyle='dashed', color=c)
ax.set_ylim(1e-1, 1e4)
masses = [float(s.split('_')[3]) for s in sorted(glob('figure4/oi_drde_mass_*'))]
mappable = ScalarMappable(
norm=Normalize(
vmin=np.log(masses[0]),
vmax=np.log(400),
),
cmap=plt.get_cmap("viridis"),
)
mappable.set_array(masses)
cb = fig.colorbar(mappable, ax=ax)
ticks = cb.get_ticks()
cb.set_ticks(ticks)
cb.set_ticklabels(np.around(np.exp(ticks), decimals=-1).astype('int'))
cb.set_label(r"DM Mass [MeV/$c^2$]", fontsize=12)
ax.grid(b=False, which='minor')
ax.grid(which='major', color='k', linestyle='dotted')
ax.set_xlim(0, 0.039376 * 1e3)
ax.set_title('')
ax.set_ylabel(r"$\partial R / \partial E'$ [events/($\mathrm{g}\ \mathrm{eV}\ \mathrm{day}$)]")
ax.set_xlabel(r"Reconstructed Energy, $E'$ [eV]")
ax.set_yscale('log')
ax.tick_params(which="both", direction="in", right=True, top=True)
for ii, l in enumerate(ax.get_lines()):
if l.get_linestyle() == '--':
l.set_zorder(len(ax.get_lines()) - ii)
else:
l.set_zorder(0)
fig.tight_layout()
```
# **Space X Falcon 9 First Stage Landing Prediction**
## Web scraping Falcon 9 and Falcon Heavy Launches Records from Wikipedia
We will perform web scraping to collect Falcon 9 historical launch records from a Wikipedia page titled `List of Falcon 9 and Falcon Heavy launches`
https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches
## Objectives
Web scrape Falcon 9 launch records with `BeautifulSoup`:
* Extract a Falcon 9 launch records HTML table from Wikipedia
* Parse the table and convert it into a Pandas data frame
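Both objectives can be previewed on a tiny inline HTML table before tackling the real Wikipedia page. A self-contained sketch (the table contents are made up):

```python
import pandas as pd
from bs4 import BeautifulSoup

html = """
<table>
  <tr><th>Flight No.</th><th>Launch outcome</th></tr>
  <tr><td>1</td><td>Success</td></tr>
  <tr><td>2</td><td>Failure</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
rows = soup.find_all("tr")

# Header cells become column names; body cells become rows of a DataFrame
columns = [th.text for th in rows[0].find_all("th")]
data = [[td.text for td in row.find_all("td")] for row in rows[1:]]
df = pd.DataFrame(data, columns=columns)
print(df)
```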
```
import sys
import requests
# Trying to import BeautifulSoup
try:
from bs4 import BeautifulSoup
except ImportError as e:
!{sys.executable} -m pip install beautifulsoup4
from bs4 import BeautifulSoup
import re
import unicodedata
import pandas as pd
def date_time(table_cells):
"""
This function returns the date and time from the HTML table cell
Input: the element of a table data cell
"""
return [data_time.strip() for data_time in list(table_cells.strings)][0:2]
def booster_version(table_cells):
"""
This function returns the booster version from the HTML table cell
Input: the element of a table data cell
"""
out=''.join([booster_version for i,booster_version in enumerate( table_cells.strings) if i%2==0][0:-1])
return out
def landing_status(table_cells):
"""
This function returns the landing status from the HTML table cell
Input: the element of a table data cell
"""
out=[i for i in table_cells.strings][0]
return out
def get_mass(table_cells):
    """
    This function returns the payload mass (e.g. "500 kg") from the HTML table cell
    """
    mass=unicodedata.normalize("NFKD", table_cells.text).strip()
    if mass:
        new_mass=mass[0:mass.find("kg")+2]
    else:
        new_mass=0
    return new_mass
def extract_column_from_header(row):
    """
    This function returns the column name from the HTML table header cell
    Input: the element of a table header cell
    """
    if (row.br):
        row.br.extract()
    if row.a:
        row.a.extract()
    if row.sup:
        row.sup.extract()
    column_name = ' '.join(row.contents)
    # Filter out digit and empty names
    if not(column_name.strip().isdigit()):
        column_name = column_name.strip()
        return column_name
static_url = "https://en.wikipedia.org/w/index.php?title=List_of_Falcon_9_and_Falcon_Heavy_launches&oldid=1027686922"
```
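As a quick illustrative check of the helpers above, `date_time` can be exercised on a hand-built table cell. Note the `<td>` markup below is invented for this example, and the helper is restated so the sketch runs standalone:

```python
from bs4 import BeautifulSoup

# Same logic as the date_time helper above: the first two strings
# inside the cell are the date and the time.
def date_time(table_cells):
    return [s.strip() for s in list(table_cells.strings)][0:2]

# Hypothetical table cell, in the same shape as the Wikipedia ones
cell = BeautifulSoup("<td>4 June 2010,<br/>18:45</td>", "html.parser").td
print(date_time(cell))  # ['4 June 2010,', '18:45']
```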
Next, request the HTML page from the above URL and get a `response` object
### TASK 1: Request the Falcon 9 Launch Wiki page from its URL
First, let's perform an HTTP GET request for the Falcon 9 launch HTML page.
```
# use requests.get() method with the provided static_url
# assign the response to an object
page = requests.get(static_url)
page.status_code
```
Create a `BeautifulSoup` object from the HTML `response`
```
# Use BeautifulSoup() to create a BeautifulSoup object from a response text content
soup = BeautifulSoup(page.text, 'html.parser')
```
Print the page title to verify if the `BeautifulSoup` object was created properly
```
# Use soup.title attribute
soup.title
```
### TASK 2: Extract all column/variable names from the HTML table header
Next, we want to collect all relevant column names from the HTML table header
Let's try to find all tables on the wiki page first. If you need to refresh your memory about `BeautifulSoup`, please check the external reference link towards the end of this lab
```
# Use the find_all function in the BeautifulSoup object, with element type `table`
# Assign the result to a list called `html_tables`
html_tables = soup.find_all('table')
```
The third table is our target table; it contains the actual launch records.
```
# Let's print the third table and check its content
first_launch_table = html_tables[2]
print(first_launch_table)
```
You should be able to see the column names embedded in the table header elements `<th>` as follows:
```
<tr>
<th scope="col">Flight No.
</th>
<th scope="col">Date and<br/>time (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)
</th>
<th scope="col"><a href="/wiki/List_of_Falcon_9_first-stage_boosters" title="List of Falcon 9 first-stage boosters">Version,<br/>Booster</a> <sup class="reference" id="cite_ref-booster_11-0"><a href="#cite_note-booster-11">[b]</a></sup>
</th>
<th scope="col">Launch site
</th>
<th scope="col">Payload<sup class="reference" id="cite_ref-Dragon_12-0"><a href="#cite_note-Dragon-12">[c]</a></sup>
</th>
<th scope="col">Payload mass
</th>
<th scope="col">Orbit
</th>
<th scope="col">Customer
</th>
<th scope="col">Launch<br/>outcome
</th>
<th scope="col"><a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">Booster<br/>landing</a>
</th></tr>
```
Next, we just need to iterate through the `<th>` elements and apply the provided `extract_column_from_header()` to extract the column names one by one
```
column_names = []
# iterate over the <th> elements of the target launch table
for th in first_launch_table.find_all('th'):
    try:
        name = extract_column_from_header(th)
        if name is not None and len(name) > 0:
            column_names.append(name)
    except Exception:
        pass  # skip header cells that cannot be parsed
```
Check the extracted column names
```
print(column_names)
```
## TASK 3: Create a data frame by parsing the launch HTML tables
We will create an empty dictionary with keys from the extracted column names in the previous task. Later, this dictionary will be converted into a Pandas dataframe
```
launch_dict= dict.fromkeys(column_names)
# Remove an irrelevant column
del launch_dict['Date and time ( )']
launch_dict['Flight No.'] = []
launch_dict['Launch site'] = []
launch_dict['Payload'] = []
launch_dict['Payload mass'] = []
launch_dict['Orbit'] = []
launch_dict['Customer'] = []
launch_dict['Launch outcome'] = []
launch_dict['Version Booster']=[]
launch_dict['Booster landing']=[]
launch_dict['Date']=[]
launch_dict['Time']=[]
```
Next, we just need to fill up the `launch_dict` with launch records extracted from table rows.
Usually, HTML tables in Wiki pages are likely to contain unexpected annotations and other types of noise, such as reference links `B0004.1[8]`, missing values `N/A [e]`, inconsistent formatting, etc.
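As a small, hedged illustration (the pattern and examples below are ours, not part of the lab), bracketed reference annotations like these can be stripped with a regular expression before parsing:

```python
import re

def strip_annotations(text):
    # drop bracketed annotations such as "[8]" or "[e]" (illustrative helper)
    return re.sub(r"\[[\w ]*?\]", "", text).strip()

print(strip_annotations("B0004.1[8]"))  # B0004.1
print(strip_annotations("N/A [e]"))     # N/A
```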
```
extracted_row = 0
#Extract each table
for table_number,table in enumerate(soup.find_all('table',"wikitable plainrowheaders collapsible")):
# get table row
for rows in table.find_all("tr"):
        #check to see if the first table heading is a number corresponding to a launch number
if rows.th:
if rows.th.string:
flight_number=rows.th.string.strip()
flag=flight_number.isdigit()
else:
flag=False
#get table element
row=rows.find_all('td')
        #if it is a number, save the cells in a dictionary
if flag:
extracted_row += 1
# Flight Number value
launch_dict["Flight No."].append(flight_number)
datatimelist=date_time(row[0])
# Date value
date = datatimelist[0].strip(',')
launch_dict["Date"].append(date)
# Time value
time = datatimelist[1]
launch_dict["Time"].append(time)
# Booster version
bv=booster_version(row[1])
if not(bv):
bv=row[1].a.string
launch_dict["Version Booster"].append(bv)
# Launch Site
launch_site = row[2].a.string
launch_dict["Launch site"].append(launch_site)
# Payload
payload = row[3].a.string
launch_dict["Payload"].append(payload)
# Payload Mass
payload_mass = get_mass(row[4])
launch_dict["Payload mass"].append(payload_mass)
# Orbit
orbit = row[5].a.string
launch_dict["Orbit"].append(orbit)
# Customer
customer = row[6].a.string
launch_dict["Customer"].append(customer)
# Launch outcome
launch_outcome = list(row[7].strings)[0]
launch_dict["Launch outcome"].append(launch_outcome)
# Booster landing
booster_landing = landing_status(row[8])
launch_dict["Booster landing"].append(booster_landing)
headings = []
for key,values in dict(launch_dict).items():
if key not in headings:
headings.append(key)
if values is None:
del launch_dict[key]
def pad_dict_list(dict_list, padel):
lmax = 0
for lname in dict_list.keys():
lmax = max(lmax, len(dict_list[lname]))
for lname in dict_list.keys():
ll = len(dict_list[lname])
if ll < lmax:
dict_list[lname] += [padel] * (lmax - ll)
return dict_list
pad_dict_list(launch_dict,0)
df = pd.DataFrame.from_dict(launch_dict)
df.head()
df.to_csv('spacex_web_scraped.csv', index=False)
```
#### Author : Ashlin Darius Govindasamy
### Cleaning data associated with bills (utterances and summaries) so they are ready for input to the pointer-gen model - this is the new cleaning method implementation
There are 6541 BIDs which overlap between the utterances and summaries datasets (using all the summary data). There are 359 instances in which the summaries are greater than 100 tokens in length, and 41 instances in which the summaries are greater than 201 tokens in length. In these instances, summaries with fewer than 201 tokens were cut to their first 100 tokens (anything over 201 tokens is cut entirely). There are 374 instances in which the utterances are less than 70 tokens in length. In the final (old) dataset of 6000 examples, there are 865 examples of resolutions.
There are 374+127=501 instances in which the utterances are less than 100 tokens in length.
```
import json
import numpy as np
import ast
import re
import spacy
from collections import Counter,defaultdict
import warnings
warnings.filterwarnings('ignore')
nlp = spacy.load("en_core_web_sm")
with open("../data/bill_summaries.json") as summaries_file: # loading in the data
bill_summaries = json.load(summaries_file)
with open("../data/bill_utterances.json") as utterances_file:
bill_utterances = json.load(utterances_file)
ca_bill_utterances = bill_utterances['CA']
```
### Cleaning data before the processing to format which is accepted by pointer-gen model
```
def clean_bill_summaries(bill_summaries,max_summary_length=201,ignore_resolutions=False):
    """ post-processing to remove bill summary entries matching certain criteria:
            1) if the summary does not start with "This" (probable encoding error)
            2) if "page 1" occurs in the text (indicates improper encoding)
            3) if the text is over max_summary_length tokens in length (very long summaries indicate probable encoding error)
        -for bill summaries which have ordering (" 1)"," 2)","(1)","(2)"," a)","(a)"), removes the implicit ordering
        args:
            max_summary_length: max length of summaries to keep
            ignore_resolutions (bool): whether to ignore resolutions and only output bills
    """
num_cutoff_counter=0 # counts the number of summaries ignored due to being too long
bill_summary_info = defaultdict(dict) # stores both summaries and utterances for each CA bill
for bid,summary in bill_summaries.items():
text = summary['text']
if "page 1" in text: # ignore this instance, indicator of encoding error
continue
if text[0:4] != "This": # relatively strong indicator that there was error in encoding
continue
if ignore_resolutions and "R" in bid: # ignore this instance if wanting to ignore resolutions
continue
tokens = [str(token) for token in nlp(text)]
if len(tokens)>max_summary_length: # ignore this instance, includes many errors in pdf encoding in which end state not reached
num_cutoff_counter += 1
continue
# removing the implicit ordering for all instances
if " 1)" in text or " 2)" in text or "(1)" in text or "(2)" in text or " a)" in text or " b)" in text or "(a)" in text or "(b)" in text:
text = re.sub(" \([0-9]\)","",text)
text = re.sub(" [0-9]\)","",text)
text = re.sub(" \([a-j]\)","",text)
text = re.sub(" [a-j]\)","",text)
tokens = [str(token) for token in nlp(text)]
bill_summary_info[bid]['summary'] = summary
bill_summary_info[bid]['summary']['text']=text # text is occasionally updated (when ordering removed)
bill_summary_info[bid]['summary_tokens'] = tokens
return bill_summary_info,num_cutoff_counter
bill_summary_info,_ = clean_bill_summaries(bill_summaries,max_summary_length=650,ignore_resolutions=False)
len(bill_summary_info)
def clean_bill_utterances(bill_summary_info,ca_bill_utterances,minimum_utterance_tokens=99,token_cutoff=1000):
""" cleans and combines the summary and utterance data
args:
bill_summary_info: holds cleaned information about bill summaries
token_cutoff: max number of tokens to consider for utterances
minimum_utterance_tokens: minimum number of utterance tokens allowable
"""
num_utterance_counter=0 # counts num. examples ignored due to utterances being too short
all_bill_info = {}
all_tokens_dict = {} # stores all tokens for a given bid (ignoring token_cutoff)
for bid in ca_bill_utterances:
if bid in bill_summary_info: # there is a summary assigned to this bill
all_utterances = [] # combining all discussions (did) for this bid together
for utterance_list in ca_bill_utterances[bid]['utterances']:
all_utterances+=utterance_list
all_token_lists = [[str(token) for token in nlp(utterance)] for utterance in all_utterances]
all_tokens = [] # getting a single stream of tokens
for token_list in all_token_lists:
all_tokens += token_list
if len(all_tokens)-len(all_token_lists)>=minimum_utterance_tokens: # ignore bids which don't have enough utterance tokens
all_tokens_dict[bid]=[token.lower() for token in all_tokens] # adding all utterance tokens
all_tokens_dict[bid]+=[token.lower() for token in bill_summary_info[bid]['summary_tokens']] # adding all summary tokens
all_bill_info[bid] = bill_summary_info[bid]
all_tokens = all_tokens[:token_cutoff] # taking up to max number of tokens
all_bill_info[bid]['utterances']=all_utterances
all_bill_info[bid]['utterance_tokens']=all_tokens
all_bill_info[bid]['resolution'] = "R" in bid
else:
num_utterance_counter += 1
return all_bill_info,all_tokens_dict,num_utterance_counter
all_bill_info,all_tokens_dict,_ = clean_bill_utterances(bill_summary_info,ca_bill_utterances,token_cutoff=500)
len(all_bill_info)
```
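The ordering-removal step in `clean_bill_summaries` can be seen on a toy summary (the sentence below is invented for illustration):

```python
import re

text = "This bill does several things: (1) funds schools; (2) raises fees."
# same substitutions applied inside clean_bill_summaries above
text = re.sub(r" \([0-9]\)", "", text)
text = re.sub(r" [0-9]\)", "", text)
print(text)  # This bill does several things: funds schools; raises fees.
```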
### Processing data to get to format which is accepted by pointer-gen model
```
### using pretrained Glove vectors
word_to_embedding = {}
with open("../data/glove.6B/glove.6B.300d.txt") as glove_file:
for line in glove_file.readlines():
values = line.split()
word = values[0]
coefs = np.asarray(values[1:],dtype='float32')
word_to_embedding[word] = coefs
print(len(word_to_embedding))
# getting all unique tokens used to get words which will be part of the fixed vocabulary
## specifically, targeting a vocabulary size of 30k (adding less common words up to that threshold)
all_tokens = []
for bid in all_tokens_dict:
all_tokens += all_tokens_dict[bid]
word_freq = Counter(all_tokens)
words_by_freq = (list(word_freq.items()))
words_by_freq.sort(key=lambda tup: tup[1],reverse=True) # sorting by occurrence freq.
most_freq_words = [word_tup[0] for word_tup in words_by_freq if word_tup[1] >= 3]
most_freq_words += [word_tup[0] for word_tup in words_by_freq if word_tup[1] == 2 and word_tup[0] in word_to_embedding][:30000-3-len(most_freq_words)]
less_freq_words = [word_tup[0] for word_tup in words_by_freq if word_tup[1] < 2]
print(most_freq_words[0:10])
print(less_freq_words[0:10])
print(len(most_freq_words),len(less_freq_words))
## new addition to this where I store the word embeddings for the vocabulary
# assigning indices for all words, and adding <PAD>,<SENT>,<UNK> symbols
fixed_vocab_word_to_index = {"<PAD>":0,"<SENT>":1,"<UNK>":2} # for words assigned to the fixed_vocabulary
fixed_vocab_index_to_word = {0:"<PAD>",1:"<SENT>",2:"<UNK>"}
word_embeddings = [np.random.uniform(low=-0.05,high=0.05,size=300).astype("float32") for _ in range(3)]
index = 3 # starting index for all words
# assigning indices to most common words:
for word in most_freq_words:
fixed_vocab_word_to_index[word]=index
fixed_vocab_index_to_word[index]=word
index += 1
if word in word_to_embedding: # use pre-trained embedding
word_embeddings.append(word_to_embedding[word])
else: # initialize a trainable embedding
word_embeddings.append(np.random.uniform(low=-0.05,high=0.05,size=300).astype("float32"))
word_embeddings = np.stack(word_embeddings)
print(len(fixed_vocab_word_to_index),word_embeddings.shape)
## saving all of the vocabulary related information
np.save("../data/len_500_data/word_embeddings.npy",word_embeddings)
with open("../data/len_500_data/word_to_index.json","w+") as out_file:
json.dump(fixed_vocab_word_to_index,out_file)
with open("../data/len_500_data/index_to_word.json","w+") as out_file:
json.dump(fixed_vocab_index_to_word,out_file)
num_fixed_words = len(fixed_vocab_word_to_index)
token_cutoff=500 # this is the amount to pad up to for the input representation
# creating the input data representations for the model - input is padded up to a length of 500
x = [] # stores the integer/index representation for all input
x_indices = [] # stores the joint probability vector indices for all words in the input
x_indices_dicts = [] # stores the dicts for assigning words which are not in the fixed_vocabulary
att_mask = [] # stores the attention masks (0 for valid words, -np.inf for padding)
## data stores for debugging/error analysis
bill_information_dict = {} # stores summary(text),utterances(2d list of utterances),resolution(boolean) for each BID
bids = [] # stores the BIDs in sequential order
for bid in all_bill_info:
# creating representations for data store
bill_information_dict[bid] = {"summary":all_bill_info[bid]["summary"]["text"],"utterances":all_bill_info[bid]["utterances"],"resolution":all_bill_info[bid]["resolution"]}
bids.append(bid)
# creating the standard input representation:
utterance_tokens = [token.lower() for token in all_bill_info[bid]["utterance_tokens"]]
x_rep = [] # assigning indices to words, if input word not part of fixed_vocab, assign to <UNK>
for token in utterance_tokens:
if token in fixed_vocab_word_to_index:
x_rep.append(fixed_vocab_word_to_index[token])
else:
x_rep.append(fixed_vocab_word_to_index['<UNK>'])
att_mask_rep = [0 for i in range(len(x_rep))]
amount_to_pad = token_cutoff-len(x_rep)
x_rep += [0 for i in range(amount_to_pad)] # padding the input
att_mask_rep += [-np.inf for i in range(amount_to_pad)]
x.append(x_rep)
att_mask.append(att_mask_rep)
# creating the joint probability representation for the input:
## (the index in joint prob vector that each input word probability should be assigned to)
index=num_fixed_words # start index for assignment to joint_probability vector, length of fixed_vocab_size
non_vocab_dict = {} # stores all OOV words for this bid
this_x_indices = [] # joint prob vector indices for this bid
for token in utterance_tokens:
if token in fixed_vocab_word_to_index:
this_x_indices.append(fixed_vocab_word_to_index[token])
else:
if token in non_vocab_dict: # this word is OOV but has been seen before
this_x_indices.append(non_vocab_dict[token])
else: # this word is OOV and has never been seen before
non_vocab_dict[token]=index
this_x_indices.append(index)
index += 1
x_indices_dicts.append(non_vocab_dict)
this_x_indices += [0 for i in range(amount_to_pad)] # padding will be masked out in att calculation, so padding with 0 here is valid
x_indices.append(this_x_indices)
# this is the largest number of OOV words for a given bid utterances
max([len(dic) for dic in x_indices_dicts])
# creating the output representations for the model - output is padded up to a length of 101
## the last index is for <SENT> to indicate the end of decoding (assuming representation is shorter than 100 tokens)
## assuming the summary is greater than 100 tokens in length, we simply cut off the first 101 tokens
### when we do this cutoff, we do NOT include that <SENT> token as the 102nd token
## all words in output that are not in input utterances or in fixed_vocab_vector are assigned 2:<UNK>
y = [] # stores the index representations for all words in the summaries (this is never used)
loss_mask = [] # 1 for valid words, 0 for padding
decoder_x = [] # starts with 1:<SENT>, followed by y[0:len(headline)-1] (this is the input for teacher-forcing)(101x1)
y_indices = [] # index for the correct decoder prediction, in the joint-probability vector
total_oov_words = 0
resolution_bools = [] # bool, whether a given example is a resolution (False=bill); used for train_test_split
for bid_i,bid in enumerate(all_bill_info.keys()):
# creating standard output representation:
summary_tokens = [token.lower() for token in all_bill_info[bid]["summary_tokens"]]
y_rep = [] # not used in the model, stores indices using only fixed_vocab_vector
for token in summary_tokens:
if token in fixed_vocab_word_to_index:
y_rep.append(fixed_vocab_word_to_index[token])
else:
y_rep.append(fixed_vocab_word_to_index['<UNK>'])
resolution_bools.append(all_bill_info[bid]['resolution'])
## this is a new addition from before, including longer summaries, but just cutting off the text
if len(y_rep) > 100: # simply cutoff to the first 101 tokens
y_rep = y_rep[:101]
else: # append a end-of-sentence indicator
y_rep.append(fixed_vocab_word_to_index['<SENT>'])
loss_mask_rep = [1 for i in range(len(y_rep))]
decoder_x_rep = [1]+y_rep[0:len(y_rep)-1] # embedding word in input but not in fixed_vocab is currently set to <UNK>
amount_to_pad = 101-len(y_rep) # 100+1 represents final <SENT> prediction
y_rep += [0 for i in range(amount_to_pad)]
loss_mask_rep += [0 for i in range(amount_to_pad)] # cancels out loss contribution from padding
decoder_x_rep += [0 for i in range(amount_to_pad)]
# creating joint-probability representation of output:
non_vocab_dict = x_indices_dicts[bid_i]
y_indices_rep = []
for token in summary_tokens:
if token in fixed_vocab_word_to_index: # word is in fixed_vocabulary
y_indices_rep.append(fixed_vocab_word_to_index[token])
elif token in non_vocab_dict: # word is OOV but in the input utterances, use the index assigned to this word in x_indices
y_indices_rep.append(non_vocab_dict[token])
else: # word is OOV and not in input utterances
y_indices_rep.append(fixed_vocab_word_to_index["<UNK>"])
total_oov_words += 1
if len(y_indices_rep) > 100: # simply cutoff to the first 101 tokens
y_indices_rep = y_indices_rep[:101]
else: # if len <= 100, last prediction should be <SENT>
y_indices_rep.append(fixed_vocab_word_to_index['<SENT>'])
y_indices_rep += [0 for i in range(amount_to_pad)] # padding will be ignored by loss_mask
y.append(y_rep)
loss_mask.append(loss_mask_rep)
decoder_x.append(decoder_x_rep)
y_indices.append(y_indices_rep)
x = np.array(x).astype("int32")
x_indices = np.array(x_indices).astype("int32")
att_mask = np.array(att_mask).astype("float32")
loss_mask = np.array(loss_mask).astype("float32")
decoder_x = np.array(decoder_x).astype("int32")
y_indices = np.array(y_indices).astype("int32")
print(x.shape,x_indices.shape,att_mask.shape)
print(loss_mask.shape,decoder_x.shape,y_indices.shape)
bids = np.array(bids)
print(bids.shape,len(bill_information_dict))
```
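To make the `x_indices` bookkeeping concrete, here is a toy sketch of the extended-vocabulary indexing used above: in-vocabulary tokens keep their fixed indices, while each distinct OOV token gets the next index after the fixed vocabulary. The four-word vocabulary and the sentence are invented; the real run uses the ~30k-word vocabulary built earlier.

```python
# Hypothetical fixed vocabulary (real one has ~30k entries plus the symbols)
fixed_vocab = {"<PAD>": 0, "<SENT>": 1, "<UNK>": 2, "the": 3}

def extended_indices(tokens, vocab):
    oov, out = {}, []
    next_idx = len(vocab)            # OOV indices start after the fixed vocab
    for tok in tokens:
        if tok in vocab:
            out.append(vocab[tok])
        else:
            if tok not in oov:       # first sighting of this OOV word
                oov[tok] = next_idx
                next_idx += 1
            out.append(oov[tok])
    return out, oov

idx, oov = extended_indices(["the", "bill", "amends", "the", "bill"], fixed_vocab)
print(idx)  # [3, 4, 5, 3, 4]
print(oov)  # {'bill': 4, 'amends': 5}
```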
#### Shuffling the data so that only bills are in the validation and test sets
```
from sklearn.utils import shuffle
x_resolution = x[resolution_bools]
x_indices_resolution = x_indices[resolution_bools]
att_mask_resolution = att_mask[resolution_bools]
loss_mask_resolution = loss_mask[resolution_bools]
decoder_x_resolution = decoder_x[resolution_bools]
y_indices_resolution = y_indices[resolution_bools]
bids_resolution = bids[resolution_bools]
bill_bools = [not res_bool for res_bool in resolution_bools] # reversal
x_bill = x[bill_bools]
x_indices_bill = x_indices[bill_bools]
att_mask_bill = att_mask[bill_bools]
loss_mask_bill = loss_mask[bill_bools]
decoder_x_bill = decoder_x[bill_bools]
y_indices_bill = y_indices[bill_bools]
bids_bill = bids[bill_bools]
print(x_resolution.shape,loss_mask_resolution.shape,bids_resolution.shape)
print(x_bill.shape,loss_mask_bill.shape,bids_bill.shape)
# shuffling only the bill data - in order to get the validation and val set data
x_bill,x_indices_bill,att_mask_bill,loss_mask_bill,decoder_x_bill,y_indices_bill,bids_bill = shuffle(x_bill,x_indices_bill,att_mask_bill,loss_mask_bill,decoder_x_bill,y_indices_bill,bids_bill,random_state=1)
x_bill_val,x_indices_bill_val,att_mask_bill_val,loss_mask_bill_val,decoder_x_bill_val,y_indices_bill_val,bids_bill_val = x_bill[:400],x_indices_bill[:400],att_mask_bill[:400],loss_mask_bill[:400],decoder_x_bill[:400],y_indices_bill[:400],bids_bill[:400]
x_bill_train,x_indices_bill_train,att_mask_bill_train,loss_mask_bill_train,decoder_x_bill_train,y_indices_bill_train,bids_bill_train = x_bill[400:],x_indices_bill[400:],att_mask_bill[400:],loss_mask_bill[400:],decoder_x_bill[400:],y_indices_bill[400:],bids_bill[400:]
print(x_bill_val.shape,loss_mask_bill_val.shape,bids_bill_val.shape)
print(x_bill_train.shape,loss_mask_bill_train.shape,bids_bill_train.shape)
## to remove resolutions, simply don't include them here
# shuffling the training set - which is a combination of bill and resolution data
x_train = np.vstack([x_bill_train,x_resolution])
x_indices_train = np.vstack([x_indices_bill_train,x_indices_resolution])
att_mask_train = np.vstack([att_mask_bill_train,att_mask_resolution])
loss_mask_train = np.vstack([loss_mask_bill_train,loss_mask_resolution])
decoder_x_train = np.vstack([decoder_x_bill_train,decoder_x_resolution])
y_indices_train = np.vstack([y_indices_bill_train,y_indices_resolution])
bids_train = np.concatenate([bids_bill_train,bids_resolution])
x_train,x_indices_train,att_mask_train,loss_mask_train,decoder_x_train,y_indices_train = shuffle(x_train,x_indices_train,att_mask_train,loss_mask_train,decoder_x_train,y_indices_train,random_state=2)
print(x_train.shape,loss_mask_train.shape,bids_train.shape)
# adding all the data together, with the final 400 instances being the val and test sets
x_final = np.vstack([x_train,x_bill_val])
x_indices_final = np.vstack([x_indices_train,x_indices_bill_val])
att_mask_final = np.vstack([att_mask_train,att_mask_bill_val])
loss_mask_final = np.vstack([loss_mask_train,loss_mask_bill_val])
decoder_x_final = np.vstack([decoder_x_train,decoder_x_bill_val])
y_indices_final = np.vstack([y_indices_train,y_indices_bill_val])
bids_final = np.concatenate([bids_train,bids_bill_val])
print(x_final.shape,loss_mask_final.shape,bids_final.shape)
## there is no final shuffling, as the last 400 datapoints represent the validation/test sets
subdir = "len_500_data"
np.save("../data/{}/x_500.npy".format(subdir),x_final)
np.save("../data/{}/x_indices_500.npy".format(subdir),x_indices_final)
np.save("../data/{}/att_mask_500.npy".format(subdir),att_mask_final)
np.save("../data/{}/loss_mask_500.npy".format(subdir),loss_mask_final)
np.save("../data/{}/decoder_x_500.npy".format(subdir),decoder_x_final)
np.save("../data/{}/y_indices_500.npy".format(subdir),y_indices_final)
np.save("../data/{}/bids_500.npy".format(subdir),bids_final)
with open("../data/len_500_data/bill_information.json","w+") as out_file:
json.dump(bill_information_dict,out_file)
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn-white')
```
## Fourier Pairs
$$
\cos(2\pi f_0 t)
\Longleftrightarrow
\frac{1}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right]
$$
$$
\sin(2\pi f_0 t)
\Longleftrightarrow
\frac{1}{2i}\left[\delta(f - f_0) - \delta(f + f_0)\right]
$$
$$
\frac{1}{\sqrt{2\pi\sigma_t^2}}e^{-t^2 / (2\sigma_t^2)}
\Longleftrightarrow
e^{-f^2 / (2\sigma_f^2)};~~~~ \sigma_f = \frac{1}{2\pi\sigma_t}
$$
$$
\Pi_T(t)
\Longleftrightarrow
{\rm sinc}(fT)
$$
$$
\sum_{n=-\infty}^\infty \delta(t - nT)
\Longleftrightarrow
\frac{1}{T} \sum_{n=-\infty}^\infty \delta(f - n/T)
$$
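These pairs can be verified numerically. The following sketch (ours, not part of the original figure code) checks the Gaussian pair by approximating the continuous Fourier integral with an FFT scaled by the sample spacing:

```python
import numpy as np

# Sample the normalized Gaussian and compare its (dt-scaled) FFT with the
# analytic transform exp(-f^2 / (2 sigma_f^2)), sigma_f = 1 / (2 pi sigma_t).
sigma_t = 0.1
dt = 0.001
n = 10000
t = dt * np.arange(-n // 2, n // 2)          # t in [-5, 5)
g = np.exp(-t**2 / (2 * sigma_t**2)) / np.sqrt(2 * np.pi * sigma_t**2)

G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) * dt
f = np.fft.fftshift(np.fft.fftfreq(n, dt))

sigma_f = 1 / (2 * np.pi * sigma_t)
analytic = np.exp(-f**2 / (2 * sigma_f**2))
print(np.max(np.abs(G.real - analytic)))     # tiny: numerical and analytic agree
```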
```
from matplotlib.font_manager import FontProperties
fig, ax = plt.subplots(4, 2, figsize=(10, 6))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.02, top=0.95,
hspace=0.3, wspace=0.2)
x = np.linspace(-5, 5, 1000)
for axi in ax.flat:
axi.xaxis.set_major_formatter(plt.NullFormatter())
axi.yaxis.set_major_formatter(plt.NullFormatter())
# draw center line
axi.axvline(0, linestyle='dotted', color='gray')
axi.axhline(0, linestyle='dotted', color='gray')
style_re = dict(linestyle='solid', color='k', linewidth=2)
style_im = dict(linestyle='solid', color='gray', linewidth=2)
text_style = dict(size=14, color='gray')
# cosine -> delta functions
ax[0, 0].plot(x, np.cos(x),**style_re)
ax[0, 0].set(xlim=(-5, 5), ylim=(-1.2, 1.2))
ax[0, 0].annotate('', (-np.pi, 0), (np.pi, 0),
arrowprops=dict(arrowstyle='|-|', color='gray'))
ax[0, 0].text(0, 0, '$1/f_0$', ha='center', va='bottom', **text_style)
ax[0, 0].set_title('Sinusoid')
ax[0, 1].plot([-5, 2, 2, 2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].plot([-5, -2, -2, -2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[0, 1].annotate('', (0, 0.4), (2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].annotate('', (0, 0.4), (-2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].text(1, 0.45, '$+f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].text(-1, 0.45, '$-f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].set_title('Delta Functions')
# gaussian -> gaussian
ax[1, 0].plot(x, np.exp(-(2 * x) ** 2), **style_re)
ax[1, 0].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 0].annotate('', (0, 0.35), (0.6, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 0].text(0, 0.4, r'$\sigma$', ha='center', va='bottom', **text_style)
ax[1, 0].set_title('Gaussian')
ax[1, 1].plot(x, np.exp(-(x / 2) ** 2), **style_re)
ax[1, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 1].annotate('', (0, 0.35), (2, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 1].text(0, 0.4, r'$(2\pi\sigma)^{-1}$', ha='center', va='bottom', **text_style)
ax[1, 1].set_title('Gaussian')
# top hat -> sinc
ax[2, 0].plot([-2, -1, -1, 1, 1, 2], [0, 0, 1, 1, 0, 0], **style_re)
ax[2, 0].set(xlim=(-2, 2), ylim=(-0.3, 1.2))
ax[2, 0].annotate('', (-1, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 0].text(0.0, 0.5, '$T$', ha='center', va='bottom', **text_style)
ax[2, 0].set_title('Top Hat')
ax[2, 1].plot(x, np.sinc(x), **style_re)
ax[2, 1].set(xlim=(-5, 5), ylim=(-0.3, 1.2))
ax[2, 1].annotate('', (-1, 0), (1, 0), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 1].text(0.0, 0.0, '$2/T$', ha='center', va='bottom', **text_style)
ax[2, 1].set_title('Sinc')
# comb -> comb
ax[3, 0].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 0].set(xlim=(-5.5, 5.5), ylim=(-0.2, 1.2))
ax[3, 0].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 0].text(0.5, 0.6, '$T$', ha='center', va='bottom', **text_style)
ax[3, 0].set_title('Dirac Comb')
ax[3, 1].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 1].set(xlim=(-2.5, 2.5), ylim=(-0.2, 1.2));
ax[3, 1].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 1].text(0.5, 0.6, '$1/T$', ha='center', va='bottom', **text_style)
ax[3, 1].set_title('Dirac Comb')
for i, letter in enumerate('abcd'):
ax[i, 0].set_ylabel('({0})'.format(letter), rotation=0,
fontproperties=FontProperties(size=12, style='italic'))
# Draw arrows between pairs of axes
for i in range(4):
    # bbox.inverse_transformed was removed in Matplotlib 3.5; use the inverted transform
    left = ax[i, 0].bbox.transformed(fig.transFigure.inverted()).bounds
    right = ax[i, 1].bbox.transformed(fig.transFigure.inverted()).bounds
x = 0.5 * (left[0] + left[2] + right[0])
y = left[1] + 0.5 * left[3]
fig.text(x, y, r'$\Longleftrightarrow$',
ha='center', va='center', size=30)
fig.savefig('fig03_Fourier_pairs.pdf')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Keras: A quick overview
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/guide/keras/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Import tf.keras
`tf.keras` is TensorFlow's implementation of the
[Keras API specification](https://keras.io). This is a high-level
API to build and train models that includes first-class support for
TensorFlow-specific functionality, such as [eager execution](#eager_execution),
`tf.data` pipelines, and [Estimators](./estimators.md).
`tf.keras` makes TensorFlow easier to use without sacrificing flexibility and
performance.
To get started, import `tf.keras` as part of your TensorFlow program setup:
```
!pip install pyyaml # pyyaml is optional
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
from tensorflow import keras
```
`tf.keras` can run any Keras-compatible code, but keep in mind:
* The `tf.keras` version in the latest TensorFlow release might not be the same
as the latest `keras` version from PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](#weights_only), `tf.keras` defaults to the
[checkpoint format](./checkpoints.md). Pass `save_format='h5'` to
use HDF5.
## Build a simple model
### Sequential model
In Keras, you assemble *layers* to build *models*. A model is (usually) a graph
of layers. The most common type of model is a stack of layers: the
`tf.keras.Sequential` model.
To build a simple, fully-connected network (i.e. multi-layer perceptron):
```
from tensorflow.keras import layers
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
```
You can find a complete, short example of how to use Sequential models [here](https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/quickstart/beginner.ipynb).
To learn about building more advanced models than Sequential models, see:
- [Guide to the Keras Functional API](./functional.ipynb)
- [Guide to writing layers and models from scratch with subclassing](./custom_layers_and_models.ipynb)
### Configure the layers
There are many `tf.keras.layers` available with some common constructor
parameters:
* `activation`: Set the activation function for the layer. This parameter is
specified by the name of a built-in function or as a callable object. By
default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes
that create the layer's weights (kernel and bias). This parameter is a name or
a callable object. This defaults to the `"Glorot uniform"` initializer.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes
that apply to the layer's weights (kernel and bias), such as L1 or L2
regularization. By default, no regularization is applied.
The following instantiates `tf.keras.layers.Dense` layers using constructor
arguments:
```
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.keras.activations.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.Constant(2.0))
```
## Train and evaluate
### Set up training
After the model is constructed, configure its learning process by calling the
`compile` method:
```
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
```
`tf.keras.Model.compile` takes three important arguments:
* `optimizer`: This object specifies the training procedure. Pass it optimizer
instances from the `tf.keras.optimizers` module, such as
`tf.keras.optimizers.Adam` or
`tf.keras.optimizers.SGD`. If you just want to use the default parameters, you can also specify optimizers via strings, such as `'adam'` or `'sgd'`.
* `loss`: The function to minimize during optimization. Common choices include
mean square error (`mse`), `categorical_crossentropy`, and
`binary_crossentropy`. Loss functions are specified by name or by
passing a callable object from the `tf.keras.losses` module.
* `metrics`: Used to monitor training. These are string names or callables from
the `tf.keras.metrics` module.
* Additionally, to make sure the model trains and evaluates eagerly, pass `run_eagerly=True` as a parameter to `compile`.
The following shows a few examples of configuring a model for training:
```
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=[tf.keras.metrics.CategoricalAccuracy()])
```
### Train from NumPy data
For small datasets, use in-memory [NumPy](https://www.numpy.org/)
arrays to train and evaluate a model. The model is "fit" to the training data
using the `fit` method:
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
```
`tf.keras.Model.fit` takes three important arguments:
* `epochs`: Training is structured into *epochs*. An epoch is one iteration over
the entire input data (this is done in smaller batches).
* `batch_size`: When passed NumPy data, the model slices the data into smaller
batches and iterates over these batches during training. This integer
specifies the size of each batch. Be aware that the last batch may be smaller
if the total number of samples is not divisible by the batch size.
* `validation_data`: When prototyping a model, you want to easily monitor its
performance on some validation data. Passing this argument—a tuple of inputs
and labels—allows the model to display the loss and metrics in inference mode
for the passed data, at the end of each epoch.
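To make the `batch_size` bullet concrete: with the 1000-sample arrays above and `batch_size=32`, the number of steps per epoch and the size of the final, smaller batch follow from plain arithmetic (no Keras required):

```python
import math

num_samples, batch_size = 1000, 32
steps_per_epoch = math.ceil(num_samples / batch_size)
last_batch_size = num_samples - (steps_per_epoch - 1) * batch_size

print(steps_per_epoch)  # -> 32 batches per epoch
print(last_batch_size)  # -> 8: the final batch is smaller, since 1000 % 32 != 0
```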
Here's an example using `validation_data`:
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
```
### Train from tf.data datasets
Use the [Datasets API](../data.md) to scale to large datasets
or multi-device training. Pass a `tf.data.Dataset` instance to the `fit`
method:
```
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
```
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of
training steps the model runs before it moves to the next epoch. Since the
`Dataset` yields batches of data, this snippet does not require a `batch_size`.
Datasets can also be used for validation:
```
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32)
model.fit(dataset, epochs=10,
validation_data=val_dataset)
```
### Evaluate and predict
The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy
data and a `tf.data.Dataset`.
To *evaluate* the inference-mode loss and metrics for the data provided:
```
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
```
And to *predict* the output of the last layer in inference mode for the data
provided, as a NumPy array:
```
result = model.predict(data, batch_size=32)
print(result.shape)
```
For a complete guide on training and evaluation, including how to write custom training loops from scratch, see the [Guide to Training & Evaluation](./training_and_evaluation.ipynb).
## Build advanced models
### Functional API
The `tf.keras.Sequential` model is a simple stack of layers that cannot
represent arbitrary models. Use the
[Keras functional API](./functional.ipynb)
to build complex model topologies such as:
* Multi-input models,
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).
Building a model with the functional API works like this:
1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model`
instance.
3. This model is trained just like the `Sequential` model.
The following example uses the functional API to build a simple, fully-connected
network:
```
inputs = tf.keras.Input(shape=(32,)) # Returns an input placeholder
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
```
Instantiate the model given inputs and outputs.
```
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```
### Model subclassing
Build a fully-customizable model by subclassing `tf.keras.Model` and defining
your own forward pass. Create layers in the `__init__` method and set them as
attributes of the class instance. Define the forward pass in the `call` method.
Model subclassing is particularly useful when
[eager execution](./eager.md) is enabled, because it allows the forward pass
to be written imperatively.
Note: to make sure the forward pass is *always* run imperatively, you must set `dynamic=True` when calling the super constructor.
Key Point: Use the right API for the job. While model subclassing offers
flexibility, it comes at a cost of greater complexity and more opportunities for
user errors. If possible, prefer the functional API.
The following example shows a subclassed `tf.keras.Model` using a custom forward
pass that does not have to be run imperatively:
```
class MyModel(tf.keras.Model):

  def __init__(self, num_classes=10):
    super(MyModel, self).__init__(name='my_model')
    self.num_classes = num_classes
    # Define your layers here.
    self.dense_1 = layers.Dense(32, activation='relu')
    self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

  def call(self, inputs):
    # Define your forward pass here,
    # using layers you previously defined (in `__init__`).
    x = self.dense_1(inputs)
    return self.dense_2(x)
```
Instantiate the new model class:
```
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
### Custom layers
Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing
the following methods:
* `__init__`: Optionally define sublayers to be used by this layer.
* `build`: Create the weights of the layer. Add weights with the `add_weight`
method.
* `call`: Define the forward pass.
* Optionally, a layer can be serialized by implementing the `get_config` method
and the `from_config` class method.
Here's an example of a custom layer that implements a `matmul` of an input with
a kernel matrix:
```
class MyLayer(layers.Layer):

  def __init__(self, output_dim, **kwargs):
    self.output_dim = output_dim
    super(MyLayer, self).__init__(**kwargs)

  def build(self, input_shape):
    # Create a trainable weight variable for this layer.
    self.kernel = self.add_weight(name='kernel',
                                  shape=(input_shape[1], self.output_dim),
                                  initializer='uniform',
                                  trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)

  def get_config(self):
    base_config = super(MyLayer, self).get_config()
    base_config['output_dim'] = self.output_dim
    return base_config

  @classmethod
  def from_config(cls, config):
    return cls(**config)
```
Create a model using your custom layer:
```
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
Learn more about creating new layers and models from scratch with subclassing in the [Guide to writing layers and models from scratch](./custom_layers_and_models.ipynb).
## Callbacks
A callback is an object passed to a model to customize and extend its behavior
during training. You can write your own custom callback, or use the built-in
`tf.keras.callbacks` that include:
* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at
regular intervals.
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning
rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation
performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using
[TensorBoard](https://tensorflow.org/tensorboard).
To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
```
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
```
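Conceptually, a callback is just an object whose hook methods the training loop invokes at fixed points. The dispatch pattern can be sketched in plain Python (the `train` loop and `LossHistory` class below are illustrative stand-ins, not the actual Keras internals):

```python
class Callback:
    def on_epoch_end(self, epoch, logs):
        pass  # base class: hooks default to no-ops


class LossHistory(Callback):
    """Toy callback that records the loss reported at the end of each epoch."""
    def __init__(self):
        self.history = []

    def on_epoch_end(self, epoch, logs):
        self.history.append(logs["loss"])


def train(epochs, callbacks):
    for epoch in range(epochs):
        logs = {"loss": 1.0 / (epoch + 1)}  # stand-in for a real training step
        for cb in callbacks:                # hand control to every registered hook
            cb.on_epoch_end(epoch, logs)


cb = LossHistory()
train(3, [cb])
print(cb.history)  # -> [1.0, 0.5, 0.3333333333333333]
```

The real `tf.keras.callbacks.Callback` exposes many more hooks (`on_batch_begin`, `on_train_end`, and so on), but the hand-off from the loop to each registered object works the same way.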
## Save and restore
### Weights only
Save and load the weights of a model using `tf.keras.Model.save_weights`:
```
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
```
By default, this saves the model's weights in the
[TensorFlow checkpoint](../checkpoints.md) file format. Weights can
also be saved to the Keras HDF5 format (the default for the multi-backend
implementation of Keras):
```
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
```
### Configuration only
A model's configuration can be saved—this serializes the model architecture
without any weights. A saved configuration can recreate and initialize the same
model, even without the code that defined the original model. Keras supports
JSON and YAML serialization formats:
```
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
```
Recreate the model (newly initialized) from the JSON:
```
fresh_model = tf.keras.models.model_from_json(json_string)
```
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
```
yaml_string = model.to_yaml()
print(yaml_string)
```
Recreate the model from the YAML:
```
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
```
Caution: Subclassed models are not serializable because their architecture is
defined by the Python code in the body of the `call` method.
### Entire model
The entire model can be saved to a file that contains the weight values, the
model's configuration, and even the optimizer's configuration. This allows you
to checkpoint a model and resume training later—from the exact same
state—without access to the original code.
```
# Create a simple model
model = tf.keras.Sequential([
layers.Dense(10, activation='softmax', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
```
Learn more about saving and serialization for Keras models in the guide to [save and serialize models](./saving_and_serializing.ipynb).
## Eager execution
[Eager execution](./eager.md) is an imperative programming
environment that evaluates operations immediately. This is not required for
Keras, but is supported by `tf.keras` and useful for inspecting your program and
debugging.
All of the `tf.keras` model-building APIs are compatible with eager execution.
And while the `Sequential` and functional APIs can be used, eager execution
especially benefits *model subclassing* and building *custom layers*—the APIs
that require you to write the forward pass as code (instead of the APIs that
create models by assembling existing layers).
See the [eager execution guide](./eager.ipynb#build_a_model) for
examples of using Keras models with custom training loops and `tf.GradientTape`.
You can also find a complete, short example [here](https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/quickstart/advanced.ipynb).
## Distribution
### Multiple GPUs
`tf.keras` models can run on multiple GPUs using
`tf.distribute.Strategy`. This API provides distributed
training on multiple GPUs with almost no changes to existing code.
Currently, `tf.distribute.MirroredStrategy` is the only supported
distribution strategy. `MirroredStrategy` does in-graph replication with
synchronous training using all-reduce on a single machine. To use
a `tf.distribute.Strategy`, nest the optimizer instantiation, model
construction, and compilation inside the strategy's `scope()`, then train the model.
The following example distributes a `tf.keras.Model` across multiple GPUs on a
single machine.
First, define a model inside the distributed strategy scope:
```
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  model = tf.keras.Sequential()
  model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
  model.add(layers.Dense(1, activation='sigmoid'))

  optimizer = tf.keras.optimizers.SGD(0.2)

  model.compile(loss='binary_crossentropy', optimizer=optimizer)

model.summary()
```
Next, train the model on data as usual:
```
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=1024).batch(32)
model.fit(dataset, epochs=1)
```
For more information, see the [full guide on Distributed Training in TensorFlow](../distribute_strategy.ipynb).
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*Quantum Counterfeit Coin Problem*_
The latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial.
***
### Contributors
Rudy Raymond, Takashi Imamichi
## Introduction
The counterfeit coin problem is a classic puzzle first proposed by E. D. Schell in the January 1945 edition of the *American Mathematical Monthly*:
>You have eight similar coins and a beam balance. At most one coin is counterfeit and hence underweight. How can you detect whether there is an underweight coin, and if so, which one, using the balance only twice?
The answer to the above puzzle is affirmative. What happens when we can use a quantum beam balance?
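For reference, a classical two-weighing solution can be verified by brute force. The pan assignments below are one valid strategy (chosen here for illustration); each of the nine possibilities (no counterfeit, or a counterfeit at any of the eight positions) yields a distinct pair of outcomes:

```python
def lighter_pan(weights, left, right):
    """Compare total weight on the two pans: '=', 'L' (left lighter), or 'R'."""
    l = sum(weights[i] for i in left)
    r = sum(weights[i] for i in right)
    return "=" if l == r else ("L" if l < r else "R")

# One classical strategy: two fixed three-vs-three weighings
weighing_1 = ([0, 1, 2], [3, 4, 5])
weighing_2 = ([0, 3, 6], [1, 4, 7])

outcomes = {}
for fake in [None] + list(range(8)):
    weights = [1.0] * 8
    if fake is not None:
        weights[fake] = 0.9  # the counterfeit coin is underweight
    key = (lighter_pan(weights, *weighing_1), lighter_pan(weights, *weighing_2))
    outcomes[key] = fake

print(len(outcomes))  # -> 9: every case maps to a distinct outcome pair
```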
Given a quantum beam balance and a counterfeit coin among $N$ coins, there is a quantum algorithm that can find the counterfeit coin by using the quantum balance only once (and independent of $N$, the number of coins!). On the other hand, any classical algorithm requires at least $\Omega(\log{N})$ uses of the beam balance. In general, for $k$ counterfeit coins of the same weight (but different from the majority of normal coins), there is [a quantum algorithm](https://arxiv.org/pdf/1009.0416.pdf) that queries the quantum beam balance only $O(k^{1/4})$ times, in contrast to any classical algorithm, which requires $\Omega(k\log{(N/k)})$ queries. This is one of the wonders of quantum algorithms: a quartic speed-up in query complexity over the classical counterpart.
## Quantum Procedure
Hereafter we describe a step-by-step procedure to program the Quantum Counterfeit Coin Problem for $k=1$ counterfeit coin with the IBM Q Experience. [Terhal and Smolin](https://arxiv.org/pdf/quant-ph/9705041.pdf) were the first to show that it is possible to identify the false coin with a single query to the quantum beam balance.
### Preparing the environment
First, we prepare the environment.
```
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# useful math functions
from math import pi, cos, acos, sqrt
# importing the QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import available_backends, execute, register, get_backend
import sys, getpass
try:
    sys.path.append("../../")  # go to parent dir
    import Qconfig
    qx_config = {
        "APItoken": Qconfig.APItoken,
        "url": Qconfig.config['url']}
    print('Qconfig loaded from %s.' % Qconfig.__file__)
except Exception:
    APItoken = getpass.getpass('Please input your token and hit enter: ')
    qx_config = {
        "APItoken": APItoken,
        "url": "https://quantumexperience.ng.bluemix.net/api"}
    print('Qconfig.py not found in qiskit-tutorial directory; Qconfig loaded using user input.')
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
```
### Setting the number of coins and the index of false coin
Next, we set the number of coins and the index of the counterfeit coin. The former determines the quantum superpositions used by the algorithm, while the latter determines the quantum beam balance.
```
M = 16 # Maximum number of physical qubits available
numberOfCoins = 8 # This number should be up to M-1, where M is the number of qubits available
indexOfFalseCoin = 6 # This should be 0, 1, ..., numberOfCoins - 1, where we use Python indexing
if numberOfCoins < 4 or numberOfCoins >= M:
    raise Exception("Please use numberOfCoins between 4 and ", M-1)
if indexOfFalseCoin < 0 or indexOfFalseCoin >= numberOfCoins:
    raise Exception("indexOfFalseCoin must be between 0 and ", numberOfCoins-1)
```
### Querying the quantum beam balance
As in a classical algorithm to find the false coin, we will use the balance by placing the same number of coins on the left and right pans of the beam. The difference is that in a quantum algorithm, we can query the beam balance in superposition. To query the quantum beam balance, we use a binary query string to encode coins placed on the pans; namely, the binary string `01101010` means to place coins whose indices are 1, 2, 4, and 6 on the pans, while the binary string `01110111` means to place all coins but those with indices 0 and 4 on the pans. Notice that we do not care how the selected coins are placed on the left and right pans, because their results are the same: it is balanced when no false coin is included, and tilted otherwise.
In our example, because the number of coins is $8$ and the index of the false coin is $3$, the query `01101010` results in balanced (or, $0$), while the query `01110111` results in tilted (or, $1$). Using two quantum registers to query the quantum balance, where the first register holds the query string and the second register the result of the quantum balance, we can write the query to the quantum balance (omitting the normalization of the amplitudes):
\begin{eqnarray}
|01101010\rangle\Big( |0\rangle - |1\rangle \Big) &\xrightarrow{\mbox{Quantum Beam Balance}}& |01101010\rangle\Big( |0\oplus 0\rangle - |1 \oplus 0\rangle \Big) = |01101010\rangle\Big( |0\rangle - |1\rangle \Big)\\
|01110111\rangle\Big( |0\rangle - |1\rangle \Big) &\xrightarrow{\mbox{Quantum Beam Balance}}& |01110111\rangle\Big( |0 \oplus 1\rangle - |1 \oplus 1\rangle \Big) = (-1) |01110111\rangle\Big( |0 \rangle - |1 \rangle \Big)
\end{eqnarray}
Notice that in the above equation, the phase is flipped if and only if the binary query string is $1$ at the index of the false coin. Let $x \in \left\{0,1\right\}^N$ be the $N$-bit query string (that contains even number of $1$s), and let $e_k \in \left\{0,1\right\}^N$ be the binary string which is $1$ at the index of the false coin and $0$ otherwise. Clearly,
$$
|x\rangle\Big(|0\rangle - |1\rangle \Big) \xrightarrow{\mbox{Quantum Beam Balance}} \left(-1\right) ^{x\cdot e_k} |x\rangle\Big(|0\rangle - |1\rangle \Big),
$$
where $x\cdot e_k$ denotes the inner product of $x$ and $e_k$.
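Classically, the sign picked up by each basis query string is easy to compute: it is $-1$ exactly when the query includes the false coin. A minimal sketch, with indices read left to right as in the strings above:

```python
def balance_sign(x: str, k: int) -> int:
    """Phase applied to query string x by the balance when the false coin is at index k."""
    return -1 if x[k] == "1" else 1

print(balance_sign("01101010", 3))  # false coin (index 3) off the pans -> 1 (balanced)
print(balance_sign("01110111", 3))  # false coin on the pans -> -1 (phase flip)
```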
Here, we will prepare the superposition of all binary query strings with even number of $1$s. Namely, we want a circuit that produces the following transformation:
$$
|0\rangle \rightarrow \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle,
$$
where $|x|$ denotes the Hamming weight of $x$.
To obtain such superposition of states of even number of $1$s, we can perform Hadamard transformation on $|0\rangle$ to obtain superposition of $\sum_{x\in\left\{0,1\right\}^N} |x\rangle$, and check if the Hamming weight of $x$ is even. It can be shown that the Hamming weight of $x$ is even if and only if $x_1 \oplus x_2 \oplus \ldots \oplus x_N = 0$. Thus, we can transform:
\begin{equation}
|0\rangle|0\rangle \xrightarrow{H^{\oplus N}} \frac{1}{2^{N/2}}\sum_x |x\rangle |0\rangle \xrightarrow{\mbox{XOR}(x)} \frac{1}{2^{N/2}}\sum_x |x\rangle |0\oplus x_1 \oplus x_2 \oplus \ldots \oplus x_N\rangle
\end{equation}
The right-hand side of the equation can be divided based on the result of the $\mbox{XOR}(x) = x_1 \oplus \ldots \oplus x_N$, namely,
$$
\frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle|0\rangle + \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 1 \mod 2} |x\rangle|1\rangle.
$$
Thus, if we measure the second register and observe $|0\rangle$, the first register is the superposition of all binary query strings we want. If we fail (observe $|1\rangle$), we repeat the above procedure until we observe $|0\rangle$. Each repetition is guaranteed to succeed with probability exactly half. Hence, after several repetitions we should be able to obtain the desired superposition state. *Notice that we can perform [quantum amplitude amplification](https://arxiv.org/abs/quant-ph/0005055) to obtain the desired superposition states with certainty and without measurement. The detail is left as an exercise*.
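The parity fact used above, that the Hamming weight of $x$ is even if and only if $x_1 \oplus x_2 \oplus \ldots \oplus x_N = 0$, can be confirmed exhaustively for a small $N$:

```python
from functools import reduce
from itertools import product
from operator import xor

# Check, for every 8-bit string, that even Hamming weight <=> XOR of all bits is 0
ok = all((sum(bits) % 2 == 0) == (reduce(xor, bits) == 0)
         for bits in product([0, 1], repeat=8))
print(ok)  # -> True
```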
Below is the procedure to obtain the desired superposition state with the classical `if` of the QuantumProgram. Here, when the second register is zero, we prepare it to record the answer to quantum beam balance.
```
# Creating registers
# numberOfCoins qubits for the binary query string and 1 qubit for working and recording the result of quantum balance
qr = QuantumRegister(numberOfCoins+1)
# for recording the measurement on qr
cr = ClassicalRegister(numberOfCoins+1)
circuitName = "QueryStateCircuit"
queryStateCircuit = QuantumCircuit(qr, cr)
N = numberOfCoins
# Create a uniform superposition of all strings of length N
for i in range(N):
    queryStateCircuit.h(qr[i])

# Perform XOR(x) by applying CNOT gates sequentially from qr[0] to qr[N-1],
# storing the result in qr[N]
for i in range(N):
    queryStateCircuit.cx(qr[i], qr[N])

# Measure qr[N] and store the result in cr[N]. We continue if cr[N] is zero, or repeat otherwise
queryStateCircuit.measure(qr[N], cr[N])

# We proceed to query the quantum beam balance if the value of cr[0]...cr[N] is all zero,
# by preparing the Hadamard state of |1>, i.e., |0> - |1>, at qr[N]
queryStateCircuit.x(qr[N]).c_if(cr, 0)
queryStateCircuit.h(qr[N]).c_if(cr, 0)

# We rewind the computation when cr[N] is not zero
for i in range(N):
    queryStateCircuit.h(qr[i]).c_if(cr, 2**N)
```
### Constructing the quantum beam balance
The quantum beam balance returns $1$ when the binary query string contains the position of the false coin and $0$ otherwise, provided that the Hamming weight of the binary query string is even. Notice that previously, we constructed the superposition of all binary query strings whose Hamming weights are even. Let $k$ be the position of the false coin, then with regards to the binary query string $|x_1,x_2,\ldots,x_N\rangle|0\rangle$, the quantum beam balance simply returns $|x_1,x_2,\ldots,x_N\rangle|0\oplus x_k\rangle$, that can be realized by a CNOT gate with $x_k$ as control and the second register as target. Namely, the quantum beam balance realizes
$$
|x_1,x_2,\ldots,x_N\rangle\Big(|0\rangle - |1\rangle\Big) \xrightarrow{\mbox{Quantum Beam Balance}} |x_1,x_2,\ldots,x_N\rangle\Big(|0\oplus x_k\rangle - |1 \oplus x_k\rangle\Big) = \left(-1\right)^{x\cdot e_k} |x_1,x_2,\ldots,x_N\rangle\Big(|0\rangle - |1\rangle\Big)
$$
Below we apply the quantum beam balance on the desired superposition state.
```
k = indexOfFalseCoin
# Apply the quantum beam balance on the desired superposition state (marked by cr equal to zero)
queryStateCircuit.cx(qr[k], qr[N]).c_if(cr, 0)
```
### Identifying the false coin
In the above, we have queried the quantum beam balance once. How do we identify the false coin after this single query? We simply perform a Hadamard transformation on the binary query string. Notice that, under the assumption that we query the quantum beam balance with binary strings of even Hamming weight, the following equations hold.
\begin{eqnarray}
\frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} |x\rangle &\xrightarrow{\mbox{Quantum Beam Balance}}& \frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} \left(-1\right)^{x\cdot e_k} |x\rangle\\
\frac{1}{2^{(N-1)/2}}\sum_{x\in \left\{0,1\right\}^N~\mbox{and}~|x|\equiv 0 \mod 2} \left(-1\right)^{x\cdot e_k} |x\rangle&\xrightarrow{H^{\otimes N}}& \frac{1}{\sqrt{2}}\Big(|e_k\rangle+|\hat{e_k}\rangle\Big)
\end{eqnarray}
In the above, $e_k$ is the bitstring that is $1$ only at the position of the false coin, and $\hat{e_k}$ is its inverse. Thus, by performing the measurement in the computational basis after the Hadamard transform, we should be able to identify the false coin because it is the one whose label is different from the majority: when $e_k$, the false coin is labelled $1$, and when $\hat{e_k}$ the false coin is labelled $0$.
```
# Apply Hadamard transform on qr[0] ... qr[N-1]
for i in range(N):
    queryStateCircuit.h(qr[i]).c_if(cr, 0)

# Measure qr[0] ... qr[N-1]
for i in range(N):
    queryStateCircuit.measure(qr[i], cr[i])
```
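Before running the circuit, the algebra above can be sanity-checked with a small NumPy state-vector computation (an independent check, not part of the Qiskit circuit; $N=4$ coins and false coin index $2$ are arbitrary choices): starting from the even-weight superposition with the query phases applied, the Hadamard transform leaves all probability on $e_k$ and its complement, one half each.

```python
import numpy as np

N, k = 4, 2  # 4 coins, false coin at index 2 (kept small so the state fits in memory)

# State after the balance query: even-Hamming-weight strings, each with sign (-1)^(x . e_k)
psi = np.zeros(2**N)
for x in range(2**N):
    if bin(x).count("1") % 2 == 0:       # keep only even-weight query strings
        psi[x] = (-1) ** ((x >> k) & 1)  # phase flip iff bit k of x is set
psi /= np.sqrt(2 ** (N - 1))             # normalize

# Apply the N-qubit Hadamard transform
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
HN = np.array([[1.0]])
for _ in range(N):
    HN = np.kron(HN, H)
probs = (HN @ psi) ** 2

e_k = 1 << k  # the bitstring that is 1 only at the false coin's position
print(probs[e_k], probs[e_k ^ (2**N - 1)])  # -> 0.5 0.5 (up to float error)
```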
Now we perform the experiment to see how we can identify the false coin by the above quantum circuit. Notice that when we use the `plot_histogram`, the numbering of the bits in the classical register is from right to left, namely, `0100` means the bit with index $2$ is one and the rest are zero.
Because we use `cr[N]` to control the operation prior to and after the query to the quantum beam balance, we can detect that we succeed in identifying the false coin when the left-most bit is $0$. Otherwise, when the left-most bit is $1$, we fail to obtain the desired superposition of query bitstrings and must repeat from the beginning. *Notice that we have not queried the quantum beam balance yet. This repetition is not necessary when we feed the quantum beam balance with the superposition of all bitstrings of even Hamming weight, which can be done with probability one, thanks to quantum amplitude amplification*.
When the left-most bit is $0$, the index of the false coin can be determined by finding the one whose values are different from others. Namely, when $N=8$ and the index of the false coin is $3$, we should observe `011110111` or `000001000`.
```
backend = "local_qasm_simulator"
shots = 1 # We perform a one-shot experiment
results = execute(queryStateCircuit, backend=backend, shots=shots).result()
answer = results.get_counts()
for key in answer.keys():
if key[0:1] == "1":
raise Exception("Fail to create desired superposition of balanced query string. Please try again")
plot_histogram(answer)
from collections import Counter
for key in answer.keys():
normalFlag, _ = Counter(key[1:]).most_common(1)[0] #get most common label
        for i in range(1, len(key)):  # start at 1 to skip the control bit key[0]
if key[i] != normalFlag:
print("False coin index is: ", len(key) - i - 1)
```
## About Quantum Counterfeit Coin Problem
The case when there is a single false coin, as presented in this notebook, is essentially [the Bernstein-Vazirani algorithm](http://epubs.siam.org/doi/abs/10.1137/S0097539796300921), and the single-query coin-weighing algorithm was first presented in 1997 by [Terhal and Smolin](https://arxiv.org/pdf/quant-ph/9705041.pdf). The Quantum Counterfeit Coin Problem for $k > 1$ in general is studied by [Iwama et al.](https://arxiv.org/pdf/1009.0416.pdf) Whether there exists a quantum algorithm that only needs $o(k^{1/4})$ queries to identify all the false coins remains an open question.
| github_jupyter |
<a href="https://colab.research.google.com/github/bearpelican/musicautobot/blob/master/Generate.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!git clone https://github.com/bearpelican/musicautobot.git
import os
os.chdir('musicautobot')
!nvidia-smi
!apt install musescore fluidsynth
!cp /usr/share/sounds/sf2/FluidR3_GM.sf2 ./font.sf2
!pip install torch fastai music21 pebble fluidsynth midi2audio
from musicautobot.numpy_encode import *
from musicautobot.utils.file_processing import process_all, process_file
from musicautobot.config import *
from musicautobot.music_transformer import *
from musicautobot.multitask_transformer import *
from musicautobot.numpy_encode import stream2npenc_parts
from musicautobot.utils.setup_musescore import setup_musescore
setup_musescore()
from midi2audio import FluidSynth
from IPython.display import Audio
# Colab cannot play music directly from music21 - must convert to .wav first
def play_wav(stream):
out_midi = stream.write('midi')
out_wav = str(Path(out_midi).with_suffix('.wav'))
FluidSynth("font.sf2").midi_to_audio(out_midi, out_wav)
return Audio(out_wav)
```
# Generate Music with Pretrained Model
### Load Pretrained
```
# Config
config = multitask_config();
# Location of your midi files
midi_path = Path('data/midi')
# Location of saved datset
data_path = Path('data/numpy')
data_save_name = 'musicitem_data_save.pkl'
# Data
data = MusicDataBunch.empty(data_path)
vocab = data.vocab
# Pretrained Model
# Download pretrained model if you haven't already
pretrained_url = 'https://ashaw-midi-web-server.s3-us-west-2.amazonaws.com/pretrained/MultitaskSmallKeyC.pth'
# pretrained_url = 'https://ashaw-midi-web-server.s3-us-west-2.amazonaws.com/pretrained/MultitaskSmall.pth'
pretrained_path = data_path/'pretrained'/Path(pretrained_url).name
pretrained_path.parent.mkdir(parents=True, exist_ok=True)
download_url(pretrained_url, dest=pretrained_path)
# Learner
learn = multitask_model_learner(data, pretrained_path=pretrained_path)
# learn.to_fp16();
```
### Choose existing midi file as a starting point
```
example_dir = midi_path/'examples'
midi_files = get_files(example_dir, recurse=True, extensions='.mid'); midi_files[:5]
file = midi_files[3]; file
# Encode file
item = MusicItem.from_file(file, data.vocab)
x = item.to_tensor()
x_pos = item.get_pos_tensor()
item.show()
# item.play()
play_wav(item.stream)
```
## Generate
MultitaskTransformer trains on 3 separate tasks.
1. NextWord
2. Mask
3. Sequence to Sequence
Because we train on 3 separate tasks, we can actually generate some really cool note sequences.
1. NextWord/Autocomplete - Take a sequence of notes and predict the next note
* 1a. Vanilla Language Model predictions - See [MusicTransformer](../music_transformer) project
2. Seq2Seq/Translation - Generate melody from chords or vice versa.
* 2a. New Melody - Generate a new melody from existing chords
* 2b. Harmonization - Generate chords to accompany an existing melody
3. Mask/Remix - Mask certain parts of a song and remix those portions.
* 3a. Note Masking - Mask all the note pitches and create a new sequence with different notes, but the same rhythm
* 3b. Duration Masking - Mask the note durations and generate a new sequence with the same melody, but a different rhythm
## 1. NextWord/Autocomplete
Trim the song to only a few notes. The model will use these notes as a seed and continue the idea
```
seed_len = 6 # seed length in beats (4 beats = 1 bar)
seed = item.trim_to_beat(seed_len)
seed.show()
pred_nw, full = learn.predict_nw(seed, n_words=200)
pred_nw.show()
play_wav(pred_nw.stream)
```
Add more randomness
```
pitch_temp = 1.4 # randomness of melody
tempo_temp = 1.0 # randomness of rhythm
top_k = 40
pred_nw_rand, full = learn.predict_nw(seed, temperatures=(pitch_temp, tempo_temp), top_k=top_k, top_p=0.5)
pred_nw_rand.show()
play_wav(pred_nw_rand.stream)
# Convenience function
# out = nw_predict_from_midi(learn, file, seed_len=seed_len, top_k=30, top_p=0.5); out.show()
```
## 2. Seq2Seq/Translation
Load a MultitrackItem. It keeps track of which notes are part of the melody and which are part of the chords - this information is needed for the translation task.
```
multitrack_item = MultitrackItem.from_file(file, vocab)
melody, chords = multitrack_item.melody, multitrack_item.chords
melody.show()
chords.show()
multitrack_item.play()
play_wav(multitrack_item.stream)
```
## 2a. Create Melody
Use existing chord progression to generate a new melody
```
# Use a seed for the melody
partial_melody = melody.trim_to_beat(4)
# Or generate from an empty sequence
empty_melody = MusicItem.empty(vocab, seq_type=SEQType.Melody)
seed_melody = empty_melody; seed_melody.show()
pred_melody = learn.predict_s2s(chords, seed_melody, use_memory=True)
pred_melody.show()
play_wav(pred_melody.stream)
combined = MultitrackItem(pred_melody, chords)
combined.show()
play_wav(combined.stream)
```
## 2b. Harmonization
Generate chords to accompany an existing melody
```
# partial_chords = chords.trim_to_beat(3);
# partial_chords.show()
empty_chords = MusicItem.empty(vocab, seq_type=SEQType.Chords); empty_chords.show()
pred_chord = learn.predict_s2s(input_item=melody, target_item=empty_chords)
pred_chord.show()
combined = MultitrackItem(melody, pred_chord)
combined.show()
play_wav(combined.stream)
# Convenience Function
# out = s2s_predict_from_midi(learn, file, seed_len=10); out.show()
```
## 3. Mask/Remix
### 3a. Remix Notes
Mask all the note pitches. The model will create a new song with the same rhythm
```
### Mask notes
note_item = item.mask_pitch();
# Mask vs Original
list(zip(note_item.to_text(None)[:20], item.to_text(None)[:20]))
pred_note = learn.predict_mask(note_item, temperatures=(1.4, 1.0))
pred_note.show()
play_wav(pred_note.stream)
```
### 3b. Remix rhythm
Mask note durations. Same notes, different rhythm
```
# duration mask
dur_item = item.mask_duration()
# Mask vs Original
list(zip(dur_item.to_text(None)[:10], item.to_text(None)[:10]))
dur_pred = learn.predict_mask(dur_item, temperatures=(0.8,0.8), top_k=40, top_p=0.6)
dur_pred.show()
play_wav(dur_pred.stream)
# Convenience function
# out = mask_predict_from_midi(learn, file, predict_notes=True)
```
# Adding features
## General principles
niimpy is an open source project and general open source contribution guidelines apply - there is no need for us to repeat them right now. Please use Github for communication.
Contributions are welcome and encouraged.
* You don't need to be perfect. Suggest what you can and we will help it improve.
## Adding an analysis
* Please add documentation to a sensor page when you add a new analysis. This should include enough description so that someone else can understand and reproduce all relevant features - enough to describe the method for a scientific article.
* Please add unit tests which test each relevant feature (and each claimed method feature) with a minimal example. Each function can have multiple tests. For examples of unit tests, see below or ``niimpy/test_screen.py``. You can create some sample data within each test module which can be used both during development and for tests.
## Common things to note
* You should always use the DataFrame index to retrieve date/time values, not the `datetime` column (which is a convenience but not guaranteed to be there).
* Don't require `datetime` in your input
* Have any times returned in the index (unless each row needs multiple times, then do what you need)
* Don't fail if there are extra columns passed (or if some non-essential columns are missing). Look at what columns/data are passed and use them, but don't do anything unexpected if someone makes a mistake with the input data
* Group by 'user' and 'device' columns if they are present in the input
* Use `niimpy.util._read_sqlite_auto` function for getting data from input
* Use `niimpy.filter.filter_dataframe` to do basic initial filterings based on standard arguments.
* [The Zen of Python](https://www.python.org/dev/peps/pep-0020/) is always good advice
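For instance, the "use the index" and "group if present" conventions above might look like this in practice (a hypothetical helper written for illustration, not an existing niimpy function):

```python
import pandas as pd

def daily_event_count(df):
    """Count events per day, following the conventions above.

    Hypothetical example: times are taken from the DataFrame index (a
    DatetimeIndex), extra columns are ignored, and the optional
    'user'/'device' columns are grouped by only when present.
    """
    group_cols = [c for c in ("user", "device") if c in df.columns]
    day = df.index.floor("D")  # times come from the index, not a 'datetime' column
    if group_cols:
        return df.groupby(group_cols + [day]).size()
    return df.groupby(day).size()
```

Called on a frame with a `user` column, this returns a count per (user, day); without one, a count per day - the same function tolerates both shapes of input.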
## Improving old functions
- Add tests for existing functionality
- For every functionality it claims, there should be a minimal test for it.
- Use `read._get_dataframe` and `filter.filter_dataframe` to handle standard arguments
- Don't fail if unnecessary columns are missing (instead of dropping unneeded columns, select only the ones you need).
- Make sure it uses the index, not the `datetime` column. Some older functions may still expect it, so this is a difficult migration.
- Improve the docstring of the function: we use the [numpydoc format](https://numpydoc.readthedocs.io/en/latest/format.html)
- Add a documentation page for these sensors, document each function and include an example.
- Document what parameters it groups by when analyzing
- For example an ideal case is that any 'user' and 'device' columns are grouped by in the final output.
- When there are things that don't work yet, you can put a TODO in the docstring to indicate that someone should come back to it later.
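An illustrative numpydoc-style docstring, following the format mentioned above (the function itself is hypothetical and left unimplemented):

```python
def screen_off_durations(df, prefix=""):
    """Compute the duration of each screen-off period.

    Illustrative numpydoc-style docstring only; this is not an
    existing niimpy function.

    Parameters
    ----------
    df : pandas.DataFrame
        Screen event data, with times in the DataFrame index and a
        ``screen_status`` column.
    prefix : str, optional
        Prefix added to the output column names.

    Returns
    -------
    pandas.DataFrame
        One row per off-period, indexed by its start time.
    """
    raise NotImplementedError  # docstring illustration only
```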
## Example unit test
You can read about testing in general in the [CodeRefinery testing lesson](https://coderefinery.github.io/testing/).
First you would define some sample data. You could reuse existing data (or data from `niimpy.sampledata`), but if data is reused too much it becomes hard to improve test B without affecting test A. Share data when possible, but split it when it's relevant.
```python
@pytest.fixture
def screen1():
return niimpy.read_csv(io.StringIO("""\
time,screen_status
0,1
60,0
"""))
```
Then you can make a test function:
```python
def test_screen_off(screen1):
off = niimpy.preprocess.screen_off(screen1)
assert pd.Timestamp(60, unit='s', tz=TZ) in off.index
```
`assert` statements check the results of the tested functions - when there are errors, `pytest` will provide much more useful error messages than you might expect. You can have multiple asserts within a function to test multiple things.
You run tests with `pytest niimpy/` or `pytest niimpy/test_screen.py`. You can limit to certain tests with `-k` and engage a debugger on errors with `--pdb`.
## Documentation notes
- You can use Jupyter or ReST. ReST is better for narrative documentation.
```
%run ../../main.py
%matplotlib inline
from pyarc import CBA
from pyarc.algorithms import generateCARs, M1Algorithm, M2Algorithm
from pyarc.algorithms import createCARs
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from itertools import combinations
import itertools
import pandas as pd
import numpy
import re
movies = pd.read_csv("../data/movies.csv", sep=";")
movies_discr = movies.copy(True)
budget_bins = range(0, 350, 50)
budget_bins_names = [ "<{0};{1})".format(i, i + 50) for i in budget_bins[:-1] ]
celebrities_bins = range(0, 10, 2)
celebrities_bins_names = [ "<{0};{1})".format(i, i + 2) for i in celebrities_bins[:-1] ]
movies_discr['estimated-budget'] = pd.cut(movies['estimated-budget'], budget_bins, labels=budget_bins_names)
movies_discr['a-list-celebrities'] = pd.cut(movies['a-list-celebrities'], celebrities_bins, labels=celebrities_bins_names)
movies_discr.to_csv("../data/movies_discr.csv", sep=";")
transactionDB = TransactionDB.from_DataFrame(movies_discr, unique_transactions=True)
rules = generateCARs(transactionDB, support=5, confidence=50)
movies_vals = movies.get_values()
x = range(0, 350, 50)
y = range(1, 9)
x_points = list(map(lambda n: n[0], movies_vals))
y_points = list(map(lambda n: n[1], movies_vals))
data_class = list(movies['class'])
appearance = {
'box-office-bomb': ('brown', "o"),
'main-stream-hit': ('blue', "o"),
'critical-success': ('green', "o")
}
rule_appearance = {
'box-office-bomb': 'tan',
'main-stream-hit': 'aqua',
'critical-success': 'lightgreen'
}
plt.style.use('seaborn-white')
rules
len(transactionDB)
def plot_rule(rule, plt):
    # Each antecedent item narrows the rule's rectangle along one axis.
    # Antecedent items are (attribute, interval) pairs, e.g. ("a-list-celebrities", "<2;4)").
    interval_regex = r"<(\d+);(\d+)\)"
    lower_x, area_x = 0, budget_bins[-1]
    lower_y, area_y = 0, celebrities_bins[-1]
    for i in range(len(rule.antecedent)):
        attribute, interval = rule.antecedent[i][0], rule.antecedent[i][1]
        boundaries = re.search(interval_regex, interval)
        lower = float(boundaries.group(1))
        upper = float(boundaries.group(2))
        if attribute == "a-list-celebrities":
            lower_y, area_y = lower, upper - lower
        else:
            lower_x, area_x = lower, upper - lower
    axis = plt.gca()
    class_name = rule.consequent[1]
    axis.add_patch(
        patches.Rectangle((lower_x, lower_y), area_x, area_y, zorder=-2,
                          facecolor=rule_appearance[class_name], alpha=rule.confidence)
    )
plt.figure(figsize=(10, 5))
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
plt.xlabel('Estimated Budget (1000$)', fontsize=20)
plt.ylabel('A-List Celebrities', fontsize=20)
plt.savefig("../data/datacases.png")
print("rule count", len(rules))
movies_discr.head()
plt.figure(figsize=(10, 5))
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel('Estimated Budget (1000$)', fontsize=20)
plt.ylabel('A-List Celebrities', fontsize=20)
plt.savefig("../data/datacases_discr.png")
print("rule count", len(rules))
from matplotlib2tikz import save as tikz_save
subplot_count = 1
plt.style.use("seaborn-white")
fig, ax = plt.subplots(figsize=(40, 60))
ax.set_xlabel('Estimated Budget (1000$)')
ax.set_ylabel('A-List Celebrities')
for idx, r in enumerate(sorted(rules, reverse=True)):
plt.subplot(7, 4, idx + 1)
plot_rule(r, plt)
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=30)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("r{}".format(idx), fontsize=40)
plt.savefig("../data/rule_plot.png")
print(len(transactionDB))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clfm1 = M1Algorithm(rules, transactionDB).build()
print(len(clfm1.rules))
clf = M1Algorithm(rules, transactionDB).build()
for r in clf.rules:
plot_rule(r, plt)
# data cases
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
# rule boundary lines
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel('Estimated Budget (1000$)')
plt.ylabel('A-List Celebrities')
clfm1 = M1Algorithm(rules, transactionDB).build()
fig, ax = plt.subplots(figsize=(40, 24))
for idx, r in enumerate(clfm1.rules):
plt.subplot(3, 4, idx + 1)
for rule in clfm1.rules[:idx+1]:
plot_rule(rule, plt)
#plot_rule(r, plt)
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("Krok {}".format(idx + 1), fontsize=40)
plt.savefig("../data/m1_rules.png")
len(clfm1.rules)
m2 = M2Algorithm(rules, transactionDB)
clfm2 = m2.build()
fig, ax = plt.subplots(figsize=(40, 16))
for idx, r in enumerate(clfm2.rules):
plt.subplot(2, 5, idx + 1)
for rule in clfm2.rules[:idx+1]:
plot_rule(rule, plt)
for i in range(len(x_points)):
plt.scatter(x_points[i], y_points[i], marker=appearance[data_class[i]][1], color=appearance[data_class[i]][0], s=60)
for i, n in enumerate(x):
plt.axhline(y=y[i], color = "grey", linestyle="dashed")
plt.axvline(x=x[i], color = "grey", linestyle="dashed")
plt.xlabel("Krok {}".format(idx + 1), fontsize=40)
plt.savefig("../data/m2_rules.png")
len(clfm2.rules)
clfm2.inspect().to_csv("../data/rulesframe.csv")
import sklearn.metrics as skmetrics
m1pred = clfm1.predict_all(transactionDB)
m2pred = clfm2.predict_all(transactionDB)
actual = transactionDB.classes
m1acc = skmetrics.accuracy_score(m1pred, actual)
m2acc = skmetrics.accuracy_score(m2pred, actual)
print("m1 acc", m1acc)
print("m2 acc", m2acc)
clfm1.rules == clfm2.rules
```
```
import os
import cv2
import math
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, fbeta_score
from keras import optimizers
from keras import backend as K
from keras.models import Sequential, Model
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, EarlyStopping
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
train = pd.read_csv('../input/imet-2019-fgvc6/train.csv')
labels = pd.read_csv('../input/imet-2019-fgvc6/labels.csv')
test = pd.read_csv('../input/imet-2019-fgvc6/sample_submission.csv')
train["attribute_ids"] = train["attribute_ids"].apply(lambda x:x.split(" "))
train["id"] = train["id"].apply(lambda x: x + ".png")
test["id"] = test["id"].apply(lambda x: x + ".png")
print('Number of train samples: ', train.shape[0])
print('Number of test samples: ', test.shape[0])
print('Number of labels: ', labels.shape[0])
display(train.head())
display(labels.head())
```
### Model parameters
```
# Model parameters
BATCH_SIZE = 128
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 299
WIDTH = 299
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 3
DECAY_DROP = 0.5
DECAY_EPOCHS = 10
def f2_score_thr(threshold=0.5):
def f2_score(y_true, y_pred):
beta = 2
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))
return f2_score
def custom_f2(y_true, y_pred):
beta = 2
tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
p = tp / (tp + fp + K.epsilon())
r = tp / (tp + fn + K.epsilon())
f2 = (1+beta**2)*p*r / (p*beta**2 + r + 1e-15)
return f2
def step_decay(epoch):
initial_lrate = LEARNING_RATE
drop = DECAY_DROP
epochs_drop = DECAY_EPOCHS
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
train_datagen=ImageDataGenerator(rescale=1./255, validation_split=0.25)
train_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
target_size=(HEIGHT, WIDTH),
subset='training')
valid_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
target_size=(HEIGHT, WIDTH),
subset='validation')
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe=test,
directory = "../input/imet-2019-fgvc6/test",
x_col="id",
target_size=(HEIGHT, WIDTH),
batch_size=1,
shuffle=False,
class_mode=None)
```
### Model
```
def create_model(input_shape, n_out):
input_tensor = Input(shape=input_shape)
base_model = applications.Xception(weights=None, include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/xception/xception_weights_tf_dim_ordering_tf_kernels_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
# warm up model
# first: train only the top layers (which were randomly initialized)
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
layer.trainable = False
for i in range(-5,0):
model.layers[i].trainable = True
optimizer = optimizers.Adam(lr=LEARNING_RATE)
metrics = ["accuracy", "categorical_accuracy"]
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)
callbacks = [es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
```
#### Train top layers
```
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
#### Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
metrics = ["accuracy", "categorical_accuracy"]
lrate = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=(ES_PATIENCE))
callbacks = [es]
optimizer = optimizers.Adam(lr=0.0001)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
### Complete model graph loss
```
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### Find best threshold value
```
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(valid_generator)
scores = model.predict(im, batch_size=valid_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
print(lastFullValPred.shape, lastFullValLabels.shape)
def find_best_fixed_threshold(preds, targs, do_plot=True):
score = []
thrs = np.arange(0, 0.5, 0.01)
for thr in thrs:
score.append(custom_f2(targs, (preds > thr).astype(int)))
score = np.array(score)
pm = score.argmax()
best_thr, best_score = thrs[pm], score[pm].item()
print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
if do_plot:
plt.plot(thrs, score)
plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14);
plt.show()
return best_thr, best_score
threshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)
```
### Apply model to test set and output predictions
```
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = []
for pred_ar in preds:
valid = []
for idx, pred in enumerate(pred_ar):
if pred > threshold:
valid.append(idx)
if len(valid) == 0:
valid.append(np.argmax(pred_ar))
predictions.append(valid)
filenames = test_generator.filenames
label_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}
results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))
results["attribute_ids"] = results["attribute_ids"].apply(lambda x: ' '.join(x))
results.to_csv('submission.csv',index=False)
results.head(10)
```
# Training with SageMaker Pipe Mode and TensorFlow using the SageMaker Python SDK
SageMaker Pipe Mode is an input mechanism for SageMaker training containers based on Linux named pipes. SageMaker makes the data available to the training container using named pipes, which allows data to be downloaded from S3 to the container while training is running. For larger datasets, this dramatically improves the time to start training, as the data does not need to be first downloaded to the container. To learn more about pipe mode, please consult the AWS documentation at: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html#your-algorithms-training-algo-running-container-trainingdata.
In this tutorial, we show you how to train a tf.estimator using data read with SageMaker Pipe Mode. We'll use the SageMaker `PipeModeDataset` class - a special TensorFlow `Dataset` built specifically to read from SageMaker Pipe Mode data. This `Dataset` is available in our TensorFlow containers for TensorFlow versions 1.7.0 and up. It's also open-sourced at https://github.com/aws/sagemaker-tensorflow-extensions and can be built into custom TensorFlow images for use in SageMaker.
Although you can also build the PipeModeDataset into your own containers, in this tutorial we'll show how you can use the PipeModeDataset by launching training from the SageMaker Python SDK. The SageMaker Python SDK helps you deploy your models for training and hosting in optimized, production ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow and MXNet.
Different collections of S3 files can be made available to the training container while it's running. These are referred to as "channels" in SageMaker. In this example, we use two channels - one for training data and one for evaluation data. Each channel is mapped to S3 files from different directories. The SageMaker PipeModeDataset knows how to read from the named pipes for each channel given just the channel name. When we launch SageMaker training we tell SageMaker what channels we have and where in S3 to read the data for each channel.
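Concretely, each channel epoch surfaces inside the training container as a named pipe - by convention at `/opt/ml/input/data/<channel>_<epoch>` - so any code that can read a file can stream it. `PipeModeDataset` handles this for you; the sketch below (an illustration under that path assumption, not something you need to write yourself) shows the raw mechanism:

```python
import os

def stream_channel(channel, epoch, base_dir="/opt/ml/input/data", chunk_size=1 << 20):
    """Sketch of how a container consumes a Pipe Mode channel.

    Assumption: SageMaker exposes each epoch of a channel as a named
    pipe at <base_dir>/<channel>_<epoch>; reading that pipe streams the
    channel's S3 data without a prior download.
    """
    path = os.path.join(base_dir, f"{channel}_{epoch}")
    with open(path, "rb") as fifo:
        while True:
            chunk = fifo.read(chunk_size)
            if not chunk:  # end of pipe: the epoch's data is exhausted
                break
            yield chunk
```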
## Setup
The following code snippet sets up some variables we'll need later on. Please provide an S3 bucket that a TensorFlow training script and training output can be stored in.
```
from sagemaker import get_execution_role
#Bucket location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://<bucket-name>/customcode/tensorflow_pipemode'
#Bucket location where results of model training are saved.
model_artifacts_location = 's3://<bucket-name>/artifacts'
#IAM execution role that gives SageMaker access to resources in your AWS account.
role = get_execution_role()
```
## Complete training source code
In this tutorial we train a TensorFlow LinearClassifier using pipe mode data. The TensorFlow training script is contained in following file:
```
!cat "pipemode.py"
```
### Using a PipeModeDataset in an input_fn
To train an estimator using a Pipe Mode channel, we must construct an input_fn that reads from the channel. To do this, we use the SageMaker PipeModeDataset. This is a TensorFlow Dataset specifically created to read from a SageMaker Pipe Mode channel. A PipeModeDataset is a fully-featured TensorFlow Dataset and can be used in exactly the same ways as a regular TensorFlow Dataset can be used.
The training and evaluation data used in this tutorial is synthetic. It contains a series of records stored in a TensorFlow Example protobuf object. Each record contains a numeric class label and an array of 1024 floating point numbers. Each array is sampled from a multi-dimensional Gaussian distribution with a class-specific mean. This means it is possible to learn a model using a TensorFlow Linear classifier which can classify examples well. Each record is separated using RecordIO encoding (though the PipeModeDataset class also supports the TFRecord format).
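The RecordIO framing referred to here is, as we understand it, the MXNet-style encoding: each record is a 4-byte magic number, a 4-byte header whose low 29 bits carry the payload length, the payload itself, then zero padding to a 4-byte boundary. A minimal reader sketch under that assumption (a hypothetical helper, not part of `sagemaker_tensorflow`):

```python
import struct

MAGIC = 0xCED7230A  # MXNet-style RecordIO magic number (assumed)

def iter_recordio(stream):
    """Yield payloads from an MXNet-style RecordIO byte stream."""
    while True:
        header = stream.read(8)
        if len(header) < 8:           # end of stream
            return
        magic, lrecord = struct.unpack("<II", header)
        if magic != MAGIC:
            raise ValueError("corrupt RecordIO stream")
        length = lrecord & ((1 << 29) - 1)  # low 29 bits = payload length
        payload = stream.read(length)
        stream.read((-length) % 4)          # skip zero padding to 4-byte boundary
        yield payload
```

In practice you never parse this yourself - `PipeModeDataset` does the de-framing - but it clarifies what "records separated using RecordIO encoding" means.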
The training and evaluation data were produced using the benchmarking source code in the sagemaker-tensorflow-extensions benchmarking sub-package. If you want to investigate this further, please visit the GitHub repository for sagemaker-tensorflow-extensions at https://github.com/aws/sagemaker-tensorflow-extensions.
The following example code shows how to use a PipeModeDataset in an input_fn.
```
from sagemaker_tensorflow import PipeModeDataset
def input_fn():
# Simple example data - a labeled vector.
features = {
'data': tf.FixedLenFeature([], tf.string),
'labels': tf.FixedLenFeature([], tf.int64),
}
# A function to parse record bytes to a labeled vector record
def parse(record):
parsed = tf.parse_single_example(record, features)
return ({
'data': tf.decode_raw(parsed['data'], tf.float64)
}, parsed['labels'])
# Construct a PipeModeDataset reading from a 'training' channel, using
# the TF Record encoding.
ds = PipeModeDataset(channel='training', record_format='TFRecord')
# The PipeModeDataset is a TensorFlow Dataset and provides standard Dataset methods
ds = ds.repeat(20)
ds = ds.prefetch(10)
ds = ds.map(parse, num_parallel_calls=10)
ds = ds.batch(64)
return ds
```
# Running training using the Python SDK
We can use the SDK to run our local training script on SageMaker infrastructure.
1. Pass the path to the pipemode.py file, which contains the functions for defining your estimator, to the sagemaker.TensorFlow init method.
2. Pass the S3 location that we uploaded our data to previously to the fit() method.
```
from sagemaker.tensorflow import TensorFlow

tensorflow = TensorFlow(entry_point='pipemode.py',
                        role=role,
                        input_mode='Pipe',
                        output_path=model_artifacts_location,
                        code_location=custom_code_upload_location,
                        train_instance_count=1,
                        training_steps=1000,
                        evaluation_steps=100,
                        train_instance_type='ml.c4.xlarge')
```
After we've created the SageMaker Python SDK TensorFlow object, we can call fit to launch TensorFlow training:
```
%%time
import boto3
# use the region-specific sample data bucket
region = boto3.Session().region_name
train_data = 's3://sagemaker-sample-data-{}/tensorflow/pipe-mode/train'.format(region)
eval_data = 's3://sagemaker-sample-data-{}/tensorflow/pipe-mode/eval'.format(region)
tensorflow.fit({'train':train_data, 'eval':eval_data})
```
After ``fit`` returns, you've successfully trained a TensorFlow LinearClassifier using SageMaker pipe mode! The TensorFlow model data will be stored in '``s3://<bucket-name>/artifacts``' - where '``<bucket-name>``' is the name of the bucket you supplied earlier.
# Welcome to BlazingSQL Notebooks!
BlazingSQL Notebooks is a fully managed, high-performance JupyterLab environment.
**No setup required.** You just log in and start writing code immediately.
Every Notebooks environment has:
- An attached CUDA GPU
- Pre-Installed GPU Data Science Packages ([BlazingSQL](https://github.com/BlazingDB/blazingsql), [RAPIDS](https://github.com/rapidsai), [Dask](https://github.com/dask), and many more)
Start running GPU-accelerated code below!
## The GPU DataFrame
The RAPIDS ecosystem is built on the concept of a shared GPU DataFrame, based on [Apache Arrow](http://arrow.apache.org/), that all of the different libraries and packages can exchange. This was achieved with the `cudf.DataFrame`.
There are two libraries specific to data manipulation:
- **BlazingSQL**: SQL commands on a `cudf.DataFrame`
- **cuDF**: pandas-like commands on a `cudf.DataFrame`
### BlazingSQL (BSQL)
[GitHub](https://github.com/BlazingDB/blazingsql) | [Intro Notebook](intro_notebooks/the_dataframe.ipynb)
BlazingSQL is a distributed SQL engine built on top of cuDF. Easily run SQL on files and DataFrames.
We start with a BlazingContext, which acts like a session of the SQL engine.
```
import dask
from dask.distributed import Client
dask_scheduler_ip_port = '172.31.12.105:8786'
client = Client(dask_scheduler_ip_port)
client
from blazingsql import BlazingContext
network_interface = 'ens5'
bc = BlazingContext(dask_client=client, network_interface=network_interface)
```
With `.create_table('table_name', 'file_path')` you can create tables from many formats. Here we infer the schema from a CSV file.
```
bc.create_table('taxi', 'data/sample_taxi.csv', header=0)
```
Now, we can run a SQL query directly on that CSV file with `.sql()`.
```
bc.sql('SELECT * FROM taxi').compute()
```
Learn more about [creating](https://docs.blazingdb.com/docs/creating-tables) and [querying](https://docs.blazingdb.com/docs/single-gpu) BlazingSQL tables, or the [BlazingContext API](https://docs.blazingdb.com/docs/methods-arguments).
BlazingSQL returns each query's results as a cuDF DataFrame, making for easy handoff to GPU or non-GPU solutions.
```
type(bc.sql('select * from taxi limit 10'))
```
### cuDF
[GitHub](https://github.com/rapidsai/cudf) | [Intro Notebook](intro_notebooks/the_dataframe.ipynb)
cuDF is a GPU DataFrame Library similar to pandas.
```
import cudf
s = cudf.Series([3, 5, 0.01, None, 4])
s
```
You can make a `cudf.DataFrame` from a SQL statement, each column being a `cudf.Series`.
```
df = bc.sql('select * from taxi where trip_distance < 10')
```
Utilize DataFrame methods like `.head()`, `.tail()`, or `.describe()`.
```
df.tail(2)
df.describe().compute()
```
You can also filter cuDF DataFrames just like pandas DataFrames.
```
df.loc[(df['passenger_count'] != 1) & (df['trip_distance'] < 10)].compute()
```
To ensure interoperability, you can also easily convert from cuDF to pandas with `.to_pandas()`. This grants you access to all pandas methods, in this example, `.sample()`.
```
df.compute().to_pandas().sample(3)
```
Learn more about [BlazingSQL + cuDF](intro_notebooks/the_dataframe.ipynb).
## Data Visualization
cuDF DataFrames easily plug into current and GPU-accelerated visualization.
### Matplotlib
[GitHub](https://github.com/matplotlib/matplotlib) | [Intro Notebook](intro_notebooks/data_visualization.ipynb#Matplotlib)
Calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and hand off to Matplotlib or other CPU visualization packages.
```
bc.sql('SELECT passenger_count, tip_amount FROM taxi').compute().to_pandas().plot(kind='scatter', x='passenger_count', y='tip_amount')
```
### Datashader
[GitHub](https://github.com/holoviz/datashader) | [Intro Notebook](intro_notebooks/data_visualization.ipynb#Datashader)
Datashader is a data rasterization pipeline for automating the process of creating meaningful representations of large amounts of data.
Datashader is one of the first visualization tools to support GPU DataFrames, so we can directly pass in `cudf.DataFrame` query results.
```
from datashader import Canvas, transfer_functions
from colorcet import fire
```
We execute and pass a query as a GPU DataFrame to datashader to render taxi dropoff locations.
```
nyc = Canvas().points(bc.sql('SELECT dropoff_x, dropoff_y FROM taxi').compute(), 'dropoff_x', 'dropoff_y')
transfer_functions.set_background(transfer_functions.shade(nyc, cmap=fire), "black")
```
## Machine Learning
### cuML
[GitHub](https://github.com/rapidsai/cuml) | [Intro Notebook](intro_notebooks/machine_learning.ipynb)
cuML is a GPU-accelerated machine learning library with an API similar to scikit-learn.
Let's predict the fare amount of the `taxi` table we've been querying, using a linear regression model.
```
%%time
from cuml.linear_model import LinearRegression
from cuml.preprocessing.model_selection import train_test_split
```
Pull feature (X) and target (y) values
```
X = bc.sql('SELECT trip_distance, tolls_amount, pickup_x, pickup_y, dropoff_x, dropoff_y FROM taxi')
y = bc.sql('SELECT fare_amount FROM taxi')['fare_amount']
```
Split data into train and test sets (80:20)
```
X_train, X_test, y_train, y_test = train_test_split(X.compute(), y.compute(), train_size=0.8)
```
Run a Linear Regression Model.
```
%%time
# call Linear Regression model
lr = LinearRegression(fit_intercept=True, normalize=True)
# train the model
lr.fit(X_train, y_train)
# make predictions for test X values
y_pred = lr.predict(X_test)
```
Evaluate the model's predictions with cuML's `r2_score` and `mean_absolute_error`.
```
import cuml.metrics.regression as cuml_reg
print(f'R-squared: {cuml_reg.r2_score(y_test,y_pred):.3}')
print(f'MAE: {float(cuml_reg.mean_absolute_error(y_test,y_pred)):.3}')
```
## That is the Quick Tour!
There are in fact many more packages that are integrating the GPU DataFrame, and therefore providing interoperability with the rest of the stack.
Some of those not mentioned here are:
- **cuGraph**: a graph analytics library similar to NetworkX
- **cuSignal**: a signal analytics library similar to SciPy Signal
- **CLX**: a collection of cyber security use cases with the RAPIDS stack
[Continue to The DataFrame introductory Notebook](intro_notebooks/the_dataframe.ipynb)
# Quickstart
In this example, we'll build an implicit feedback recommender using the Movielens 100k dataset (http://grouplens.org/datasets/movielens/100k/).
The code behind this example is available as a [Jupyter notebook](https://github.com/lyst/lightfm/tree/master/examples/quickstart/quickstart.ipynb)
LightFM includes functions for getting and processing this dataset, so obtaining it is quite easy.
```
import numpy as np
from lightfm.datasets import fetch_movielens
data = fetch_movielens(min_rating=5.0)
```
This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise.
We have two such matrices, a training and a testing set. Both have around 1000 users and 1700 items. We'll train the model on the train matrix but test it on the test matrix.
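To see concretely what these matrices look like, here is a small illustrative sketch (not part of LightFM) that builds a tiny user-item interaction matrix of the same kind:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Rows are users, columns are items; a 1 marks a positive interaction,
# and absent entries are implicit zeros.
user_ids = np.array([0, 0, 1, 2])
item_ids = np.array([1, 3, 2, 0])
values = np.ones_like(user_ids)

interactions = coo_matrix((values, (user_ids, item_ids)), shape=(3, 4))
print(repr(interactions))
```

`fetch_movielens` returns matrices of this sparse type, only with ~1000 users and ~1700 items.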
```
print(repr(data['train']))
print(repr(data['test']))
```
We need to import the model class to fit the model:
```
from lightfm import LightFM
```
We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with are treated as items they implicitly do not like. The goal of the model is to score these implicit positives highly while assigning low scores to implicit negatives.
Model training is accomplished via SGD (stochastic gradient descent). This means that for every pass through the data --- an epoch --- the model learns to fit the data more and more closely. We'll run it for 30 epochs in this example. We can also run it on multiple cores, so we'll set the number of threads to 2. (The dataset in this example is too small for that to make a difference, but it will matter on bigger datasets.)
```
model = LightFM(loss='warp')
%time model.fit(data['train'], epochs=30, num_threads=2)
```
Done! We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of top k items in the ranking the user has actually interacted with. `lightfm` implements a number of metrics in the `evaluation` module.
```
from lightfm.evaluation import precision_at_k
```
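For intuition, precision@k for a single user could be computed like this (a toy illustration, not LightFM's implementation):

```python
def precision_at_k_single(ranked_items, relevant_items, k=5):
    """Fraction of the top-k ranked items the user actually interacted with."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    return hits / k

# Items 1 (appearing twice) and 5 are known positives, so 3 of the top 5 hit.
print(precision_at_k_single([3, 1, 4, 1, 5], {1, 5, 9}))  # → 0.6
```

LightFM's `precision_at_k` does the same computation for every user at once and returns an array you can average.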
We'll measure precision in both the train and the test set.
```
print("Train precision: %.2f" % precision_at_k(model, data['train'], k=5).mean())
print("Test precision: %.2f" % precision_at_k(model, data['test'], k=5).mean())
```
Unsurprisingly, the model fits the train set better than the test set.
For an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for a given user, we pass the id of that user and the ids of all products we want predictions for into the `predict` method.
```
def sample_recommendation(model, data, user_ids):
    n_users, n_items = data['train'].shape

    for user_id in user_ids:
        known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices]

        scores = model.predict(user_id, np.arange(n_items))
        top_items = data['item_labels'][np.argsort(-scores)]

        print("User %s" % user_id)
        print("    Known positives:")
        for x in known_positives[:3]:
            print("        %s" % x)

        print("    Recommended:")
        for x in top_items[:3]:
            print("        %s" % x)

sample_recommendation(model, data, [3, 25, 450])
```
```
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```
# EDA for the first time
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import scipy.stats as stats
```
# Iris dataset - Read the dataset from a file using Pandas
```
filename = "data/iris-data.csv"
df = pd.read_csv(filename, sep='\t')
df.head()
```
**Some problem?** Yes, the data are not in columns as we expected.
```
%%bash
head data/iris-data.csv
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
df = pd.read_csv(filename, sep=',')
df.head()
```
## Identify problems in data
**The word `class` as a column name ... hm ... Is it one of the reserved Keywords in Python?**
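One quick way to answer that (an illustrative aside) is Python's standard `keyword` module:

```python
import keyword

# 'class' is a reserved keyword, which is why attribute-style access like
# df.class is a syntax error; 'species' is not reserved.
print(keyword.iskeyword('class'))    # → True
print(keyword.iskeyword('species'))  # → False
```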
```
df.class.unique()
```
**How can I write such code correctly?**
```
df['class'].unique()
```
**Rename the `class` column?**
```
df.rename(columns = {'class':'species'}, inplace = True)
df.species.unique()
```
**Strange values that look like human mistakes? Rename them? Be careful: this kind of *replace* operation can be dangerous**
```
df['species'] = df['species'].str.replace('Iris-setossa','setosa')
df['species'] = df['species'].str.replace('Iris-setosa','setosa')
df['species'] = df['species'].str.replace('Iris-versicolor','versicolor')
df['species'] = df['species'].str.replace('Iris-virginica','virginica')
df.species.unique()
```
**Shorter column names?**
```
df = df.rename({'sepal_length_cm': 'sepal_length', 'sepal_width_cm': 'sepal_width', 'petal_length_cm': 'petal_length', 'petal_width_cm': 'petal_width'}, axis='columns')
df
df.groupby('species').size()
```
**Missing values? NaN values?**
```
df.shape[0]
df.dropna().shape[0]
df.shape[0] - df.dropna().shape[0]
df.isnull().sum()
df.isnull().sum().sum()
df[df.isnull().any(axis=1)]
```
## Save the dataframe to a file ...
```
df.to_csv('data/iris-data-output.tsv', sep='\t', index=False, encoding='utf-8')
```
# Visualization: Iris dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import scipy.stats as stats
iris = sns.load_dataset("iris")
```
### Describe the data together with their characteristics = Descriptive statistics
```
iris.shape
print(iris.head(10))
iris.info()
iris.describe()
iris.species.unique()
iris.groupby('species').size()
```
**Univariate analysis** (Mean, Median, Mode, Variance, Standard Deviation)
```
iris['petal_length'].mean()
iris['petal_length'].median()
stats.mode(iris['petal_length'])
np.var(iris['petal_length'])
np.std(iris['petal_length'])
```
### Formulate and verify data hypotheses = Data visualization + inference statistics
```
sns.boxplot(data=iris, x="sepal_length", y="species")
sns.boxplot(data=iris, x="petal_length", y="species")
iris.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
iris.hist()
# sns.distplot(iris['petal_length'], bins=10)
sns.displot(iris['petal_length'], bins=10)
# sns.distplot(iris['petal_width'], bins=10)
sns.histplot(iris['petal_width'], bins=10)
```
### Identify relationships between attributes = Dependencies, e.g. correlations
**Bivariate analysis**
```
sns.scatterplot(data=iris, x='petal_length', y='petal_width')
sns.regplot(x="petal_length", y="petal_width", data=iris)
print("Pearson correlation: %.3f" % iris.petal_length.corr(iris.petal_width))
iris.corr()
sns.pairplot(iris, hue="species")
fig, ax = plt.subplots(figsize=(10,8))
sns.heatmap(iris.corr(), ax=ax, annot=True, fmt=".3f")
sns.set(rc={'figure.figsize':(36,8)})
sns.violinplot(data=iris, x='sepal_length', y='sepal_width', hue="species")
```
### Identify problems in data = Data preprocessing
**Remove missing values?**
```
iris.shape[0]
iris.dropna().shape[0]
iris.shape[0] - iris.dropna().shape[0]
```
**Empty rows?**
```
iris.isnull()
iris[iris.isnull().any(axis=1)]
```
# Visualization: Tips dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import scipy.stats as stats
tips = sns.load_dataset("tips")
print(tips.shape)
```
**Your code:**
```
import astropy
import astroplan
from astroplan import Observer, FixedTarget
from astroplan.constraints import AtNightConstraint, AirmassConstraint, MoonSeparationConstraint, TimeConstraint
from astroplan import ObservingBlock
from astroplan import Transitioner
from astroplan.scheduling import Schedule
from astroplan.scheduling import SequentialScheduler
from astropy.coordinates import SkyCoord
from astroplan.scheduling import PriorityScheduler
from astroplan.plots import plot_schedule_airmass
from astropy.time import Time
import matplotlib.pyplot as plt
import astropy.units as u
#I am following along with this tutorial just to figure out what is going on:
#https://astroplan.readthedocs.io/en/latest/tutorials/scheduling.html
#I looked at the site available on astroplan
astropy.coordinates.EarthLocation.get_site_names()
#from this list, it looks like we need to add Bigelow and Lemmon.
#One can submit a pull request to the astropy-data repository
#Look under get_site_names here:
#http://docs.astropy.org/en/stable/api/astropy.coordinates.EarthLocation.html#astropy.coordinates.EarthLocation.get_site_names
kpno = Observer.at_site('kpno')
M31 = FixedTarget.from_name('M31')
print(M31)
SN2019np = FixedTarget(coord=SkyCoord(ra=157.341500*u.deg, dec=29.510667*u.deg),name='SN2019np')
print(SN2019np)
noon_before = Time('2019-02-19 19:00')
noon_after = Time('2019-02-20 19:00')
# create the list of constraints that all targets must satisfy
#global_constraints = [AirmassConstraint(max = 2.5, boolean_constraint = False),
# AtNightConstraint.twilight_civil(), MoonSeparationConstraint(min=30.*u.deg)]
# create the list of constraints that all targets must satisfy
global_constraints = [AirmassConstraint(max = 2.5, boolean_constraint = False),
AtNightConstraint.twilight_civil()]
read_out = 35 * u.second
M31_exp = 15.*u.second
SN2019np_exp = 300.*u.second
M31_n = 5
SN2019np_n = 8
blocks = []
night_start = Time('2019-02-20 02:37')
night_end = Time('2019-02-20 12:43')
tonight = TimeConstraint(night_start, night_end)
# Create ObservingBlocks for each filter and target with our time
# constraint, and durations determined by the exposures needed
for priority, bandpass in enumerate(['U', 'B', 'V', 'R', 'I']):
# We want each filter to have separate priority (so that target
# and reference are both scheduled)
#DS note: We will want to change this at some point.
b = ObservingBlock.from_exposures(M31, priority, M31_exp, M31_n, read_out,
configuration = {'filter': bandpass},
constraints = [tonight])
blocks.append(b)
b = ObservingBlock.from_exposures(SN2019np, priority, SN2019np_exp, SN2019np_n, read_out,
configuration = {'filter': bandpass},
constraints = [tonight])
blocks.append(b)
# Initialize a transitioner object with the slew rate and/or the
# duration of other transitions (e.g. filter changes)
slew_rate = .8*u.deg/u.second
transitioner = Transitioner(slew_rate,
{'filter':{('B','V'): 10*u.second,
('V','R'): 10*u.second,
'default': 30*u.second}})
print('hello')
seq_scheduler = SequentialScheduler(constraints = global_constraints,
observer = kpno,
transitioner = transitioner)
#sequential_schedule = Schedule(noon_before, noon_after)
#seq_scheduler(blocks, sequential_schedule)
print('yoyo')
# Initialize the priority scheduler with the constraints and transitioner
prior_scheduler = PriorityScheduler(constraints = global_constraints,
observer = kpno,
transitioner = transitioner)
# Initialize a Schedule object, to contain the new schedule
priority_schedule = Schedule(noon_before, noon_after)
# Call the schedule with the observing blocks and schedule to schedule the blocks
prior_scheduler(blocks, priority_schedule)
priority_schedule.to_table()
# plot the schedule with the airmass of the targets
plt.figure(figsize = (14,6))
plot_schedule_airmass(priority_schedule)
plt.legend(loc = "upper right")
plt.show()
```
# In-depth example
In this notebook we will be going through an example of running ``imcascade`` in a realistic setting and discussing issues along the way
```
#Load all necessary packages
import numpy as np
import matplotlib.pyplot as plt
import time
import sep
import astropy.units as u
from astropy.coordinates import SkyCoord
```
For this example we will be running ``imcascade`` on HSC data for a MW-mass galaxy at z=0.25. The data is obtained using the ``unagi`` python package available [here](https://github.com/dr-guangtou/unagi), written by Song Huang. The cell below is used to download the data and the PSF
```
from unagi import hsc
from unagi.task import hsc_psf,hsc_cutout
pdr2 = hsc.Hsc(dr='pdr2',rerun = 'pdr2_wide')
#Downloaded from HSC archive, this a MW mass galaxy at z~0.25 at the sky location below
ra,dec = 219.36054754*u.deg, -0.40994375*u.deg
examp_coord = SkyCoord(ra = ra, dec = dec)
cutout = hsc_cutout(examp_coord, cutout_size=20*u.arcsec, filters='i', archive = pdr2, dr = 'pdr2', verbose=True, variance=True, mask=True, save_output = False)
psf = hsc_psf(examp_coord, filters='i', archive=pdr2, save_output = False)
#Retrieve science and variance images
img = cutout[1].data.byteswap().newbyteorder()
var = cutout[3].data.byteswap().newbyteorder()
psf_data = psf[0].data
```
## Setting up
### Modelling the PSF
To use ``imcascade`` while accounting for the PSF, you need to have a Gaussian decomposition of the PSF. While this is available for some surveys, you can use the ``imcascade.psf_fitter`` module to help if you have a pixelized version.
The following function first fits the PSF profile in 1D to decide what the best widths are. Then a 2D fit is used to find the correct weights
```
from imcascade.psf_fitter import PSFFitter
psf_fitter = PSFFitter(psf_data)
psf_sig,psf_a,chi2, fig = psf_fitter.fit_N(3, plot = True) # Can choose number
print (psf_sig,psf_a)
plt.show()
```
We can see that a model with three gaussians provides a pretty good fit! Generally we find 2-3 components works well for standard ground based telescopes, and for more complicated PSFs, like HST WFC3, we find 4 works well. There is some incentive to use a small number of gaussians to define the PSF as it decreases the time to render a model image. Additionally it is good to check that the sum of the weights, ``psf_a``, is close to one. This ensures the PSF, and therefore the fit, are properly normalized
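That normalization check is a one-liner; for example (with hypothetical weights standing in for the fitted `psf_a`):

```python
import numpy as np

psf_a = np.array([0.52, 0.38, 0.09])  # placeholder weights for illustration
total = float(np.sum(psf_a))
print(f"sum of PSF weights: {total:.3f}")  # should be close to 1
```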
### Organizing all the inputs
First let's take a quick look at the science and variance images. We will be fitting a model to the science image and the inverse of the variance image will be used as the pixel weights when fitting
```
fig, (ax1,ax2) = plt.subplots(1,2, figsize = (10,5))
ax1.imshow(img, vmin = -0.1, vmax = 0.2)
ax1.set_title('Science Image')
ax2.imshow(var,vmin = 0, vmax = 0.005)
ax2.set_title('Variance Image')
plt.show()
```
Additionally we will be building a mask to mask contaminating sources that we don't want affecting the fit
```
# Use sep to detect sources
bkg = sep.Background(img)
x_cent,y_cent = int(img.shape[0]/2.) , int(img.shape[1]/2.)
obj,seg = sep.extract(img - bkg.back(), 1.5, err = np.sqrt(var), deblend_cont = 0.005,segmentation_map = True)
seg[np.where(seg == seg[x_cent,y_cent])] = 0
mask_raw = seg > 0
#Convolve mask with a gaussian to expand it and make sure that all low-SB emission is masked
from imcascade.utils import expand_mask
mask = expand_mask(mask_raw, radius = 1.5)
mask = np.array(mask, dtype = bool)
fig, (ax1,ax2) = plt.subplots(1,2, figsize = (10,5))
ax1.imshow(mask, vmin = -0.1, vmax = 0.2)
ax1.set_title('Mask')
ax2.imshow(img*np.logical_not(mask),vmin = -0.01, vmax = 0.02)
ax2.set_title('Masked, Streched, Science Image')
plt.show()
```
### Choosing the widths for the Gaussian components
The next major decision is what set of widths to use for the Gaussian components. In general we recommend logarithmically spaced widths. This places more components at smaller radii, where the signal is largest and the profile changes the quickest. asinh scaling can also work.
Next we have to choose start and end points. These should run from 0.5-1 pixel (or half the PSF width) to roughly 10 times the effective radius. The estimate of the effective radius does not need to be accurate; for example, the Kron radius from sep or SExtractor works well. This should help decide the size of the cutout too. In order to properly model the sky, the cutout size should be at least 3-4 times larger than the largest width, i.e. 30-40 times the effective radius.
Finally we have to choose the number of components. In our testing, somewhere around 9-11 components works well.
.. note::
These decisions are not trivial and can affect the outcome of an ``imcascade`` fit. However reasonable changes within the confines discussed here shouldn't greatly affect the results. You should run tests to ensure you have chosen a reliable set of widths. If the results are very sensitive to the choice of widths, you should be wary, as there may be other issues at play.
In this example we estimate the effective radius to be roughly 6 pixels so we use 9 components with logarithmically spaced widths from 1 pixels to 60 pixels (~10 x r eff) and use a cutout size of 240 pixels, roughly 40 times the effective radius.
```
sig = np.logspace(np.log10(1),np.log10(60), num = 9)
```
We can also specify initial conditions to help the fitter make initial guesses. Here we specify the estimated half-light radius and total flux. The code makes some intelligent guesses for the initial conditions and the bounds, but this may help ensure a quick and reliable fit. It is also possible to specify guesses and bounds for individual components, sky values, etc., but this is more involved. See the user's guide for more details
```
init_dict = {'re':6., 'flux': 1000.}
```
## Running ``imcascade``
### Least squares-minimization
To run imcascade we first need to initialize a ``Fitter`` instance with all the inputs discussed above. This class organizes all the data and contains all the methods used to fit the image
```
from imcascade import Fitter
fitter = Fitter(img,sig, psf_sig, psf_a, weight = 1./var, mask = mask, init_dict = init_dict)
```
Now we can run least squares minimization using the command below
```
min_res = fitter.run_ls_min()
## Let's take a quick look at the best-fit model to make sure it looks good
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize = (15,5))
best_fit_mod = fitter.make_model(min_res.x) #generate a model image using the best fitting parameters
#Calculate the residual image and mask out the other sources
resid = (fitter.img - best_fit_mod)/best_fit_mod
resid *= np.logical_not(fitter.mask)
ax1.set_title('Data')
ax1.imshow(np.log10(fitter.img*np.logical_not(mask)),vmin = -2, vmax =1)
ax2.set_title('Best Fit Model')
ax2.imshow(np.log10(best_fit_mod),vmin = -2, vmax =1)
ax3.set_title('Residual')
ax3.imshow(resid, vmin = -0.5, vmax = 0.5, cmap = 'seismic')
#ax3.set_xlim([50,150])
#ax3.set_ylim([150,50])
plt.show()
```
While there are some asymmetric features that can't be described by an axisymmetric model, the fit overall looks pretty good.
The best fit parameters can also be accessed,
```
fitter.min_res.x
```
They are, in order, $x_0$, $y_0$, the axis ratio, and the position angle. The next 9 values are the best-fit weights for the Gaussian components. Note that by default ``imcascade`` explores these in log scale. This can be changed by passing the option ``log_weight_scale = False`` when initializing. The final three parameters describe the best-fit tilted-plane sky model, which can be disabled when initializing with ``sky_model = False``.
These aren't super easy to parse as is, which is why we use the ``ImcascadeResults`` class discussed below.
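As a concrete sketch of that ordering, here is a made-up parameter vector unpacked by hand, assuming the default settings (log weight scale, tilted-plane sky model, 9 components). The values are purely illustrative:

```python
import numpy as np

# Hypothetical best-fit vector: 4 structural parameters, 9 log-weights,
# and 3 sky-model parameters, in the order described above.
params = np.concatenate([[120.0, 119.5, 0.8, 1.2],
                         np.linspace(1.0, 2.0, 9),
                         [0.001, 0.0, 0.0]])

x0, y0, axis_ratio, pos_angle = params[:4]
fluxes = 10 ** params[4:13]   # weights are explored in log scale by default
sky_params = params[13:]
```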
### Posterior estimation
Below we show the commands that could be used to explore the posterior distributions with Bayesian techniques. Specifically, we use the method discussed in the paper based on pre-rendered images. There are additional options when running ``dynesty`` through ``imcascade``. Specifically we offer two choices of priors: the default is based on the results of the least-squares minimization, the other is uniform priors. We found the former runs quicker and more reliably as the priors are not as broad. It is also possible to set your own priors, see the Advanced section for more details.
```python
> fitter.run_dynesty(method = 'express')
> fitter.save_results('./examp_results.asdf', run_basic_analysis = False)
```
This is much quicker than the traditional method. However it still took about 30 min to run on my laptop. So I have run it previously and we will load the saved data.
## Analyzing the results
Since the parameters that ``imcascade`` fits are the fluxes of each Gaussian component, the analysis is more involved than for other parameterized models, which fit directly for the total flux, radius, etc. To assist in this we have written the ``results`` module and the ``ImcascadeResults`` class. This class can be initialized in multiple ways. First, you can pass it a ``Fitter`` instance after running ``run_ls_min()`` and/or ``run_dynesty()``. Alternatively it can be passed a string denoting the location of an ASDF file of saved results
```
from imcascade import ImcascadeResults
#Initialized using fitter instance
ls_res = ImcascadeResults(fitter)
#Initialized with saved file
dyn_res = ImcascadeResults('examp_results.asdf')
```
``ImcascadeResults`` will default to using the posterior to derive morphological parameters if it is available. For ``ls_res``, since we did not run any Bayesian method, it will use the best-fit parameters. Note that without a Bayesian method no measurement errors will be displayed.
There are a number of functions we have written to calculate various morphological quantities, please see the API reference for all of them. For many applications, one can simply run ``run_basic_analysis()``, which calculates a series of common morphological quantities
```
#For ls_res we can see there are no error bars. By specifying a zpt, we can also calculate the total magnitude
ls_res.run_basic_analysis(zpt = 27)
#we can also specify the percentiles used to calculate the error bars, here we use the 5th-95th percentile
dyn_res.run_basic_analysis(zpt = 27, errp_lo = 5, errp_hi = 95)
```
In addition to these integrated quantities, we can calculate the surface brightness profile and curve-of-growth as a function of semi-major axis
```
#Sets of radii to calculate profiles at
rplot = np.linspace(0, 50, num = 200)
sbp_ls_tot = ls_res.calc_sbp(rplot)
# Using return_ind we return the contributions of each individual gaussian component
sbp_ls_ind = ls_res.calc_sbp(rplot,return_ind = True)
```
If you use ``dyn_res``, where the posterior distribution is available, it will return a 2D array containing the SBP of each sample of the posterior.
```
#Here we will calculate the posterior distribution of the surface brightness profile
sbp_all = dyn_res.calc_sbp(rplot)
print (sbp_all.shape)
#Here we calculate the curve-of-growth for the posterior
cog_all = dyn_res.calc_cog(rplot)
```
Now we plot both, the red lines show the individual components from the least squares solution and all the black lines show individual samples from the posterior
```
fig, (ax1,ax2) = plt.subplots(1,2, figsize = (12,5))
ax1.plot(rplot, sbp_all[:,::100], 'k-', alpha = 0.05)
ax1.plot(rplot, sbp_ls_ind, 'r--', alpha = 1.)
ax1.set_yscale('log')
ax1.set_ylim([5e-4,5e1])
ax1.set_title('Surface Brightness Profile')
ax1.set_xlabel('Radius (pixels)')
ax1.set_ylabel('Intensity')
ax2.plot(rplot,cog_all[:,::100], 'k-', alpha = 0.05)
ax2.set_title('Curve-of-growth')
ax2.set_xlabel('Radius (pixels)')
ax2.set_ylabel(r'$F(<R)$')
plt.show()
```
If you are interested in morphological quantity that is not included, it is likely that it will be easy to calculate and code up, so please contact us!